"No one is harder on a talented person than the person themselves" - Linda Wilkinson ; "Trust your guts and don't follow the herd" ; "Validate direction not destination" ;

May 29, 2014

Automation, Tools, QA

This is a continuation of my previous post on Automation. I wanted to write this post after reading the article Testing Trends…or not?

When I had to test an ecommerce portal across locales, automation was useful to validate the happy path: search, order, returns, etc. For specific fixes (perf improvements, tab size adjustments, zoom adjustments, alignments) I had to rely on manual validation. Automation can cover the positive functional flows. The pain point was that element IDs changed frequently with every release, so there was always a lag between the automation suite and the current production code. If you refer back to the article Stop Writing Automation, the author clearly lists the failure points that will not be covered by automation.
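As a rough illustration of that pain point, here is a minimal Selenium sketch in Python (the page URL, the auto-generated ID and the data-test attribute are my assumptions, not the portal I tested): locators tied to generated IDs break on every release, while locators tied to attributes the team controls stay stable.

# Hypothetical sketch: locating the "Add to cart" button on an ecommerce page.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://shop.example.com/product/123")  # placeholder URL

# Brittle: auto-generated IDs like this tend to change with every release,
# which is what kept the automation suite behind production.
add_button = driver.find_element(By.ID, "btn-4f2a9c")

# More stable: anchor on an attribute the application owns, such as a test hook.
add_button = driver.find_element(By.CSS_SELECTOR, "[data-test='add-to-cart']")

add_button.click()
driver.quit()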

Automation can be broadly classified into
  • BVT Tests
  • Regression Tests
  • Functional Tests
  • In-house Tools for perf tests, set-up
There has been a lot of focus on exploratory testing, context-driven testing, etc. My perspective is that the core of it lies in product-centric knowledge: the more you explore and learn about the system, the higher the chances of identifying critical bugs. Three things are essential for a QA role:
  • Product-centric knowledge (willingness to explore and master the domain / product)
  • Technical acumen (know how the system works functionally, and learn whenever possible)
  • Looking at the big picture and the failure points (here exploratory testing, context-driven testing and mind maps will help)
I have worked in DB Dev/Test and UI Test roles. DB has been more interesting and fascinating than UI :).

I prefer the SDET profile, a mix of code-level and functional tests, to relying purely on black-box tests. The rotational software engineer program at Microsoft is a very good example: fresh graduates spend six months each in Dev, Test, Support and Product Management, and depending on their interest they can finally pick the role of their choice. This model provides a complete picture of the release cycle. For every role you need to work with respect to the context. Also, the flexibility to adjust to different profiles gives you a broad range of skills and a better perspective on products and functions.


May 27, 2014

Reading Notes - Test Data Generator, Interesting Reads

Google Reader used to be my favourite feed subscription tool. After it was retired I tried Feedspot and Feedly; neither has provided the same experience, so I am still figuring out alternatives.

Today there were a few interesting reads.
Note #1 - mockaroo 
  • A live, web-based test data generator; looks promising for generating test data
  • Null percentage distribution, data type combinations, and multiple output formats (CSV, SQL, etc.)
  • Realistic data type distributions for a near match to real-world test data
  • A REST API exposed for automated data generation
This tool must have taken a lot of development time to cover intuitive, real-world test data cases. Nice work!!!

Ref - This post was how I found the mockaroo tool.
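A minimal sketch of pulling generated rows over that REST API, assuming the Python requests package, an API key and a saved schema named "Users" (the endpoint and parameter names should be checked against Mockaroo's API docs):

# Hypothetical sketch: fetch generated test data from Mockaroo's REST API.
import requests

API_KEY = "your-api-key"   # issued from your Mockaroo account
SCHEMA = "Users"           # a schema saved in the Mockaroo UI
URL = "https://api.mockaroo.com/api/generate.json"  # assumed endpoint

response = requests.get(URL, params={"key": API_KEY, "schema": SCHEMA, "count": 100})
response.raise_for_status()

rows = response.json()     # list of dicts, one per generated record
print(len(rows), "rows generated")
print(rows[0])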

Note #2 - More Test Data Generation Tools

Note #3 - Good infographic on the dev landscape 2014 (check out big data, NoSQL and DevOps)

More Reads
Test Data Map Design Pattern
Test Data Loader Design Pattern
JSON Generator, which allows you to generate JSON files from a template.
CSV Generator, which allows you to generate CSV files.

Happy Reading!!!

May 21, 2014

Learnings in performance testing (JBoss Tuning, Windows TCP/IP Changes)


Tuning JBoss performance involves increasing the RAM allotted to the JVM, the maximum number of threads to be used, and the TCP/IP connection limits. Below are good references for the same.
JBoss parameters tweaked

File: run.bat (JVM options)
  • "-Xms": the amount of memory committed to the JVM at initialization; the heap can grow up to the -Xmx size
  • "-Xmx": determines the maximum heap size reserved at JVM initialization
  • "-XX:MaxPermSize": the maximum size of the permanent generation (class metadata) space

File: ..\deploy\jboss-web.deployer\server.xml -> "<Connector>" tag
  • "maxThreads": the maximum number of request processing threads created by this Connector, which therefore determines the maximum number of simultaneous requests that can be handled; if not specified, this attribute is set to 200
  • "acceptCount": the maximum queue length for incoming connection requests when all possible request processing threads are in use
  • "connectionTimeout": the number of milliseconds this Connector will wait, after accepting a connection, for the request URI line to be presented

File: ..\conf\jboss-service.xml -> "org.jboss.util.threadpool.BasicThreadPool"
  • "KeepAliveTime": how long a thread will live without any tasks, in milliseconds
  • "MaximumPoolSize": the maximum number of threads in the pool
  • "MaximumQueueSize": the maximum number of tasks before the queue is full

File: ..\conf\jboss-service.xml -> "org.jboss.remoting.transport.Connector"
  • "numAcceptThreads": the number of threads that exist for accepting client connections
  • "maxPoolSize": the number of server threads for processing client requests
  • "clientMaxPoolSize": the client-side maximum number of active socket connections; this basically equates to the maximum number of concurrent client calls that can be made from the socket client invoker
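For illustration, a hedged sketch of what the web Connector tweak might look like in server.xml; the values are placeholders, not recommendations, and the exact attribute set depends on the JBoss version:

<!-- ..\deploy\jboss-web.deployer\server.xml : illustrative values only -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="400"
           acceptCount="150"
           connectionTimeout="20000" />

The JVM settings in run.bat follow the same idea, for example -Xms1024m -Xmx2048m -XX:MaxPermSize=256m (again illustrative values; size them from your own load tests).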


More Reads

Big List Of 20 Common Bottlenecks


Happy Learning!!!

May 17, 2014

RootConf Day #2 Notes

June 6th Update - All RootConf session videos are available at this link.

Today was Day #2 of RootConf. Some sessions were engaging: the content, presentation and connection with the audience were good. Some good learnings for a powerful presentation:
  • Creative quotes (similar to Quora answers with pics)
  • Tweets (correlating the context)
  • Movie stills with a modified subject + humour-related conversations
Some presentations / contexts will remain in our memory because of their impact / situation.
Tools
  • ejson (secret management)
  • Mesos for resource management
  • CoreOS - Linux for massive system deployments
  • Ansible - deployment + configuration management + continuous delivery
  • citoengine - alert management and automation tool
  • pacemaker - server-side exploitation software, Python based
  • Robot Framework for device automation
  • Linux profiling tools - perf top, perf sched
The two days were full of open-source stacks. There are open source alternatives to VMware vSphere and AWS. BrowserStack manages all of its hundreds of servers with Ansible. Aditya Patawari demonstrated a WordPress setup in a few clicks.

The first session, on security by Anant Shrivastava, covered the Heartbleed bug and was good; it included a demonstration of the bug.

Session - DDOS mitigation @flipkart by Sameer Garg
Volumetric Attack
  • DNS, SNMP, NTP Amplification
  • SYN Flood
  • Fragmented Packets
App Layer
  • WordPress pingback
  • Exploiting HTTP
  • Incomplete requests
Volumetric Attack Mitigation
  • Use scrubbing farms (3rd-party mitigation service)
  • Work with Upstream providers
  • Using BGP
App Layer Mitigations
  • Home grown solutions
  • Scrubbing farms
  • Real-time log analysis
  • Identify standard patterns
  • Use data to block traffic (a rough sketch of this follows below)
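As an illustration of those last points, a minimal sketch (the window, threshold and blocking hook are my assumptions, not from the talk) of real-time analysis that flags IPs making an unusual number of requests so the data can feed a block list:

# Hypothetical sketch: count requests per source IP over a sliding window
# and flag offenders so the data can drive a firewall / scrubbing rule.
import collections
import time

WINDOW_SECONDS = 60    # assumed sliding window
THRESHOLD = 1000       # assumed requests-per-window limit

hits = collections.defaultdict(list)   # ip -> request timestamps

def block(ip):
    # placeholder: push the offending IP to a firewall or upstream scrubber
    print("blocking", ip)

def record_request(ip):
    now = time.time()
    hits[ip] = [t for t in hits[ip] if now - t <= WINDOW_SECONDS] + [now]
    if len(hits[ip]) > THRESHOLD:
        block(ip)

# usage: call record_request(ip) for every parsed access-log line
record_request("203.0.113.7")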
Happy Learning!!!

May 16, 2014

RootConf Day #1 Notes

Every conference is a good opportunity to identify best practices, technology trends and learning opportunities. Predominantly, the sessions were about open source tools for DevOps, continuous integration and automated deployments.

Consolidated set of open source tools discussed:
For performance testing OpenStack
Free tools to check out for Windows:
  • Nagios for Windows Monitoring
  • Vagrant for Windows
Notes from Sessions
The first session was on Building Elastic Infrastructures by Pankaj Kaushal
  • Automated creation of systems
  • Centralized monitoring
  • VM created with HostDB Entry
  • Tools - Puppet, HostDB
HostDB
  • Highly Available / Reliable
  • Namespaces & access controls
  • Rollback / Git as the backend
  • REST APIs for apps to interact with
Puppet
  • Manage Configuration
  • Define Machine / nodes
Session - Quick Prototyping with LXC and Puppet by Benjamin Kero
  • Tools: CVS, SVN, BZR, Darcs, RCS, Git, Mercurial
  • Provided a good comparison of Docker, VMware vSphere, EC2, Linux + Puppet, and cgroups (control groups)
  • Mercurial, LinuxContainers.org
Session - Avoiding single point of failure in a multi-services architecture
  • Tools used - Sensu monitoring, SaltStack, Jenkins
Interesting Sites to check
Happy Learning!!!

May 11, 2014

Weekend Reading Notes

Session #1 - Netflix's Distributed Computing Strategies: Optimistic Design for the Eventual Consistency Model



Good Netflix case study on Cassandra for high-performance DBs
  • In a master / slave configuration there is an interval for the data sync
  • In the early 2000s reads were done on replicated databases
  • A repair option is possible in Cassandra
  • MySQL users - Facebook, Zappos, Symantec, etc.
  • FB replays logs across slave systems
  • Remove foreign keys to improve performance
  • Netflix Cassandra cluster (1 million writes / reads worked successfully) - more reads link
  • Benchmarking Cassandra Scalability on AWS - Over a million writes per second
  • Pessimistic design - high consistency = high latency and performance issues
  • Optimistic design - trust the data store; for the 1% of edge cases have contingency plans (see the sketch below)
  • Example: Amazon (low consistency, sometimes sells items not in inventory) - send a polite email and a 10% credit for the next purchase
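A toy sketch of that optimistic pattern (the inventory example and the compensation step are my illustration, not Netflix's or Amazon's code): trust the data store on the common path and compensate on the rare conflict instead of paying for strong consistency everywhere.

# Hypothetical illustration of optimistic design: accept the order without a
# blocking consistency check, then compensate if the edge case (oversell) hits.
def place_order(item_id, inventory, notify):
    # optimistic path: assume the (eventually consistent) count is good enough
    inventory[item_id] = inventory.get(item_id, 0) - 1
    if inventory[item_id] < 0:
        # contingency for the rare edge case
        inventory[item_id] = 0
        notify("Sorry, the item sold out; here is a 10% credit for your next purchase.")
        return False
    return True

# usage
stock = {"dvd": 1}
print(place_order("dvd", stock, print))   # True  - happy path
print(place_order("dvd", stock, print))   # False - compensated edge case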
Session #2 - How Python Scripts Power Drones