Testers: Put on Your End-user Hat
The more you know about the end-user, the more effective you will be as a tester. Here are some tips for adding value by thinking like your customer.
One of the biggest criticisms of testers and QA organizations is that they do not understand the business or the end-user. If that criticism rings true for your team, it signals that you do not have a value-adding team of testing professionals. The more you know about the ultimate customer or end-user, the more effective a risk-based tester you will become.
When I led QA teams in the past, I made "knowing your customer" a major performance criterion for my staff. To reinforce this, I arranged field trips with business development to customer sites so the testing team could see how and why the end-users actually used the system or application. After these field trips, the QA team began to modify how it approached end-to-end and user acceptance tests. The payoff was clear: the number of critical end-user defects dropped by more than 20 percent in a very short period of time.
This result inspired me to continue my learning. I took the product management certification course from Pragmatic Marketing and was certified in pragmatic product management in December 2009. From the course, I learned how understanding the following questions will increase the effectiveness of tests and testing teams (note: It is your responsibility to ensure you are adding value to the delivery of the product):
- What problem or problems will this upgrade, enhancement, or new feature solve? This is the value proposition.
- For whom do we solve that problem? This is the target market.
- How will we measure success? This is the business result. What metrics will be needed to validate success has been attained?
- What alternatives are out there? What is the competition doing? Is it a "blue ocean industry" or a "red ocean industry"?
- Why are we best suited to pursue this? What is our differentiator?
- Why now, and when does it need to be delivered? This is the market window or window of opportunity.
- How will we deploy this? What will be our deployment strategy?
- What is the preliminary estimated cost and benefit? Will there be a return on investment, customer satisfaction increase, or cost avoidance?
If you understand these high-level questions, you will deliver a higher level of end-user quality by designing and executing tests from the end-user’s perspective. By defining, quantifying, and weighing the many quality dimensions as perceived by your end-users, you will be able to approach testing in a very efficient and effective manner. Knowing what the user wants and needs to do with the system will enable a proactive mindset regarding requirements and feature reviews, acceptable behaviors, operational inconsistencies, interactions, and interoperability.
I have found the user manual to be a great source of knowledge for a test team. Granted, a newly developed application is devoid of a manual, as the manual gets developed along with the application. But, during my independent consulting years, I relied heavily on these manuals to gain an operational business perspective. Be careful, though, as they can be dated and may become stale depending upon how much the end-user relies upon them.
This perspective naturally leads to an understanding of where the potential risks are to the business:
- What are the most common and critical areas of the functionality from the user’s point of view?
- How accepting should the system be of incorrect input?
- Is the description complete enough to proceed with designing, implementing, and testing the requirements?
- What is an acceptable response time for the users? Are there stated performance requirements?
- Which problems and risks may be associated with these requirements?
- Are there limitations in the software or hardware?
- Which omissions are assumed directly or indirectly?
Regardless of the delivery methodology you are using (iterative, agile, waterfall, etc.), the above will be applicable. A test strategy should be unaffected by the delivery method; it should be a relative constant, setting the goals, objectives, and approach. Your test strategy should articulate test governance and the other standard operating procedures the delivery team will follow, including but not limited to configuration, release, source code, defect, and test case management. Any test tools and test execution tools that will be deployed in the delivery of the application or system should also be identified. Clearly articulated expectations define how the team will measure success and what constitutes "done." Without them, how will the team know when good is good enough? The strategy should also include a contingency plan for any regression that might take place: What severity and quantity of defects will be deemed acceptable, and how will defects be handled in the remaining deployments? Additionally, a test strategy needs to state what test metrics will be used and how they will be reported, communicated, and escalated when results show negative trends against the acceptance criteria.
The test plan should document the techniques and methods that will be employed to validate the system under test, and it should detail the estimates of the test cycles in relation to the delivery plan. The test estimation algorithm and the estimates derived from it are best guesses at this point, so be sure these assumptions are reviewed with the delivery team.
The same holds true for the development team. If the project is to be delivered in iterations, then naturally the team will jointly develop the estimated costs and duration. It is important to highlight to the delivery team the estimated defect rate for the delivery: as this rate is approached or exceeded, the impact on remaining deployments and regression can cause not only the timeline to slip but also costs to escalate. Feedback from these defect trends should trigger a re-estimation of the remaining iterations or sprints, thereby increasing accuracy and confidence. The methods and techniques documented in the plan will support the estimation of costs and duration; examples include requirements-based, combinatorial, model-based, and scenario-based testing. Each technique has unique attributes associated with the various levels of structural and functional testing. Test estimates will be challenged, so teams need to stay focused. Should all business-critical features and functions have test cases covering more than 90 percent of the permutations of valid inputs, error conditions, and user profiles? Do all the users view the feature set similarly and agree about what is critical to the operation of their needs and business?
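The defect-trend trigger described above can be reduced to a simple check. The following is a minimal sketch, not a method prescribed by the article; the function names, the example rates, and the 90 percent early-warning threshold are all illustrative assumptions:

```python
def defect_rate(defects_found: int, tests_executed: int) -> float:
    """Observed defects per test case executed in the current iteration."""
    return defects_found / tests_executed if tests_executed else 0.0

def should_reestimate(defects_found: int, tests_executed: int,
                      estimated_rate: float, warning: float = 0.9) -> bool:
    # Trigger re-estimation of the remaining sprints once the observed
    # defect rate approaches (here, reaches 90 percent of) the estimate.
    return defect_rate(defects_found, tests_executed) >= estimated_rate * warning

# After a sprint: 18 defects across 120 executed cases, against an
# estimated rate of 0.15 defects per case.
print(should_reestimate(18, 120, estimated_rate=0.15))  # prints True
```

In practice the estimated rate would come from historical project data, and crossing the threshold would prompt the joint cost-and-duration re-estimation the paragraph describes rather than any automated action.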
Deciding at what level to stop testing is difficult, but the law of diminishing returns is ever present. Review the approach and risks with the business and product owners, and gain their insight into where testing could be excessive and what level of risk is acceptable. As a value-adding testing team, you must quantify the costs by articulating the number of test cases and how long it will take to deliver the quality the end-user is expecting. Understand where the greatest risks to the business exist: features frequently used by most or all customers, financial impact from failures or errors, feature complexity, defect density, and historical data on problem areas or feature sets.
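One way to make those risk factors concrete is a simple weighted score that ranks features for test effort. This is a hedged sketch only; the `Feature` fields, the 1-5 scales, and the scoring formula are illustrative assumptions, not a method from the article:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    usage: int           # 1-5: how many customers use it, and how often
    impact: int          # 1-5: financial/operational cost of a failure
    complexity: int      # 1-5: feature complexity
    defect_density: int  # 1-5: historical defect rate in this area

def risk_score(f: Feature) -> int:
    # Likelihood of failure (complexity, defect history) times
    # exposure (breadth of usage, business impact).
    return (f.complexity + f.defect_density) * (f.usage + f.impact)

features = [
    Feature("billing", usage=5, impact=5, complexity=4, defect_density=3),
    Feature("report export", usage=2, impact=2, complexity=2, defect_density=1),
    Feature("login", usage=5, impact=4, complexity=2, defect_density=2),
]

# Spend test cases on the riskiest features first.
for f in sorted(features, key=risk_score, reverse=True):
    print(f"{f.name}: {risk_score(f)}")
```

A ranking like this gives the business and product owners something concrete to react to when agreeing on where testing would be excessive and what residual risk is acceptable.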
When it all comes together, the team can circle back to the quality dimensions highlighted earlier in this paper. Reliability, usability, and accuracy will manifest in the number of test cases and the techniques used to satisfy the level of quality the end-user is expecting and for which the business owner must plan to pay. Complete transparency enables the team to make sound business decisions and to decide on appropriate levels of risk and tradeoffs when plans are not being met. The cost-risk-benefit equation of quality will guide the team as it makes adjustments to content, time, and cost. There should be no surprises when the team is faced with the tough decisions that always arise during the deployment of software.
In the delivery of software, testers can wear many hats. Teams that are able to think like the end-user will make a significant contribution in ensuring that the test team is adding value and focused on meeting the client’s expectations.