The purpose of product verification testing is to confirm that the product meets its design goals, and it appears in every set of best practices. But what really is being verified?
- Compliance with undocumented or informal assumptions about user expectations?
- Compliance with documented external requirements, potentially including regulatory requirements?
- Compliance with internal design goals, possibly including a coding standard, static code analysis, and unit testing?
- Confirmation that buttons can be pressed in any sequence and nothing bad happens?
- Combinations of the above?
Formal, complete verification testing is a valiant goal, but it runs into difficulties in practice, particularly when funding is limited. A good formal test phase can cost as much in calendar days and labor dollars as the design phase! A startup may not have the funding and runway for such a phase, and may even be prejudiced against testing by overconfident designers and managers.
Unless verification testing is represented on the project team from the outset, test resources will usually get addressed late in the development phase – once there is something to test. Developers will then first need to spend time onboarding the test resource to a suitable level of familiarity with the product and the codebase, and more time during the test process clarifying ambiguous behavior and investigating test failures. A more viable strategy can be to leverage the design team’s intimate knowledge of their system in the test effort.
I like to think of product verification testing as a stool with three legs: one leg is automated regression testing, one is feature-specific testing, and the final leg is ad hoc “try to break it” attempts.
- Use an automated test to catch obvious blunders with minimum effort, and run it on every release candidate, every night, or even after every commit to the code repo (a minimal sketch follows this list). Creating the test will unavoidably take time from the development team, but often not significantly more than the time needed to onboard a developer-level test resource, explain the requirements, and validate the test they create.
- Use feature-specific tests to verify areas of concern where the risk is high, or where the consequences are high (such as regulatory compliance). Team members’ existing concerns can be identified and explored in whiteboard sessions (e.g. a concern that the system may lock up if a user is interacting with the device when the internet connection is lost; see the second sketch after this list), and fault tree and/or failure-mode analysis diagrams can be created for further resolution and clarity. It may be practical to involve a temporary resource at this point to off-load the development team (e.g. an engineering or computer science co-op or summer student); with that support, the test phase has a much better chance of producing tangible results instead of muddling about for a month or two.
- Use ad hoc “break-it” testing to find issues the design team would never contemplate. The tester should not be familiar with the design internals, but should preferably be at least familiar with the product domain, so this work can often be off-loaded to your favorite student again. However, be cautious about expecting useful test feedback from beta testers. Despite the best intentions, beta testers are typically not motivated to spend effort on areas of functionality that are not of direct interest to them, or to document the issues they find to the degree the development team needs.
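
To make the first leg concrete, here is a minimal sketch of what an automated regression test might look like in Python with pytest. The `product` package, its `Device` class, and the method names are assumptions chosen purely for illustration, not a real API; the same structure works whether the suite is triggered nightly, per release candidate, or on every commit.

```python
# Minimal regression ("smoke") test sketch using pytest.
# The `product` package and its `Device` class are hypothetical placeholders
# for your own public API; the tests are skipped if the package is absent.
# Run locally with `pytest`, or wire the same command into a nightly job or
# a per-commit CI hook.

import pytest

product = pytest.importorskip("product")  # hypothetical top-level package


def test_starts_up_cleanly():
    # Cheapest useful check: the product initializes without raising.
    device = product.Device()
    assert device.status() == "idle"


def test_basic_workflow_round_trip():
    # Exercise one end-to-end happy path so obvious blunders fail fast.
    device = product.Device()
    device.start()
    device.stop()
    assert device.status() == "idle"
```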
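
And for the second leg, a sketch of a feature-specific test aimed at the connection-loss concern used as the example above. Everything here, from the fault-injection hook `simulate_network_loss()` to the five-second responsiveness budget, is a hypothetical stand-in for whatever your own harness provides; the point is simply that each concern raised at the whiteboard becomes one targeted, repeatable test.

```python
# Feature-specific test sketch for the whiteboard concern above: does the
# system lock up if the internet connection drops while a user is interacting
# with the device? The `product.Device` class, its `simulate_network_loss()`
# fault-injection hook, and the 5-second responsiveness budget are all
# assumptions for illustration; substitute your own test harness.

import pytest

product = pytest.importorskip("product")  # hypothetical top-level package


def test_stays_responsive_when_connection_drops_mid_interaction():
    device = product.Device()
    device.start()

    # Inject the fault while a user interaction is in flight.
    device.press_button("settings")
    device.simulate_network_loss()

    # The device should answer within a bounded time rather than hang.
    assert device.wait_until_responsive(timeout_s=5)
    assert device.status() != "locked_up"
```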
Separation of design and test, documented requirements, formal verification, and so on have become best practices for a reason, but they may not be practical or feasible for a startup chasing its first product. If the startup succeeds in creating a revenue stream to protect, it will be able to adopt or refine these practices. If the startup fails because it built the wrong product, then in hindsight the resources spent on verification would clearly have been better put towards market intelligence and alpha testing a proof of concept.
However, of course this is all IMHO and YMMV….