Startups and Product Verification Testing


The purpose of product verification testing is to verify that the product meets its design goals, and it is included in any set of best practices. But what really is being verified?

  • Compliance with undocumented or informal assumptions about user expectations?
  • Compliance with documented external requirements, potentially including regulatory requirements?
  • Compliance with internal design goals, possibly including a code standard, static code analysis, and unit testing?
  • Confirmation that buttons can be pressed in any sequence and nothing bad happens?
  • Combinations of the above?

Formal complete verification testing is a valiant goal, but difficulties can be encountered in practice, particularly when funding is limited. A good formal test phase can cost as much in calendar days and labor dollars as the design phase! A startup may not have the funding and runway for a good test phase, and may even be prejudiced against testing by overconfident designers and managers.

Unless verification testing is represented on the project team from the outset, test resources will usually be addressed late in the development phase – once there is something to test. This means developers will first need to spend time bringing the test resource up to a suitable level of familiarity with the product and the codebase, and then more time during the test process clarifying ambiguous behavior and investigating test failures. A more viable strategy can be to leverage the design team’s intimate knowledge of their system in the test effort.

I like to think of product verification testing as a stool with three legs: one leg is automated regression testing, one is feature-specific testing, and the final leg is ad hoc “try to break it” attempts.

  • Use an automated test to catch obvious blunders with minimum effort, and run the test on every release candidate, every night, or even after every commit to the code repo (see the sketch after this list). Creating the test will unavoidably take time from the development team, but often not significantly more than the time needed to on-board a developer-level test resource, explain the requirements, and validate the created test.
  • Use feature-specific tests to verify areas of concern where there is high risk or high consequence (such as regulatory compliance). Team members’ existing areas of concern can be identified and explored in whiteboard sessions (e.g. a concern that the system may lock up if a user is interacting with the device when the internet connection is lost), and fault-tree and/or failure-mode analysis diagrams can be created for further resolution and clarity. It may be practical to involve a temporary resource at this point to off-load the development team (e.g. an engineering or computer science co-op or summer student); with that support, the test phase has a much better chance of success, instead of muddling about for a month or two and producing no tangible results.
  • Use ad hoc “break-it” testing to find issues that would never be contemplated by the design team. The tester should not be familiar with the design internals, but should preferably be at least familiar with the product domain, so this can often be off-loaded to your favorite student again. However, be cautious of expecting useful test feedback from beta testers. Despite best intentions, beta testers are typically not motivated to spend effort on areas of functionality not of direct interest, or to document found issues to the degree needed by the development team.
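
As a minimal sketch of the automated leg, a nightly regression run can be as simple as a cron job that pulls the latest code, rebuilds, and runs the test suite. Everything here (the run_regression.sh name, the repo path, and the make targets) is a hypothetical placeholder for whatever build and test tooling your project actually uses:

    #!/bin/sh
    # run_regression.sh - nightly regression sketch (all paths and commands are placeholders)
    set -e                          # stop at the first failing step

    cd /home/builder/product       # hypothetical local clone of the project repo
    git pull origin master         # update to the latest commit
    make clean all                 # rebuild (substitute your real build command)
    make test                      # run the unit/regression test suite

    # Example crontab entry to run the script at 2am every night and keep a log:
    # 0 2 * * * /home/builder/run_regression.sh > /home/builder/regression.log 2>&1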

Separation of design and test, documented requirements, formal verification, etc. have become best practices for a reason, but they may not be practical or feasible for a startup chasing its first product. If the startup is successful at creating a revenue stream to protect, it will be able to adopt or refine its practices. If the startup fails because it built the wrong product, then in hindsight the resources spent on verification would clearly have been better put towards market intelligence and alpha testing a proof-of-concept.

However, of course this is all IMHO and YMMV….

Selling my Soul?

I received my monthly activity report for dalescott.net traffic from Google today, a thought-provoking top-level summary provided at “no cost” (although some will say it’s already cost me my soul ;-)).

This is technology research for me. Big enterprises expend effort where a return is expected. I follow their lead, looking for concepts that will provide immediate value to smaller businesses and that can be implemented at a fraction of the cost through participation in Open Source ecosystems.

Start a Windows MPLAB X Project by Cloning a Git Repo

Microchip’s MPLAB X IDE includes a Git client for committing, pushing upstream, pulling downstream, etc., but not for cloning an existing project from a remote repository. In this post, I’ll show how to use the standard Git install for Windows from the Git project to clone a project from a remote repository, and then cover the basics of using MPLAB X’s built-in Git client for everything else.

I’ll be using MPLAB X on Windows. Although MPLAB X is also supported on Mac OS X and Linux, I don’t use OS X and I’ve been unable to use MCC (the MPLAB Code Configurator) on Linux.

Git GUI

The standard Git install for Windows from the Git project includes Git GUI, a basic Git client GUI app.

Download and install Git, then launch Git GUI and select Clone Existing Repository.

Clone the remote repository. In this example, I’m cloning from a Windows network drive.

You could also use the ssh protocol to clone from a remote server, in which case the source location will be something like the following (you will be prompted for your private ssh key passphrase after clicking Clone):

ssh://dalescott@dalescott.net/home/dalescott/sourceProject.X.git sourceProject
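
If you prefer the command line to Git GUI, the equivalent clone is a single command (shown here with the same ssh source and target directory as above):

    git clone ssh://dalescott@dalescott.net/home/dalescott/sourceProject.X.git sourceProject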

After the remote repo has been cloned, Git GUI will display the current status of the new local repository – which should show no modified files (i.e. no files either staged or unstaged).

To confirm the repository was cloned correctly, you can check the commit history via the menu Repository > Visualize master’s History.
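
If you prefer, the same check can be made from the command line by listing the commit history of the new clone:

    cd sourceProject
    git log --oneline --graph master    # summarize the commit history of the master branch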

Open project in MPLAB X

Now you can open your cloned project in MPLAB X.

Git is primarily accessed using a project’s context menu from the Project window.

However, be aware that there are a few Git commands under the Team menu, including a Repository Browser, that aren’t available in the project context menu.

Try accessing Show History for the project (you will need to click “Search” in the Show History view that opens).

Now I’m ready to start hacking on the source. I’ll keep it simple and just add some TODOs to the README.txt file. Notice how the editor shows my changes compared to the last commit in real time!

When I’m ready to commit my changes, I can commit just the modified README.txt file, or I can commit all modified files in the project.
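
For reference, the command-line equivalents of those two commit options are roughly the following (the commit messages are just placeholders):

    git add README.txt                       # stage only the modified README.txt
    git commit -m "Add TODOs to README"      # commit the staged change

    git commit -a -m "Update project files"  # or commit every modified tracked file in one step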

To push a local commit back to the upstream origin repo, use the general-purpose Remote > Push… selection from the Project context menu.

Origin is the only remote repo configured in the local clone, so it should be indicated automatically in the Push to Remote Repository wizard.

There, all done! Wasn’t that easy?

I didn’t show it, but standard practice is to first Pull from the upstream repo (e.g. origin) before you try pushing a local commit to the upstream repository. If someone has pushed code since your last pull (or clone), you’ll need to merge their changes into your local project first, then push the combined changes to the upstream repo.
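
In command-line terms, that pull-before-push workflow is roughly the following (assuming the remote is origin and the branch is master, as in this example):

    git pull origin master    # fetch and merge any new upstream commits
    git push origin master    # then push your local commits upstream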

Why is Embedded Product Development Different?

Effective embedded product development involves coordinating a large number of tasks in complex relationships, managing risk and dealing with issues that arise due to uncertainty. Development teams are multi-disciplinary and can involve industrial and user experience design, mechanical design, electronics design, embedded, desktop, mobile and cloud software development, product verification and validation testing, and manufacturing process development, each with its own unique processes and workflows. Put simply, the effort needed to bring a complex high-tech networked embedded product to the world should not be underestimated.

Regulatory requirements must be met before a product can be placed for sale. These include electrical emissions regulations, safety-related requirements that may impose specific product features or specific development processes, and environmental requirements affecting component selection and the recycling of packaging and eventually the product itself. Managing regulatory requirements and demonstrating compliance is a critical part of the development process.

Software development today commonly includes a wide variety of open-source software components, such as operating systems, device drivers, database systems, data encryption, and network communication stacks. Each software component is licensed by its creator, and each license imposes specific requirements on use. In addition, the use of encryption technologies is usually subject to national security regulations. Managing this complex interrelationship of requirements requires careful attention to ensure the final software application is free of undesired encumbrances and can be distributed legally.

Development of a high-tech product can be a complex undertaking, involving many simultaneous tasks with intricate relationships. To be effective, an engineering project manager must not only have related technical experience, but must also strike a suitable balance between attention to detail and time to market, without allowing unacceptable risk or sacrificing quality.

PLM and NPI

Product Lifecycle Management (PLM) describes how a product is consciously managed from concept through design, into manufacturing and sales, and eventually to retirement. PLM includes aspects of Product Management and also Sustaining (aka Maintenance) Engineering to add features, eliminate defects, optimise processes, increase quality, etc. PLM integrates people, data, processes and business systems, and provides trustworthy and transparent design and manufacturing data.

New Product Introduction (NPI) is that portion of PLM involved with the hand-over from engineering design to manufacturing and eventually introducing a product into the market for sale. NPI is a broad topic and, depending on the organization, industry and product, may include Design for Manufacturability, Pilot Production, and Validation Testing, as well as the creation of marketing and sales materials, managing sales campaigns and events, developing eCommerce and service subscription processes, and whatever else is involved in bringing a product to market.