MQTT represents a paradigm shift in industrial process control communication. It is already the primary communication protocol for the IoT, and it is anticipated to become the standard communication protocol for networked industrial process control devices – the IIoT.

A strategic opportunity may exist for an early market technology leader with a proven low-cost operations stack: IIoT MQTT endpoints (with optional “intelligence at the edge”) providing data that directly results in the creation of orders or material movements in an ERP system, at a cost acceptable to SMEs. MQTT is still a new technology for process control, and significant interest may also come from large enterprises running investigative projects as they learn the technology and develop new strategies.
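The endpoint-to-ERP idea above can be illustrated with a minimal sketch. Everything here is assumed for illustration: the topic convention (`site/area/device/metric`), the JSON payload fields, and the handler name. A real endpoint would receive messages through an MQTT client library (e.g. Eclipse Paho) and post the resulting action to the ERP's API.

```python
import json

def on_message(topic, payload):
    """Map an incoming MQTT telemetry message to an ERP action (hypothetical)."""
    # Assumed topic convention: site/area/device/metric
    metric = topic.split("/")[-1]
    data = json.loads(payload)
    # "Intelligence at the edge": if a bin level falls below its reorder
    # point, propose a purchase order instead of just forwarding raw data.
    if metric == "bin_level" and data["value"] < data["reorder_point"]:
        return {"action": "create_purchase_order",
                "item": data["item_code"],
                "qty": data["reorder_qty"]}
    return None  # nothing for the ERP to do
```

In practice this function would be registered as the MQTT client's message callback, and the returned action would be submitted to the ERP system through its REST API.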

End-user MQTT References:

Video and podcast interviews with Arlen Nipper, co-creator of MQTT.


What is Open-Source?

I like to think of Open Source as a legal framework that supports shared software development by a community. The license protects the investment made by the community (time coding, testing, writing bug reports, writing user manuals, supporting other members of the community, etc.), as well as protecting the community from an actor acting in bad faith.

When a company (a legal entity) develops software, the code generally belongs to the company, regardless of whether the programmer is a full-time employee or a contractor, and the developer is not entitled to a single character. Of course, developers still own the knowledge in their heads, and they can use that knowledge to re-create equivalent functionality (although other contractual restrictions, such as non-competition and non-disclosure agreements, may still apply).

A single developer providing software under an open source license is generally simply being altruistic and helping their fellow developers by providing tutorials, examples and proof-of-concepts consistent with a single developer project. The developer’s personal beliefs will direct the license, either “permissive” (e.g. the BSD license) or “copyleft” (e.g. the GPL).

A permissive license might be appropriate if the goal is to provide benefit to as many people as possible with minimum constraints (a permissive license typically only imposes keeping the original license and copyright notice, and prevents being sued for errors or not being fit for use).

However, if you believe someone who modifies your code has an obligation to share in kind, then imposing this through a copyleft license will likely be appropriate. Note though that the GPL only requires source to be shared with those who receive the software in non-source form, which may not include the original developer! Payment is irrelevant as far as the license is concerned.

The license becomes more significant for projects with multiple developers, projects that incorporate open-source software to expedite development, and companies that use the software. Vague ownership of the code or vague statements of allowed use create risks for the entire community.

What if two developers work together for a year to create a software application but then part ways? Who owns the rights to the codebase? Do the developers share ownership jointly or individually? Can one developer continue development of the software on their own if the other developer doesn’t want them to? Does each developer have rights only to the characters they typed?

Vague ownership is also a risk for users if the project doesn’t have the legal right to offer the software, which could be caused by something as simple as including a GPL library in a BSD-licensed project, using publicly available code without authorization from the author, or a developer re-using old project code that was written for an employer instead of recreating it from scratch. If the software is an enterprise ERP, CRM or FRACAS system, extracting it from the business could be difficult and costly. Any company that proactively protects shareholders from risky legal situations would have to just walk away before even starting.

Licensing itself isn’t complicated, it’s all the other details…. 😉

The 150% BOM

Over the next couple of weeks, I will be exploring the 150% BOM concept and how it can be implemented in ERPNext.

I recently encountered the “150% BOM” and had to look deeper.

The 150% BOM concept is a method of managing products that are essentially variations on a common theme, such as a car that is available painted either blue or red. You could manage the cars as completely separate products (albeit ones that share many common components), even though they are really variants of one design. A 150% BOM includes all the variants of a product, from which specific BOMs – the 100% BOMs – are created.

Here is a simple 150% BOM for the marketing level of the SCC Aircraft Wireless product. Three different lengths for antenna and ground wires are provided in the BOM, and a specific length must be selected to create an order.
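One way to picture the idea: a 150% BOM is a superset data structure, and a 100% BOM is resolved from it by selecting exactly one option per variant group. The structure, component names and wire lengths below are illustrative only, not taken from the actual SCC BOM.

```python
# Hypothetical 150% BOM: common components plus variant option groups.
BOM_150 = {
    "product": "SCC Aircraft Wireless",
    "common": ["circuit board", "enclosure"],
    "options": {
        "antenna_wire": ["10 m", "15 m", "20 m"],  # lengths are illustrative
        "ground_wire":  ["10 m", "15 m", "20 m"],
    },
}

def make_100_bom(bom_150, choices):
    """Resolve a 150% BOM into a specific (100%) BOM, one pick per option group."""
    for group, pick in choices.items():
        if pick not in bom_150["options"][group]:
            raise ValueError(f"{pick!r} is not a valid option for {group}")
    return bom_150["common"] + [f"{group} {pick}" for group, pick in choices.items()]

# Creating an order forces a specific selection:
order_bom = make_100_bom(BOM_150, {"antenna_wire": "15 m", "ground_wire": "10 m"})
```

The validation step mirrors what an ERP configurator does at order entry: an order cannot be created until every variant group has a legal selection.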



Stay tuned, the next update will describe implementation options in ERPNext.

Startups and Product Verification Testing

The purpose of product verification testing is to confirm the product meets its design goals, and it is included in any set of best practices. But what really is being verified?

  • Compliance to undocumented or informal assumptions of user expectations?
  • Compliance to documented external requirements, potentially including regulatory requirements?
  • Compliance to internal design goals, possibly including a code standard, static code analysis, and unit testing?
  • Confirmation that buttons can be pressed in any sequence and nothing bad happens?
  • Combinations of the above?

Formal complete verification testing is a valiant goal, but difficulties can be encountered in practice, particularly when funding is limited. A good formal test phase can cost as much in calendar days and labor $$’s as the design phase! A startup may not have the funding and runway for a good test phase, and could even be prejudiced against testing by overconfident designers and managers.

Unless verification test is represented on the project team from its onset, test resources will usually be addressed late in the development phase – once there is something to test. This means developers will first need to spend time onboarding the test resource to a suitable level of familiarity with the product and the codebase, and more time during the test process to clarify ambiguous behavior and investigate test failures. A more viable strategy can be to leverage the design team’s intimate knowledge of their system in the test effort.

I like to think of product verification testing as a stool with three legs: one leg is automated regression testing, one is feature-specific testing, and the final leg is ad hoc “try to break it” attempts.

  • Use an automated test to catch obvious blunders with minimum effort, and run the test on every release candidate, every night, or even after every commit to the code repo. Creating the test will unavoidably take time from the development team, but often the time is not significantly more than that needed to on-board a developer-level test resource, explain requirements and validate the created test. 
  • Use feature-specific tests to verify areas of concern where there is high risk or high consequence (such as regulatory compliance). Team members’ existing concerns can be identified and explored in whiteboard sessions (e.g. a concern that the system may lock up if a user is interacting with the device when the internet connection is lost), and fault-tree and/or failure-mode analysis diagrams can be created for further resolution and clarity. It may be practical to involve a temporary resource at this point to off-load the development team (e.g. an engineering or computer science co-op or summer student); with that support the test phase has a higher chance of success, instead of muddling about for a month or two and producing no tangible results.
  • Use ad hoc “break-it” testing to find issues that would never be contemplated by the design team. The tester should not be familiar with the design internals, but should preferably be at least familiar with the product domain. As such, this testing can often be off-loaded to your favorite student again. However, be cautious of expecting useful test feedback from beta testers. Despite best intentions, beta testers are typically not motivated to spend effort in areas of functionality not of direct interest, or to document found issues to the degree needed by the development team.
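As a concrete illustration of the first leg, here is a tiny regression test in the style described: plain assertions against a stand-in parser (the function and its input format are invented for the example), runnable by hand, under pytest, or from a CI hook on every commit.

```python
def parse_reading(raw):
    """Stand-in for a real device-data parser: 'sensor = value' -> (name, float)."""
    sensor, value = raw.split("=")
    return sensor.strip(), float(value)

# Each test pins down one behavior so obvious blunders are caught cheaply.
def test_parse_reading_basic():
    assert parse_reading("temp = 21.5") == ("temp", 21.5)

def test_parse_reading_negative_value():
    assert parse_reading("flow=-3.0") == ("flow", -3.0)

if __name__ == "__main__":
    test_parse_reading_basic()
    test_parse_reading_negative_value()
    print("all regression checks passed")
```

The value is less in any single assertion than in the habit: once the suite exists, every commit gets a cheap sanity check without a dedicated tester in the loop.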

Separation of design and test, documented requirements, formal verification, etc. have become best practices for a reason, but may not be practical or feasible for a startup chasing its first product. If the startup succeeds in creating a revenue stream worth protecting, it will be able to adopt or refine these practices. If the startup fails because it built the wrong product, then in hindsight the resources spent on verification would clearly have been better put towards market intelligence and alpha testing a proof-of-concept.

However, of course this is all IMHO and YMMV….