With the current severe shortage of engineering talent, development managers are hard pressed to get the most productivity they can out of the resources they have. While there is no single silver-bullet technology to do this, there are steps which can bring dramatic performance improvement for a team while also improving the quality of its software product and maximizing its competitive advantage in the marketplace. I have carefully observed and documented the implementation of one particular practice which has widespread popularity in the software world. I have found that skilled developers are adding 5 to even 17 hours to a one-hour task to do something which, while popular, often reduces rather than improves total system quality, productivity, and competitive position. I want to provide an overview of how to turn this around and regain a dramatic productivity increase.
Many buzzwords and concepts have become popular in the software industry over the last several decades. Some have been helpful, at least at some level, in moving the state of software engineering forward. Others have ended up being nothing more than a distraction, and still others have proven to be counterproductive. Practices which can be very helpful in some cases may be counterproductive when misapplied. The issue of software quality, productivity, and competitive position has been important to me for many years. My early study of W. Edwards Deming’s ‘Quality Revolution’ in Japan guided me to approaches that brought amazing results. It also kept me away from some counterproductive approaches which seem to be all too common today. I will share some highlights from what has worked well for me, including a unique example with striking results from side-by-side development projects. I will also share some popular misuses of quality methods to watch out for. With this background I will then present key principles to follow to improve software quality while maximizing the productivity of the development team and the competitive position of the software being developed.
Years ago, when I was still pursuing my software degree, I took a statistics class which required that I submit a paper on the application of statistics in industry. Thankfully I chose to base my paper on W. Edwards Deming’s book, “Quality, Productivity, and Competitive Position”. Many call Deming the ‘Father of the Quality Revolution’ and credit him with ‘turning Japan from a purveyor of cheap trinkets into the global leader in quality manufacturing that it has been for so many years now’. I found many of Deming’s approaches surprising and very different from the approaches I had seen widely used for increasing quality. Deming brought solid, proven theory based on statistical models, and he achieved very impressive results with global impact. At the time I had a summer job working in a research lab for a biomedical manufacturer. On their production floor I saw dramatic examples of the approach to quality so common in U.S. manufacturing at the time. Large investments were made in setting up test rigs and doing extensive ‘unit testing’ of each step of the manufacturing process. They produced wonderful charts and graphs showing how they were improving their quality. However, their end results were abysmal: a high and constant failure rate in the final product. Deming explained that this approach, and this result, are very common. His approaches were very different and produced such dramatically higher quality that Japanese companies which applied them took over entire industries. Meanwhile Detroit, a crown jewel of American industry, became a sad testament to the results of following the popular approach.
My first position after graduating from the university was as a software engineer at a large aerospace company. There were about half a dozen teams, each doing similar development. Each team included software, electrical, RF, and aerospace engineers, and each was focused on taking an existing platform version and upgrading it over about two years. After that the systems would be installed on the airplane, tested, and delivered. Each airplane had many computers which interfaced with each other and with various cutting-edge devices, including some developed by the electrical and RF engineers on the team. The standard software practice was to take the existing software for the various computers and make additions and changes, testing extensively along the way. After a couple of years of development, the upgraded system was installed on the airplane. Typically, after months of ‘flight line integration testing’ were completed, an additional contract for 4-6 months would be negotiated in order to fix all the defects. In this environment I decided to try applying Deming’s manufacturing quality principles to software engineering.
Stepping out of the norm in such a highly structured and traditional company, on multi-million dollar contracts, was very risky and viewed with great skepticism. However, I started small and used each success to take larger and bolder steps. This progressed from minor changes in a few software processes, to restructuring entire code files, to developing new frameworks for entire computers and modifying almost all the existing code files, to restructuring the integration of all the many computers and equipment. The results were powerful and dramatic. Features, quality, productivity, maintainability, and agility dramatically increased while costs, defects, development time, and code size dramatically decreased. Finally, when our team applied these methods to the majority of the systems on the entire airplane, something amazing happened. After all the computers and signal equipment were integrated in the airplane on the flight line, the full system test found only 3 minor issues, and each was addressed the same day it was found! This was a system which contained many computers, each interfacing with various specialized operators as well as many different signal analysis devices, over many different protocols. After extensive testing, including full airborne flight testing, no further issues could be found. The normal multi-month ‘test and fix’ effort completed early and no follow-on project was required. In addition, multiple last-minute features were requested by the customer and granted without budget, schedule, or system stability issues. This was a first for that company and a strong testament to engineering for total system value using Deming’s approaches. Since this project was developed alongside very similar projects which were using traditional methods, it provided a unique and rare side-by-side demonstration.
Following that dramatic success I have gone on to apply these principles in many different contexts, industries, software environments, team sizes, and project types, from small startups to large corporate cultures. Larger projects certainly have more potential to gain, but the value seems nearly universal. I have also had the opportunity to observe the antithesis. Much of the focus on automated unit testing in recent years follows the approaches Deming warned about more than those he advocated. I have observed developers consistently spending 5 to even 17 times as long developing unit tests for code as they spent developing the code itself! The code size followed accordingly, with dramatically more code for the tests than for the simple routines being tested. Since defects are statistically proportional to lines of code, this effort was very counterproductive. I have also observed that the code under test needed to be changed to support the test ‘mocking’. These changes added complexity while reducing development and run-time efficiency, understandability, and the ability of the development tools to automatically catch defects. So, again, the effort was counterproductive on multiple counts. As Deming warned, with the focus on testing to catch defects, attention and effort were correspondingly reduced in the areas that would have the most positive impact on total system quality. Further, with the very significant effort, structure, and complexity invested in the unit testing code, the chance of refactoring and refining the system for quality, productivity, and competitive position was greatly reduced. All of this added complexity at the detailed level also made the system less agile and adaptable to change. For example, methods provided by the underlying mock framework had been deprecated but were still being used. Now the very large test code base needed to be evaluated and changed in a way that still kept each individual test valid.
Not only was this being ignored, but the deprecated methods were still being used in new test development, ever expanding the tech debt and reducing the validity of all of the tests. Lastly, the unit testing was focused only on code coverage. In some cases a handful of tests were being run, but these are far from all the possible scenarios which would have to be covered to truly prove through unit testing that the code is defect free. Thus the value of the tests is quite limited at best. However, modern test tools provide wonderful graphical indications of the progress of ‘test coverage’ and of code passing those tests, giving developers a great feeling that they are producing quality code since they can make lots of green flags appear. As these indicators flow onto management charts and up the chain, they can likewise provide a sense of quality that is not based on true market feedback or even end-to-end testing. All of this can work against total system quality, the productivity of the development team, and the competitive position of the end product.
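To make the pattern concrete, here is a minimal, entirely hypothetical sketch (the function names and rates are invented, not taken from any project described above) of the kind of restructuring mock-based testing can force. A trivial calculation gains an injected dependency purely so a mock can be substituted, and the resulting test largely verifies the wiring rather than the arithmetic:

```python
from unittest.mock import Mock

# Straightforward version: a small pure function, easy to read and verify.
def shipping_cost(weight_kg):
    return 5.00 + 1.25 * weight_kg  # flat rate plus per-kilogram charge

# Version restructured for mock-based testing: the rates are pushed behind
# an injected dependency, adding indirection the logic never needed.
class RateProvider:
    def flat_rate(self):
        return 5.00

    def per_kg(self):
        return 1.25

def shipping_cost_injectable(weight_kg, rates):
    return rates.flat_rate() + rates.per_kg() * weight_kg

# The mocked unit test now mostly re-states the production values and
# verifies the plumbing, not the business rule itself:
mock_rates = Mock()
mock_rates.flat_rate.return_value = 5.00
mock_rates.per_kg.return_value = 1.25
assert shipping_cost_injectable(4, mock_rates) == shipping_cost(4) == 10.00
```

Note that the "testable" version is roughly three times the code of the original, which matches the statistical concern above: more lines, more surface for defects, and no new information about whether the charge is actually correct.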
Deming powerfully demonstrated on a global economic scale for manufacturing, and I have demonstrated within the scope of my varied but personal experience across 30 years in the software world, that there are two major approaches to quality. One focuses on testing to find defects. This often includes a focus on tools and measures which provide lots of data to feed impressive charts and graphs. Unit testing is often emphasized in this approach since it provides a finer-grained source of data to feed the graphs, giving a more immediate gratification and sense of accomplishment and progress. This approach currently commands great popularity and has great tooling support in the software world. The other approach focuses on total system quality, development productivity, and the competitive position of the final product or service. This requires more analysis and design. The rewards are not as immediate but are much more substantial. User Acceptance, Integration, and Regression testing are more important here. Unit testing may still be used in this approach, but only where it provides true value, such as for large, complex algorithms. Defects are viewed as a warning indicator that something went wrong in some aspect of the engineering process. Understanding where the process broke down and fixing that is even more important than fixing the defect. The goal is constant improvement of the ever-evolving system and processes, as well as the knowledge base of the team. The results from this approach, in both the manufacturing and software worlds, are quite different from those of the test-focused approach.
Five key skills are important to truly improve software quality while maximizing the productivity of the development team and the competitive position of the software being developed. First and foremost, everyone involved must keep first things first and maintain a primary focus on maximizing total system value. With that focus, everyone must seek to manage the key constraints of systems: change, complexity, quality, and cost. From the high level down to the detailed implementation of every aspect of the system and its processes, everyone should be seeking opportunities for effective reuse and refinement. Statistical principles, even if applied informally, must drive design decisions, not popular buzzwords, personal preferences, or other factors. Finally, testing approaches can be a key tool to build total system value when they are applied appropriately. A team that embraces all of these aspects will see its software quality, productivity, and competitive position dramatically increase.
Our primary focus as a development team must be to provide the maximum perceived and actual value to customers and other stakeholders, in both the short and the long term. I have heard many other things emphasized in the software industry through the years; the primary goal of maximizing value is rarely given focus. Of course everyone assumes this is understood while focus is placed on other areas which are implied to eventually support it. However, as previously illustrated, these other areas of focus often work directly against it. You will get what you measure and focus on. The closer your measures and constant focus are to this ultimate goal, the less chance exists for counterproductive efforts to grow. The core value that the company’s products and services provide in the marketplace should be understood by the entire team, in both its short and long term perspective. How quality affects this value needs to be evaluated. The sub-components which make up this overall value should be understood by each employee, as should the way their part supports the whole. Then each person’s focus moves from putting in their hours and doing their assigned task to using every skill they have to support their team and the company. Thus the value contribution of each person dramatically increases as they become fully engaged. Further, each person begins looking for counterproductive activities and working with their management to transform them into highly productive ones. Keeping a primary focus on the value components of the company, down to the value components of each aspect of the system being developed and maintained, provides the drive toward maximizing quality, productivity, and competitive position.
We must carefully consider the key constraints affecting our system development efforts as we seek to maximize value. Each system and environment has a range of constraints to consider, but there are three primary constraints common to large system development which we must keep in focus: cost, change, and complexity. We are developing in the context of real-world limitations where budget, schedule, and other costs must be constantly considered. An effort which adds some value but has a high cost may well prevent us from other efforts which would produce more results for our limited resources. We are also developing in a world where the marketplace is rapidly changing, and the technology we use is constantly changing with it. Development projects take significant time, so we must keep an eye to the future as we evaluate the changes impacting us, and we must design our systems to minimize the impact of anticipated changes. The way a system operates changes much more rapidly than the core value it provides, so focusing on our own core value, and that of the systems we interface with, helps our designs greatly minimize the impact of change. Complexity within large systems is a significant constraint limiting the value that can be obtained from them. Humans have a limited ability to make effective decisions, so we must constantly seek to reduce and limit complexity within our systems. Reducing lines of code and using standard libraries rather than custom routines can be powerful ways of doing this. Skilled engineers can be tempted to pursue new, complex approaches and designs for the ‘cool’ factor; doing so can greatly limit the ability of others to maintain the system in the future. When a more complex approach truly adds value, documenting the code to explain what is happening can greatly reduce the complexity of maintaining it.
Designs and documentation which reduce complexity make systems resilient to change, reduce cost, and increase value. As we consider these and other constraints, including the skill sets of the team, corporate mandates, and regulatory issues, we can best maximize total system value.
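As a small illustration of the point about standard libraries (the routine and data here are hypothetical, chosen only for brevity), a hand-rolled frequency count collapses into one well-tested standard library call, shrinking the code we must understand and maintain:

```python
from collections import Counter

def count_levels_custom(log_levels):
    # Custom routine: more lines, more places for defects to hide.
    counts = {}
    for level in log_levels:
        if level in counts:
            counts[level] += 1
        else:
            counts[level] = 1
    return counts

def count_levels_stdlib(log_levels):
    # Standard library: one line, exercised by countless other users.
    return dict(Counter(log_levels))

levels = ["WARN", "ERROR", "WARN", "INFO", "ERROR", "ERROR"]
assert count_levels_custom(levels) == count_levels_stdlib(levels) == {
    "WARN": 2, "ERROR": 3, "INFO": 1}
```

The behavior is identical, but the second version leaves fewer lines of our own code in which defects can statistically accumulate.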
The core value that systems and technology provide is reuse. From patterns to tools to routines to skills, there are various forms of reuse, but the primary reason money is invested in engineering a solution is for that solution to be used repeatedly to solve problems. In general, the more we can design a part of the system so that it solves multiple use cases, the greater the value we gain from it. Of course this must be done in a measured way. Incurring great cost for minimal returned value is not wise. Likewise, making something highly complex or resistant to likely changes, or violating other constraints, for the sake of minimal reuse is not wise. Using patterns which are popular because someone found they added value in their context, but which don’t truly add value in ours, is also not wise. Still, we must be looking for ways to gain reuse that truly adds value worth its various costs in our context. When we find an aspect we can use broadly, we can invest much more in improving its quality. Likewise, with the tremendous force of change affecting business and technology, we must see our systems as constantly evolving, and we must always be evaluating whether some aspects of the system have become counterproductive to total system value. When we see areas needing change, we must carefully consider the impact of changing them; changes can have significant unintended consequences that dramatically reduce value, and these risks must be managed. Maximum system value will be gained in a constantly changing world as we seek to maximize reuse and refinement of our systems.
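A tiny, hypothetical sketch of measured reuse (the formatting routines are invented for illustration): two near-duplicate routines collapse into one parameterized function, so any quality investment in the shared code pays off in every use case that calls it:

```python
# Two near-duplicate routines, each a separate place for defects to hide.
def format_us_price(amount):
    return f"${amount:,.2f}"

def format_eu_price(amount):
    return f"€{amount:,.2f}"

# Reusable version: one routine to test, refine, and trust everywhere.
def format_price(amount, symbol="$"):
    return f"{symbol}{amount:,.2f}"

assert format_price(1234.5) == format_us_price(1234.5) == "$1,234.50"
assert format_price(1234.5, "€") == format_eu_price(1234.5) == "€1,234.50"
```

The generalization here is cheap and the use cases are genuinely shared; the same move would not be wise if the parameterization added real complexity for a single speculative caller.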
Deming applied statistical principles in powerful ways to guide continuous improvement efforts. The approaches he used are beyond the scope of this summary, and their details are beyond the scope of most development efforts. However, just taking the first step of considering basic statistics when making decisions can bring dramatic improvements. Look at which factors contribute the most to defects. Consider what percentage of customers is impacted by a piece of code to determine how much effort to put into refining it. Before adding a layer of abstraction in the code, such as a business logic layer, at least estimate how much business logic will end up there and weigh the advantage of having it there against the cost of the extra abstraction. Evaluate what percentage of developer time is spent on different activities to see where an optimization is likely to bring returns. Even if you don’t have formal measures or metrics, training the team to objectively evaluate each step from a statistical perspective is a powerful first step. Cultivating a mindset of critical thinking, objective analysis, and statistical consideration will help greatly to maximize the quality, productivity, and competitive position of the system you are developing.
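Even a back-of-the-envelope calculation can make such decisions objective. The sketch below uses entirely hypothetical module names and numbers to show the kind of informal statistic the text suggests: defect density (defects per thousand lines of code) weighted by how many customers each module touches, as a rough "where to invest refinement effort" ranking:

```python
modules = {
    # name: (defects last quarter, lines of code, fraction of customers affected)
    "billing":   (12, 4_000, 0.90),
    "reporting": (30, 20_000, 0.15),
    "auth":      (5, 2_000, 1.00),
}

def priority(defects, loc, customer_share):
    # Defect density scaled by customer exposure: crude, but objective.
    return (defects / (loc / 1000)) * customer_share

ranked = sorted(modules, key=lambda m: priority(*modules[m]), reverse=True)
print(ranked)  # → ['billing', 'auth', 'reporting']
```

Note that raw defect counts alone would have pointed at "reporting"; normalizing by code size and customer impact reverses the conclusion, which is exactly the kind of insight a statistical habit of mind provides.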
Utilize testing and defect resolution carefully in order to improve, not limit, total system value in both the short and the long term. As illustrated earlier, testing efforts can greatly work against total system value. However, when used carefully and appropriately, they can be a powerful tool to help build quality, productivity, and competitive position. Great care must be taken not to alter or limit the design of the system to support testing. The system design must remain focused on maximizing total system value based on the dynamic market and technology of the system, not on buzzword compliance, implementing some cool-sounding design fad, or any of a myriad of other distracting goals. Remember that we are seeking to manage cost, complexity, and the impact of change. Testing should have the minimum possible coupling with the implementation of the system. Thus User Acceptance testing and Regression testing at the total system level are much more likely to provide maximum benefit at minimum cost than Unit testing in most cases. Unit testing is appropriate where we have a very complex algorithm that takes extensive time to understand; wrapping that with carefully designed unit tests that truly cover all the edge cases would be very appropriate. These would then be included in regression testing only when that code changes. However, complex algorithms such as this are rare. Finding defects early in the development life-cycle is far less costly than finding them later, and extensive peer reviews are often a very effective way to achieve this. When defects are found, there must be an effort to identify whether anything could be done to prevent similar defects in the future. This may be training, or changes in process or design, or whatever else will help improve total system quality and thus value. The defects that occur are tools to help us understand our current processes.
We must evaluate what they are showing us, from a statistical perspective, to learn and constantly improve.
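For contrast with the earlier mocking example, here is a hypothetical sketch of the narrow case where unit tests do earn their keep: a non-obvious algorithm (interval merging, invented here for illustration) whose edge cases are easy to get wrong, tested directly with real values and no mocks or design changes:

```python
def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals; returns a sorted list."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # extend previous interval
        else:
            merged.append([start, end])
    return merged

# Edge-case-focused tests: each probes a distinct way the logic could break.
assert merge_intervals([]) == []
assert merge_intervals([[1, 3], [2, 6], [8, 10]]) == [[1, 6], [8, 10]]
assert merge_intervals([[1, 4], [4, 5]]) == [[1, 5]]          # touching ends
assert merge_intervals([[5, 7], [1, 3]]) == [[1, 3], [5, 7]]  # unsorted input
assert merge_intervals([[1, 10], [2, 3]]) == [[1, 10]]        # nested interval
```

These tests couple only to the function's inputs and outputs, so the implementation can be freely refactored, and they would join the regression suite only when this code changes.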
Teams which apply these approaches carefully and consistently will see significant results. Of course there will be challenges along the way. For example, corporate mandates may include requirements about code test coverage. If such a requirement is firm, then an approach which meets its specifics while minimizing the negative impact will need to be developed. Different developers may have different perspectives on what level of testing is appropriate; education and ongoing discussion can help bring everyone onto the same page. The most critical issue is to curb the extreme over-engineering and misuse of techniques such as unit testing, which leave many customers negatively impacted by outstanding issues while developers spend the majority of their time instilling some theoretical quality through mocked unit tests. Moving to a total system value focus will greatly boost our quality, productivity, and competitive position. In my experience, the productivity gains are likely to more than double or even triple the output of the team.
Automated unit testing, with the assumption that ‘more coverage is better’, is widely viewed as foundational to good software development practice. However, history has shown us that popular approaches don’t always ensure success. Refocusing on total system value will maximize quality, productivity, and competitive position.
Below is a sampling of great resources to help bring what I feel is a more balanced perspective.
| Resource | Description |
| --- | --- |
| Quality, Productivity, and Competitive Position (W. Edwards Deming) | Deming has written some great books, including this one, which first inspired me to apply his highly successful concepts to software engineering. It could be said that global economies have been impacted by Deming’s approaches. |
| Test-Induced Design Damage | A helpful perspective on avoiding a common misuse of testing approaches. |
| Lean Testing or Why Unit Tests are Worse than You Think | Provides an overview from a variety of sources in order to bring a balanced perspective on automated unit tests. |
| Why Most Unit Testing is Waste (James O. Coplien) | A very well-written article with a clear-headed perspective on unit testing by one of the pioneers of the Pattern and Agile movements. |
| Segue | Coplien’s very scholarly follow-up, in which he delves into the theoretical reasons testing can become very counterproductive. |
| The No. 1 unit testing best practice: Stop doing it | A senior software architect responsible for an enterprise-scale software product with 150 developers refocused their automated testing to prioritize full system tests, using unit testing only where truly appropriate. His conclusion: “We are now three years into our product and have been getting tremendous value from our automation approach.” |
| Giving up on test-first development | A helpful summary of the ways test-first development warps design in counterproductive ways. |
| Testing like the TSA (Signal v. Noise) | A quick, fun read with some helpful perspective and guidelines related to unit test coverage. |