My previous Editor's Corner focused on dysfunctional traits I've observed while working on software projects during the past decade. Many traits (such as "Death by Quality") originate from outdated or ineffective development processes. A development process consists of the tasks performed and the artifacts produced when building software.
Effective development processes are essential for producing high quality software, particularly for large-scale systems. However, identifying and instituting good software processes is remarkably elusive. In this Editor's Corner I describe some common problems I've encountered with existing development processes and outline some concrete steps that can help produce better software.
The traditional Waterfall model assumes that difficult requirements and constraints can be identified and resolved in the early phases of the project lifecycle. This is a dubious assumption unless the problem domain, platforms, and tools are very well understood and stable. When applied to high-risk projects that use unfamiliar or untested technology, the Waterfall process almost certainly fails. The main problem is that developers can't foresee all the traps and pitfalls that lurk ahead. As a result, bad design decisions made far upstream become very costly to fix downstream.
The translational approach, in which applications are generated automatically from high-level specifications, is well suited to highly structured, well understood domains (such as compiler parser construction or GUI application builders). However, I've found that these techniques do not yet scale up to more complex domains that possess sophisticated error handling, concurrency, and distribution requirements. One consequence of pursuing the "zero lines of code" grail is that the contributions of high-quality developers are often devalued until the project goes awry. At this point, it is very expensive and time consuming to actually write the software that fixes the problems.
The main problem with process bureaucracies is that they take on a life of their own and become an end in themselves (i.e., acquiring power), rather than a means to an end (i.e., shipping product). As a consequence, process bureaucracies often become the refuge of technically inept, but politically savvy, individuals who don't appreciate the importance of rewarding high quality technical staff. Ultimately, this type of culture drives away the creative, highly skilled technical talent that is so crucial to the long-term success of projects and companies.
As a result, it is essential to develop complex systems using a systematic approach to iterative development (such as the Spiral model). Iterative process models are designed to reduce unpleasant surprises at the tail end of a project by identifying key sources of risk during multiple iteration cycles. Techniques like prototyping or simulation can be applied at each cycle to gain insight into open issues rapidly and reduce development risk.
Support for iterative development and prototyping has been a key foundation of good software processes for decades. Back in the 1970s, Fred Brooks prophesied the inevitability of software iteration with his second law of software: "plan to throw one away; you will, anyhow". This insight is as true today as it was in 1975. Remarkably, while many software organizations give lip service to iterative development, I've found that the less successful ones often try to "swing for the fences" and don't devote adequate resources to prototyping and risk reduction.
I've also found that good development processes emphasize the qualitative aspects of software review more than the quantitative ones. Qualitative reviews focus on design and code inspections by a group of peers, whereas quantitative reviews use automated complexity metrics and tools that check for conformance to coding standards. Qualitative reviews are more beneficial than quantitative reviews because they are more effective at identifying and correcting strategic problems in the architecture and implementation before they become firmly entrenched in the software.
I believe that quantitative reviews are appealing largely because they seem to absolve us from having to think carefully about the tough issues. One of the main causes of project failures is that developers don't take time to think through their designs and implementations. All the CASE tools, translational engines, and visual programming environments in the world can't make up for problems that could have been foreseen with more disciplined thinking, analysis, prototyping, and peer review.
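To see how little automated metrics actually capture, consider a deliberately naive sketch of the kind of count a complexity tool computes. The function name and token list below are hypothetical; real tools parse the code properly, but the underlying idea is the same: tally decision points and report a number, with no insight into whether the design behind them is sound.

```cpp
#include <string>
#include <vector>

// Crude cyclomatic-style metric: counts branch keywords and logical
// operators by plain substring search. Starts at 1 for the single
// straight-line path, then adds 1 per decision point found.
// (Hypothetical sketch; it can't distinguish code from comments
// or strings, which is part of the point about shallow metrics.)
int decision_points(const std::string& src) {
    static const std::vector<std::string> tokens = {
        "if (", "for (", "while (", "case ", "&&", "||"
    };
    int count = 1;  // one path through the code by default
    for (const std::string& tok : tokens) {
        for (std::string::size_type pos = src.find(tok);
             pos != std::string::npos;
             pos = src.find(tok, pos + tok.size())) {
            ++count;
        }
    }
    return count;
}
```

A tool built on counts like this will happily bless a convoluted architecture whose individual functions are all "simple enough," which is exactly the kind of strategic problem only a qualitative peer review catches.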
A key challenge faced by software organizations is keeping the skill sets of their developers from becoming obsolete. Technology is changing more rapidly than ever, and it's hard to keep abreast of all the new tools, platforms, languages, and techniques. Fortunately, much of this "new technology" largely repackages the concepts, features, and mechanisms that have been part of the software culture for decades. Explicitly recognizing these recurring patterns in our new tools and technologies makes it easier to retrain programmers by building upon knowledge they've already mastered in other development paradigms and platforms.
The study of patterns should be an essential part of any developer education and training program. The study of design patterns helps guide the choices of developers who are building, maintaining, or enhancing software. By understanding the potential traps and pitfalls in their domain, developers can select suitable architectures, protocols, and platform features without wasting time and effort implementing inefficient or error-prone solutions. Likewise, the study of organizational patterns is essential to help guide the choices of projects that are considering different development processes. A wealth of organizational patterns has been documented by our very own Jim Coplien and others in the Pattern Languages of Program Design series, which is described in more detail at the WWW URL:
Additional information on upcoming pattern-related topics and events can be obtained at
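To make the payoff of pattern literacy concrete, here is a minimal sketch of one widely documented design pattern, Strategy, rendered in modern C++. The Logger and policy classes are hypothetical illustrations, not code from any particular toolkit; the point is that a developer who recognizes the pattern can vary a class's behavior without rewriting its clients.

```cpp
#include <memory>
#include <string>

// Strategy pattern sketch (hypothetical classes): the buffering
// policy of a Logger varies independently of the Logger's clients.
class BufferingStrategy {
public:
    virtual ~BufferingStrategy() {}
    // Each concrete strategy formats the message its own way.
    virtual std::string decorate(const std::string& msg) const = 0;
};

class Unbuffered : public BufferingStrategy {
public:
    std::string decorate(const std::string& msg) const override {
        return "[flush] " + msg;
    }
};

class LineBuffered : public BufferingStrategy {
public:
    std::string decorate(const std::string& msg) const override {
        return "[line] " + msg;
    }
};

class Logger {
public:
    explicit Logger(std::unique_ptr<BufferingStrategy> s)
        : strategy_(std::move(s)) {}

    // Clients call log(); the chosen strategy does the policy work.
    std::string log(const std::string& msg) const {
        return strategy_->decorate(msg);
    }

private:
    std::unique_ptr<BufferingStrategy> strategy_;
};
```

Swapping `Unbuffered` for `LineBuffered` changes the logging policy without touching any code that uses `Logger`, which is precisely the kind of design choice pattern study trains developers to see up front.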
I've generally found that reverse-engineering tools are more useful than forward-engineering tools. In particular, most forward-engineering CASE tools are inflexible and don't fully support reverse-engineering. Without adequate support for reverse-engineering, the documents and object models produced by these tools become obsolete and unmaintainable. Moreover, expert developers can often build quality software more effectively by writing class interfaces directly and then using reverse-engineering tools to generate the documentation from the software.
Another effective way to produce and maintain well-documented software is to generate manual pages automatically from source code. Like the reverse-engineering tools, this approach helps ensure that the documentation doesn't drift out of sync with the software. Many programming environments support this, e.g., all the online documentation for Java is generated by the javadoc utility in the Sun JDK. A freely available set of software tools that generate nroff, html, and mif format manual pages from C++ header files is contained within the ACE C++ network programming toolkit at WWW URL:
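To illustrate the approach, here is a sketch of a C++ class interface annotated with javadoc-style structured comments. The class itself is hypothetical, and comment conventions vary from tool to tool; the point is that the documentation lives in the header, so a generator can rebuild the manual pages whenever the interface changes.

```cpp
#include <cstddef>

/**
 * A bounded queue of messages (hypothetical example class).
 *
 * A documentation generator scans structured comments like these
 * and emits manual pages directly from the header, so the docs
 * can't silently drift away from the code.
 */
class Message_Queue {
public:
    /** Create a queue that holds at most @a capacity messages. */
    explicit Message_Queue(std::size_t capacity)
        : capacity_(capacity), size_(0) {}

    /** Return the number of messages currently enqueued. */
    std::size_t size() const { return size_; }

    /** Return true if no further messages can be enqueued. */
    bool is_full() const { return size_ >= capacity_; }

private:
    std::size_t capacity_;  // maximum number of messages
    std::size_t size_;      // messages currently enqueued
};
```

Because the comments sit next to the declarations they describe, a routine rebuild of the documentation is as cheap as a recompile.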
I'm concerned that our industry is increasingly being driven by the false prophets of Process and Methodology. This leads to simple-minded solutions to complex software development problems, solutions that can cause far more trouble than they resolve. To counteract this trend, developers must become more proactive in shaping and improving the software processes they perform. However, this topic is straying too far from C++. Therefore, next month's Editor's Corner will focus on tips for building C++ frameworks that are portable across different OS platforms and compilers.
I'd like to thank Jack Reeves, Tim Harrison, Prashant Jain, and Irfan Pyarali for their comments on this editorial.
 F. P. Brooks, The Mythical Man-Month, Addison-Wesley, Reading, MA, 1975.