Methods & Tools Software Development Magazine


This article was originally published in the Summer 2006 issue of Methods & Tools

Project Failure Prevention: 10 Principles for Project Control

Tom Gilb

Copyright © 2005 by Tom Gilb. Published and used by Methods & Tools with permission.

Abstract: It is now well-known and well-documented that far too many projects fail totally or partially, both in engineering generally (Morris 1998) and in software engineering (Neill and Laplante 2003). I think everybody has some opinions about this. I do too, and in this paper I offer some of mine, hoping to lend some originality to the discussion. Based on my decades of experience as an international consultant, involved in a wide range of projects and in saving many ‘almost failed’ projects, my basic premises in this paper are as follows:

  • We specify our requirements unclearly;
  • We do not focus enough on ensuring that the system design meets the requirements.


The principles for project control can be summarized by a set of ten principles, as follows:

P1: CRITICAL MEASURES: The critical few product objectives (performance requirements) of the project need to be stated measurably.

P2: PAY FOR RESULTS: The project team must be rewarded to the degree they achieve the critical product objectives.

P3: ARCHITECTURE FOR QUALITY: There must be a top-level architecture process that focuses on finding and specifying appropriate design strategies for enabling the critical product objectives (that is, the performance requirements’ levels) to be met on time.

P4: CLEAR SPECIFICATIONS: Project specifications should not be polluted with dozens of defects per page; there needs to be specification quality control (SQC) with an exit condition set that there should be less than one remaining major defect per page.

P5: DESIGN MUST MEET THE BUSINESS NEEDS: Design review must be based on a ‘clean’ specification, and should be focused on whether the designs meet the business needs.

P6: VALIDATE STRATEGIES EARLY: The high-risk strategies need to be validated early, or swapped with better ones.

P7: RESOURCES FOR DESIGNS: Adequate resources need to be allocated to deliver the design strategies.

P8: EARLY VALUE DELIVERY: The stakeholder value should be delivered early and continuously. Then, if you run out of resource unexpectedly, proven value should already have been delivered.

P9: AVOID UNNECESSARY DESIGN CONSTRAINTS: The requirements should not include unnecessary constraints that might impact on the delivery of performance and consequent value.

P10: VALUE BEFORE BUREAUCRACY: The project should be free to give priority to value delivery, and not be constrained by well-intended processes and standards.


P1: CRITICAL MEASURES: The critical few product objectives (performance requirements) of the project need to be stated measurably.

The major reason for project investment is always to reach certain levels of product performance. ‘Performance’ as used here, defines how good the system function is. It includes:

  • Qualities - how well the system performs;
  • Resource Savings - how cost-effective the system is compared to alternatives such as competitors or older systems;
  • Workload Capacity - how much work the system can do.

Figure 1. The ‘product’ of a project needs to attain a number of critical performance requirements. Serious project control necessitates clear agreement about the set of performance levels. The project can then focus on delivering these levels within the available resources.

In practice, you need to be concerned with the 5 to 20 ‘most critical’ product performance requirements (for an example, see Figure 1). These are the critical dimensions that determine whether a project has been a success or a failure, in terms of the product produced by that project. I choose to make a clear distinction between the project characteristics (like team spirit and budget overrun) and the project’s product characteristics, and to focus here on the product characteristics as the decisive success-or-failure concepts. I am not concerned with ‘the operation was a success, but the patient died’ view of systems engineering.

I observe, in project after project, that I almost never see what I would call a well-written set of top-level requirements. The problems I perceive include:

  • The critical product characteristics are often not clearly identified at all;
  • They are often identified only in terms of some proposed design (like ‘graceful file degradation’ to quote a recent one) to achieve requirements (rather than ‘file availability’, a requirement area);
  • They are often pitched at an inappropriately technical level (‘modularity’ rather than ‘flexibility’);
  • When they are identified they are often specified in terms of ‘nice words’ (for example, ‘state-of-the-art security’) rather than a quantified engineering specification (such as ‘99.98% reliability’);
  • Even when some quantification is given, it often lacks sufficient detail and variety to give engineering control: for instance, including the short-term goals, not just the final goals, and including the different goals for the variety of stakeholders, not just the implied system user. I usually see no explicit statement of the rationale for the performance levels specified.

If the critical success factors for the project’s output are not well specified, then it does not matter how good any consequent process of design, quality control, or project management is. They cannot succeed in helping us meet our primary product requirements. See Figure 2 for an example of a quantitative specification of a performance requirement. This is the level of detail that I consider appropriate.

Requirement Tag: Interoperability:

Interoperability: defined as: The ability of two or more IS, or the subcomponents of such systems, to exchange information and services, and to make intelligent use of the information that has been exchanged <- JSP.

Vision: The system shall make business application data visible across the boundaries of component sub-systems <- SRS 2.2.7.

Source: SRS Product ID [S.01.18, 2.2.7].

Version: October 2, 2001 11:29.

Owner: Mo Cooper.

Ambition: Radically much better Interoperability than previous systems.

Scale: Seconds from initiation of a defined [Communication] until fully successful intended intelligent [Use] is made of it, under defined field [Conditions] using defined [Mode].

Meter [Acceptance] <A realistic range of at least 100 different types of Communication and 100 Use and 10 Conditions> <- TG.

=== Benchmarks =============== Past Levels ====================

Past [UNNICOM, 2001, Communication = Email From Formation to Unit, Use = Exercise Instructions, Conditions = Communication Links at Survival]: <infinite> seconds <- M Cxx.

Conditions: defined as: Field conditions, which might threaten successful use.

Record [DoD, 1980?, Communication = Email From Formation to Unit, Use = Exercise Instructions, Conditions = Communication Links at Survival]: 5 seconds <- ??

Trend [MoD UK, IT systems in General, 2005, Mode = {Man transporting paper copy on motorbike, or any other non-electronic mode}]: 1 hour?? <- TG.

=== Targets =================== Required Future Levels ============

Goal [DoD, 2002, Communication = Email From Formation to Unit, Use = Exercise Instructions, Conditions = Communication Links at Survival]: 10 seconds?? <- ??

Figure 2. Specifying a performance requirement using Planguage. This example is a first draft from a real project
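The structure of Figure 2 can be captured in code. The following is a minimal sketch, not Gilb's own tooling, of how a Planguage-style quantified requirement (Scale, Past benchmark, Goal target) could be represented and checked against a measured level. The class and method names here are illustrative assumptions, not part of Planguage itself.

```python
from dataclasses import dataclass

@dataclass
class PerformanceRequirement:
    """A quantified performance requirement, loosely modelled on Planguage."""
    tag: str                        # requirement identifier, e.g. 'Interoperability'
    scale: str                      # the defined scale of measure
    past: float                     # Benchmark: best-known past level
    goal: float                     # Target: required future level
    lower_is_better: bool = True    # e.g. seconds: smaller values are better

    def meets_goal(self, measured: float) -> bool:
        """Does a measured level reach the Goal level on this scale?"""
        if self.lower_is_better:
            return measured <= self.goal
        return measured >= self.goal

# The Interoperability requirement from Figure 2, approximated:
interop = PerformanceRequirement(
    tag="Interoperability",
    scale="seconds from initiation of a Communication until successful Use",
    past=float("inf"),  # Past level in Figure 2 was '<infinite> seconds'
    goal=10.0,          # Goal level in Figure 2: 10 seconds
)

print(interop.meets_goal(8.0))   # a measured 8 s meets the 10 s Goal
print(interop.meets_goal(30.0))  # a measured 30 s does not
```

The point of such a representation is that 'nice words' like ‘state-of-the-art security’ cannot be tested this way, whereas a Scale plus a Goal level can be checked mechanically against a Meter's measurement.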

