This article was originally published in the Fall 2003 issue of Methods & Tools
Quantified Objectives: A Fundamental Advance in Managing Software Development Projects
Stuart James Woodward, stuart @ capricorn14.com
DoubleHelix Software & Services Limited, http://www.doublehelix.co.uk/
Synopsis
Evolutionary techniques are increasingly the default choice for executing software development projects. They have demonstrable advantages over conventional methods. Problems still arise, though, when projects are planned and monitored, and requirements are specified purely on the basis of functionality.
Tom Gilb has argued the case for quantitative objectives and use of evolutionary methods over many years.
This paper supports the use of Planguage (© Tom Gilb), an advanced planning language and set of methods. By augmenting Evolutionary Project Management with further Planguage methods, the author proposes that a shift of focus from functional to performance requirements will result in a fundamental advance in project management.
Introduction
A Brief Overview of Evolutionary Project Management
Evolutionary Project Management, or Evo, is a project management process that delivers evolutionary ‘high-value-first’ progress towards the desired goals, seeking to obtain, and use, realistic, early feedback [GILB04]. The system is therefore built in steps, enabling the controlled modification of plans and schedules in line with emerging and maturing requirements. In Evo the focus is always on delivering results to the stakeholders, so that at the beginning of each step any aspect of the project, including its plans, requirements and architecture, is subject to alteration if it is estimated that doing so will benefit those stakeholders.
This contrasts with the conventional Waterfall approach that is based upon a fixed plan and set of requirements and the completion of a sequence of individual processes, the entire sequence leading to a single delivery of a completed system.
A delivery sequence on an Evo project
A delivery sequence on a Waterfall project
In a previous paper, "Evolutionary Project Management" [WOODWARD99], I compared two projects that had intended to deliver the same product, one using conventional Waterfall delivery and one using Evo. The latter project, "OMAR", which I managed between 1996 and 1998, was my first major step into Evo.
Result Metrics from both projects led me to the conclusion that Evo projects deliver more functionality from fewer resources. However, I was aware that some aspects of the OMAR project could have been better even though, at that time, I did not have the knowledge or experience of the detailed methods that could have helped. Accordingly, since the end of that project I have sought to extend and improve my project management by the utilisation, where possible, of new methods.
Evo, as described in Principles of Software Engineering Management [GILB88] by Tom Gilb, had guided my management of OMAR. I have been influenced and encouraged further by Gilb’s work and have attempted to make use of Planguage, an advanced planning language and set of methods for use in Evolutionary Project Management, Requirement Specification, Design Evaluation and Quality Control. Planguage is described in the soon-to-be published Competitive Engineering [GILB04].
The CRM Project
The OMAR project had been fortunate in being shielded from ‘real-world’ users; it took place within a software house. The project was driven, and changed direction during its execution, by the known and market-researched functional requirements of a new product. Although it would serve existing business processes, OMAR itself had no existing system to replace. We had no access, in the early stages, to organisations where the system would later be used.
On a subsequent assignment I worked at an end-user financial organisation on a project named, for the purposes of this paper and confidentiality, CRM (Credit Risk Management). Unlike OMAR, CRM implemented into software the business processes that already existed in the organisation. In other words, the functional requirements of the system already existed in another form.
With this in mind I thought about the following questions:
These fundamental and powerful questions are not necessarily easy to answer. I found it difficult to find answers on the CRM project owing to the way in which the stakeholders insisted on stating the project requirements and the way in which senior management expected the project to be planned and monitored. My experience on CRM led me to understand that these areas of concern require serious attention in order to improve the management of projects. In particular, I came to the conclusion that when requirements are stated only in terms of functionality, then there is too much subjective decision making by the project’s stakeholders.
Note: In this paper I use the term functionality to mean, "what a system does" or "a system’s functions". Likewise, I use the term performance levels to mean, "how well the functions are carried out".
Problems with Functionality
Had the Requirements Been Met?
The CRM software development project required authorised funding in order to proceed. A committee of senior stakeholders agreed the budget requests. The budget requests were justified by specifying a list of high-level functions to be delivered. For example, a stated requirement was: "the system requires a Credit Approval Process". After the initial phase of the project, the success of each subsequent budget request depended on whether the deliverables specified under the previous budget had actually been made, i.e. the request had to state that the project had progressed successfully towards its objective of delivering the previously specified functions.
I was never able to state simply that this had in fact occurred. The high-level functional requirements were broken down into a long list of sub-functions. Under the Credit Approval Process (CAP), for example, were thirty-seven such functional requirement statements. One of these was as follows:
CRM shall initially support the following types of request:
At the end of the project phase in which the CAP was implemented, only four of the above types, plus two others that materialised during the design process, had been delivered. When it came to the meeting to discuss the next budget request there was disagreement between the stakeholders on a number of points: on whether the CAP had been implemented satisfactorily, on which types were more important, and on whether the remainder should be deferred or had to be included in the next budget request along with a new list of high-level functions.
CAP was designed to process Credit Requests. Analysis suggested that the capacity of the software to process Credit Requests had risen from 0% to 90% of those that had required processing during the same period. So, although considerable benefit had been brought to the organisation by the enhancement of the system, there was disagreement on whether the originally stated requirement, "the system requires a Credit Approval Process", had been met because not every possible type had been implemented.
Owing to the insistence on reporting in terms of completed functions, the committee felt uneasy about reporting to senior management that CAP had been implemented. It was not even a matter of whether we had used a simple count of sub-functions or a suitable functional sizing metric; there would still have been disagreement. Yet the system did contain a usable CAP catering for 90% of the requests actually raised.
As the Project Manager, I was always under pressure to report progress in simplistic terms of completed functions rather than the enhanced system performance levels. It was therefore difficult to report progress in terms that were universally accepted and the decision by project stakeholders on whether to complete a specific area of functionality before beginning another was always subjective.
What were the Underlying Objectives?
It can be seen that the stated functional requirements were hiding the underlying objectives. For example, during requirements analysis, it became clear that another objective was "to speed up the handling of credit requests", (this is deliberately imprecise at this point). This was never actually documented but materialised as an assumption as to why we were implementing a Credit Approval Process in the software system. Stating the requirement for a Credit Approval Process did not guarantee that the project would necessarily address this underlying objective and gave no method for proving that it had done so.
Requirements or Designs?
The stated sub-functions also became confused with design ideas. For example, one was "The notification shall comprise a Credit Approval Form to be used as the critical evidence of approval and which provides the basis of information for the approval authorities to make their decisions".
Stating a requirement as such proved less than constructive because alternatives existed and were explored. The underlying requirement appeared to be along the lines of "to facilitate the approval process, the system must be able to assemble quickly all related credit information for review", (again deliberately imprecise here).
Considerable detail inevitably emerged from the function statements as part of the implementation process, which included feedback and exploration of alternatives by the users. This is normal but was impossible to predict at the time of a budget request. It was therefore impossible to provide a reasonable estimate of the implementation work and corresponding budget on the basis of the stated requirements, which turned out to be design options that were later replaced by others.
It is important not to make the common mistake of assuming that technical design is functionality. For example, ‘use a password’ is not necessarily a functional requirement; it is one contributory technical design or solution to a requirement for system security, say, which could be specified in measurable terms.
Iterative Development, Maintenance, or Scope Creep?
A related problem arose when a function that had already been implemented required adjustment because it did not work as the users had intended (but were unable originally to specify in functional terms). Often there was the temptation to add more detail or enhance or perfect specific functions. Of course, it was my responsibility as Project Manager to prevent the scope of the requirements from running out of control, but it was often difficult to know exactly when to stop the design or implementation process and move on. The decision on when a function is considered completely implemented can be subjective in the absence of other measurements of progress.
I therefore suggest the following: When stating a functional requirement, consider whether alternatives exist. If they do then it might be a design idea to meet some underlying requirement. Try to understand and specify the underlying requirement.
Monitoring on the Basis of Functionality
Monitoring progress purely on the basis of functionality caused further difficulties. Typically, to satisfy the committee, the schedule specified the expected dates of delivery of the high-level functions that were accepted as the project’s requirements. The project’s objective was then to deliver those functions according to the schedule, which represented the time-scale of when the budget began and ended.
Problems occurred when a dependent activity did not occur, or when a dependent resource was not available, at the right time. It was invariably not possible to bring forward tasks planned for later in the schedule in order to replace the delayed ones, and so manpower was wasted on unnecessary tasks. Monitoring purely on the basis of functionality was wasting resources. The need to completely finish specific functional areas also caused the project schedule to break down. It was impossible to anticipate the fine detail of the full functional requirements at the outset; many sub-functions were not identified until late on.
Also, new design ideas emerged that competed for the available resources. The result was that the time taken to implement each functional area was always longer than anticipated. The insistence on monitoring purely on the basis of functionality was losing the project valuable time. It therefore became impossible to meet the key (functional) requirements upon which the project was authorised. Constant rescheduling was fruitless; it became a vicious circle of subjective reassessment.
The Implication of the Problems Experienced
Project success should not be judged purely on which functions are implemented; an alternative basis needs to be considered, and the reporting of plans and progress to senior management must also reflect that alternative method.
A Shift of Focus to Performance Requirements
Functional Requirements
Just as on OMAR, and in common with nearly all requirements specifications and statements today, the CRM project requirements were concerned with what are called Functional Requirements.
The difference was that, in common with most cases of implementing software systems that aid business functions in commercial organisations, we were not building something totally unique or even new within the organisation. We were implementing business functions within software systems in order to improve those business functions, and the functioning, or performance, of the organisation as a whole. This alone provided me with the reason why it was not only the functionality that should have been the project driver; it was the desired performance levels of the existing functionality that should have determined the justification of the project, its planning and how it should have been monitored.
Occasionally on projects, (though not on CRM), there are specifications that make mention of what are often called Non-Functional Requirements. Some development methodologies even use this exact term although they rarely, if ever, tell you how to specify them in a way that can later be verified.
The term Non-Functional Requirements is somewhat perplexing and, frankly, I propose that we banish the term. We must seek to specify requirements in terms of ‘what they are’, not ‘what they are not’, else how can we ever test if the intended requirement has been met? Non-Functional suggests that these ‘other’ requirements are somewhat less important than or secondary to the functional requirements, perhaps because they are usually described in long, textual documents.
However, these Performance Requirements (to use a more accurate term), are rarely specified well enough, if at all, and rarely tested against when the systems are built. Most importantly, they are usually the real reasons why software development projects come into being in the first place: they are the key objectives.
Performance Requirements
Performance requirements can always be specified precisely as "desired future end states: testable and measurable" [GILB97]. It is, therefore, these performance requirements which can determine the success or otherwise of our projects.
However, they are not necessarily easy to identify. To find them, it is worth considering the primary objectives of the organisation and then how these objectives can be reached. If you can answer this question then you may be able to identify your key project objectives.
The Identification of Performance Requirements
One possible way is to think of the general characteristics of these objectives. For example, you may wish to make better X, save Y, or do more of Z. Put more formally, performance requirements might be thought of as broken down into the three following types: Quality, Resource Saving, Workload Capacity.
A useful sub-division of Performance Requirements (picture adapted from [GILB04])
This categorisation of specific requirements could depend on the perspective of the stakeholders. It is a first level decomposition, a convenient sub-division, and ultimately will not matter when the performance requirements have been fully specified. Therefore, we might consider that the gist of the underlying requirements from CRM’s Credit Approval Process might have included:
Again, these are deliberately defined in imprecise terms here (but will be specified clearly later in this paper).
The question for the CRM project then, should have been whether it could have improved on the existing, manual system by meeting these underlying requirements. If it could not have done then, in the absence of other requirements, there could have been no economic reason for having proceeded with the project; it may have been nothing more than a "nice to have".
But how could we have known this before the project proceeded and how can we know likewise for future projects?
Key Stakeholder Objectives
We must learn to focus on key stakeholder objectives. These include the system’s functionality, of course, but it is the performance requirements that are usually the key project drivers.
We need methods that:
Using Planguage to Manage Quantified Objectives
Requirements Specification
The aim of any project is to meet its requirements. To do so we need to use the resources that are available in order to meet the functional and performance requirements. This is an obvious statement but project managers often feel the pressure from senior management in organisations demanding more functions and higher performance levels from the same limited resources.
We therefore specify performance and resource requirements as well as functional requirements. The project manager can then prioritise the delivery steps based on their relative Return on Investment.
Prioritisation is evaluated by calculating the performance to cost ratios for each design idea proposed that contributes towards meeting the requirements. To do this, an estimate must be made of the extent to which each design idea will meet each requirement i.e. its impact. The performance to cost ratios can then be compared in order to ascertain which design provides the best value for money or Return on Investment.
Once the functional, performance and resource requirements are specified and the relative performance to cost ratios calculated, they can be used to plan and monitor the project. This explains the inextricable link between requirements, and project planning and monitoring.
At any point in the project, a reassessment of the requirements and the design impacts will allow the project manager to re-prioritise the delivery steps. This is a key advantage of Evolutionary Project Management.
The Specification of Functional Requirements
Functions that represent the functional requirements are either present or not present in a system. By function we mean something that is an essential part of the system. In this respect they are binary i.e. there can be no degrees to which a function has been implemented or not. For the purposes of this paper we are not interested further in the specification of functional requirements, merely that functions are to be implemented or enhanced in the system. We are specifically interested in the performance attributes of the system.
The Specification of Performance Requirements
In contrast to functions, performance levels are scalar in nature i.e. there are degrees of performance. To illustrate this in a simple form, we shall specify those identified underlying requirements of the Credit Approval Process (CAP) within the CRM system.
When considering performance levels we consider the key requirements that we wish to address and then provide an unambiguous and testable specification for each one. For CAP, we shall consider that we wish to implement a system that will improve the following:
Note that we have still not provided specifications at this point.
There may well have been other key requirements, of course, but for the sake of this discussion we shall assume that these were the only requirements of interest. Even if others were deemed more important than these, it does not matter; the principle remains the same. For each requirement we then provide a measurement specification so that we can state the system’s current level, specify its desired level and test its level in the future.
This way we can determine the degree to which the requirement has been met by any solution that is implemented as part of the system. At the very least, by appropriate tests we can publish the current state of the system’s key performance attributes.
Here are the proposed performance requirement specifications:
Credit Information Response
Ambition: At least an order of magnitude reduction in the average time taken to assemble all related Credit Information in order for a Credit Request to be processed at each state in the Credit Request Life-Cycle.
Scale: The average time taken in minutes to assemble all related Credit Information in order for a Credit Request to be processed at each state in the Credit Request Life-Cycle.
Meter: Timings will be obtained automatically from the CRM system upon invocation of each function that assembles all related Credit Information for each Credit Request.
Past [Manual System, December 2002]: 60 <- Guess based on hearsay.
Goal [December 2003]: 2 <- CRM Project Proposal [May 2001].
Goal [December 2004]: 0.5 <- CRM Project Proposal [May 2001].
Credit Request Life-Cycle: Defined As: The full collection of possible states in which a Credit Request can be.
Credit Information: Defined As: For example, for the Counterparty specified on the Credit Request, its associated set of limits and exposure information, its rating information, rationale for the Credit Request, similar information for related Counterparties <and other data to be specified>.

Credit Request Cycle
Ambition: To sharply reduce the average time taken to handle a Credit Request.
Scale: The average time in hours it takes for a Credit Request to change state from an Initial State to a Terminal State after being processed by Trained Users.
Meter: The timing data will be available from the audit trail of all Credit Requests in the CRM system.
Past [Manual System, December 2002]: 60 <- Data from paper Credit Request documents [January-December 2002].
Goal [December 2003]: 48 <- CRM Project Proposal [May 2001].
Goal [December 2004]: 24 <- CRM Project Proposal [May 2001].
Initial State: Defined As: The state of "Open" of a Credit Request.
Trained User: Defined As: A user of the system that has received the necessary instruction in CRM system use in order to perform his/her role correctly.

Credit Request Capacity
Ambition: A large increase in the number of Open Credit Requests that can be handled by the system simultaneously.
Scale: Number of Open Credit Requests that can be handled simultaneously.
Meter: The count of Open Credit Requests will be provided as a system function, and the result will be available within 60 seconds of request.
Past [Manual System, December 2002]: 5 <- A count of dated paper documents.
Goal [December 2003]: 100 <- CRM Project Proposal [May 2001].
Goal [December 2004]: 500 <- CRM Project Proposal [May 2001].
Open Credit Request: Defined As: A Credit Request that has been entered into the system and is not in a Terminal State.
Credit Request: Defined As: A formal document requesting authority to trade with a specified counterparty up to a maximum specified monetary amount.
Terminal State: Defined As: One state of {"Withdrawn", "Rejected", "Processed to Completion"} of a Credit Request.
These specifications are written in Gilb’s Planguage. There are six essential parts to each specification: a Tag (the requirement’s name, e.g. Credit Request Capacity), an Ambition, a Scale, a Meter, at least one benchmark (the Past parameter) and at least one target (the Goal parameter).
There are many other possible elements and subtleties in Planguage specifications, too, but their explanations are not needed here.
The specifications can include definitions of terms that require explanation in order to fully understand the performance requirements. Defined terms, including the performance requirements that have been specified, are underlined in this paper for emphasis.
Qualifiers in square brackets ‘[ ]’ add detail in terms of places, times, or events. An example of this from a Past parameter in the above specifications is [Manual System, December 2002].
So, for example, the above specification for Credit Request Capacity states that 5 dated paper documents from the Manual System were counted in December 2002. The requirement is for the new system to handle 100 Open Credit Requests simultaneously by December 2003 and 500 by December 2004. These unambiguous statements specify the performance requirements for Credit Request Capacity.
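To make the structure concrete, here is one possible way of capturing such a specification as data. This is purely an illustrative sketch of mine, written in Python; it is not part of Planguage and not how the CRM project recorded its specifications. The field names simply mirror the Planguage parameters used above.

    # A minimal, hypothetical representation of the Credit Request Capacity
    # requirement. Qualifiers such as [Manual System, December 2002] become keys.
    credit_request_capacity = {
        "tag": "Credit Request Capacity",
        "ambition": "A large increase in the number of Open Credit Requests "
                    "that can be handled by the system simultaneously.",
        "scale": "Number of Open Credit Requests that can be handled simultaneously",
        "meter": "Count of Open Credit Requests provided as a system function",
        "past": {("Manual System", "December 2002"): 5},             # benchmark
        "goal": {("December 2003",): 100, ("December 2004",): 500},  # targets
    }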
Note that it is legitimate for the specified performance levels to be best guesses, if no more objective information is available from any other source. The left-arrow symbol ‘<-’ provides sources for any information stated in the specification. Sources can be useful to verify anything stated. ‘Real’ or more objective numbers might be found by testing. Targets can be set by agreement between the stakeholders on their desired requirements.
From these specifications it can be seen that to test and report the degree to which the performance requirements are met is straightforward. For example, if solution A really lowers the measurement of Credit Request Cycle from 60 to 54 then it has met the planned requirement for December 2003 by 50%; if it improves from 60 to 51 then it has met this same requirement by 75%.
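A minimal sketch of that calculation (the function name and code are mine, for illustration only), using the Past and Goal levels from the Credit Request Cycle specification above:

    def percent_met(past, goal, measured):
        """Improvement achieved, as a percentage of the improvement required
        to move from the Past (benchmark) level to the Goal (target) level."""
        return 100.0 * (past - measured) / (past - goal)

    # Credit Request Cycle: Past = 60 hours, Goal [December 2003] = 48 hours
    print(percent_met(60, 48, 54))  # 50.0 -> solution A meets the 2003 goal by 50%
    print(percent_met(60, 48, 51))  # 75.0 -> meets the same goal by 75%

The same formula also works for requirements where a higher level is better, such as Credit Request Capacity, because both differences simply change sign.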
By using these specifications and measurements, multiple requirements can be managed simultaneously. This will be shown below.
The Specification of Resource Requirements
Resources are always limited. We are all concerned about the limited time, money, manpower and other resources available to our projects. Usually these are specified at the outset of a project or each of its phases. Resources are ‘used up’ at varying rates during the course of a project. They are, therefore, scalar in nature and should be specified as such, in a way similar to performance requirements. When planning a project, we must specify how much of each resource we plan to use in order to meet the performance requirements.
For the sake of this discussion we shall assume that the only resources of interest are Project Duration and Manpower Cost. In general, resource specification would take into account everything that has to be paid for in order to execute the project.
Here are the proposed resource requirement specifications:
Project Duration
Gist: The step-wise development or enhancement of functions needs to be balanced over the expected calendar period of the project to prove that value is being delivered to stakeholders. No assumption is made here of how this is done, so an overall expected duration is allocated.
Scale: Number of days.
Meter: Project records.
Survival [Final Deadline]: 360 "Project must not exceed one calendar year [rounded to 30 days per month]".
Budget [To meet all specified Functional and Performance Requirements]: 300 "Project is allocated ten calendar months [rounded to 30 days per month]" <- Estimated requirement from CRM Project Plan [December 2002].

Manpower Cost
Gist: Five available personnel will be allocated for the lifetime of this project at an average cost of <X>. The cost of manpower therefore equates to the number of days worked on the project by those personnel.
Scale: Number of man-days.
Meter: Project records.
Survival [Final Deadline]: 1000 "Project must not exceed the total number of allocated man-days [limited to 200 days per person per calendar year pro-rata]".
Budget: 750 "Project allocation [limited to 200 days per person per calendar year pro-rata]" <- Estimated requirement from CRM Project Plan [December 2002].
The Resource Target, the resource specification’s equivalent to a performance target, is in the form of the Budget parameter. This represents a level of the resource within which the project will commit to staying. Again, there are other possible parameters but they are not needed for the purposes of this paper.
In reality, projects often overrun on time and cost (an aspect of the Software Crisis described in numerous articles). Therefore, to avoid the organisation itself having to commit to an endless project, these resource requirements each accommodate a constraint in the form of the Survival parameter. In this case it specifies the longest duration and highest cost in man-days acceptable for the project. Beyond these, and in the absence of a further commitment of resources, the project would be stopped (because there are simply no more resources available to commit).
There are no Benchmarks here but they could be stated as useful comparisons with previous projects.
From these specifications it can be seen that to test and report the degree to which the resources are used up is straightforward. For example, if solution A takes 75 days to implement then it has consumed 25% of the budgeted Project Duration. If solution A also costs 150 man-days of effort then it has consumed 20% of the budgeted Manpower Cost.
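The corresponding resource arithmetic is equally simple; a sketch (again mine, illustrative only) using the budget levels specified above:

    def percent_consumed(used, budget):
        """Resource used so far, as a percentage of the budgeted level."""
        return 100.0 * used / budget

    print(percent_consumed(75, 300))   # 25.0 -> Project Duration, in days
    print(percent_consumed(150, 750))  # 20.0 -> Manpower Cost, in man-days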
Planning and Monitoring the Project
One of the key tasks in planning a software development project is to determine the schedule or sequence of implementation. Once the project is underway, monitoring determines the degree to which the performance requirements have been met. Evolutionary Project Management then allows re-planning at any appropriate point in the project.
Planning The Sequence of Implementation
A key characteristic of Evolutionary Project Management is that it enables the project manager to re-plan at the beginning of every step in order to maximise the Return on Investment (ROI) to stakeholders. The highest ROI is determined by calculating the ratio of performance to cost for each proposed design. In terms of the requirements, this is the ratio of the increase in performance levels to the resources used in realising those performance level increases. We must seek to implement those designs with the highest ratios of performance to cost first, thus maximising the ROI at each development step. This then will determine the sequence of implementation. To do this we must sum the estimated performance level increases or Performance Impacts and separately sum the estimated resource utilisations for each design.
Owing to the fact that, in general, each Performance Requirement will be specified using different scales of measure, we cannot simply add up the estimated impacts on those scales (you cannot add apples to oranges and expect to get a useful answer!). However, by ‘normalising’ the estimated impacts as Percentage Impacts we can then add them together. A percentage impact is expressed as a percentage of the required performance level increase. We arrive at a total percentage impact on the system for the implementation of each design. We then do the same for the resource requirements. Then we are able to calculate the performance to cost ratio for each design and determine which one is estimated to provide the highest ROI. This result is established by using a Planguage device called an Impact Estimation Table.
Impact Estimation (IE)
| Proposed Design Ideas → | Sum of Estimates | CAP Foundation | Upgraded Data Model | API | Risk Monitoring | CAP Groups | Counterparty Hierarchies |
|---|---|---|---|---|---|---|---|
| PERFORMANCE REQUIREMENTS | | | | | | | |
| Credit Information Response 60 mins. <-> 2 mins. [2003] | 105% | 15% | 30% | | 25% | | 35% |
| Credit Request Cycle 60 hours <-> 48 hours [2003] | 95% | 40% | 15% | 25% | 15% | | |
| Credit Request Capacity 5 <-> 100 [2003] | 85% | 40% | 5% | 25% | | 15% | |
| RESOURCE REQUIREMENTS | | | | | | | |
| Project Duration 0 <-> 300 | 86% | 12% | 10% | 8% | 16% | 10% | 30% |
| Manpower Cost 0 <-> 750 | 77% | 10% | 12% | 15% | 5% | 10% | 25% |
| OVERALL IMPACT | | | | | | | |
| Total Performance Level Increase | 285% | 95% | 50% | 50% | 40% | 15% | 35% |
| Total Cost | 163% | 22% | 22% | 23% | 21% | 20% | 55% |
| PERFORMANCE / COST RATIO | | 4.32 | 2.27 | 2.17 | 1.90 | 0.75 | 0.64 |
In the table above the performance and resource requirements are listed vertically down the left hand side. With each is specified a Baseline<->Target pair, the chosen benchmark and performance or resource target as appropriate, against which the percentage impact figures are estimated.
Along the horizontal axis at the top are the proposed design ideas to be implemented. A percentage impact figure is entered for each design against each requirement. Empty table cells mean that there is 0% impact e.g. it is estimated that the implementation of the API will have zero impact on the Credit Information Response performance requirement. Please note that the figures in the table are for demonstration purposes only.
The bottom line lists the performance to cost ratios for each design. I have deliberately listed the designs in order of decreasing performance to cost ratio to demonstrate the point that this determines the sequence of implementation.
As with other Planguage concepts in this paper, there are many other points that are not directly relevant to the argument being made. For example, this sequence will also have other dependencies such as the availability of resources at specific times. This is not factored into the IE table whose purpose here is to give a relative assessment of ROI.
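To make the arithmetic behind the table explicit, the following sketch (plain Python, written by me for illustration only) sums the normalised impacts for each design and derives the performance to cost ratios used to order the steps. The percentages are those from the table above, with empty cells entered as zero.

    # (performance impacts, resource impacts) per design, as percentages of the
    # required performance increases and of the resource budgets respectively.
    designs = {
        "CAP Foundation":           ([15, 40, 40], [12, 10]),
        "Upgraded Data Model":      ([30, 15,  5], [10, 12]),
        "API":                      ([ 0, 25, 25], [ 8, 15]),
        "Risk Monitoring":          ([25, 15,  0], [16,  5]),
        "CAP Groups":               ([ 0,  0, 15], [10, 10]),
        "Counterparty Hierarchies": ([35,  0,  0], [30, 25]),
    }

    def ratio(performance, cost):
        return sum(performance) / sum(cost)

    for name, (performance, cost) in sorted(
            designs.items(), key=lambda item: ratio(*item[1]), reverse=True):
        print(f"{name:26} {sum(performance):3}% / {sum(cost):3}% = "
              f"{ratio(performance, cost):.2f}")
    # CAP Foundation             95% /  22% = 4.32  ... down to
    # Counterparty Hierarchies   35% /  55% = 0.64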
Simultaneous Management of All Requirements
It is perfectly possible that a specific design could have a negative impact on one or more requirements. For example a design might compromise one requirement in favour of others. It is the total impact on all the requirements that is important in determining the performance to cost. We must seek to manage all the key requirements otherwise it is inevitable that one or more will be disregarded or considered only subjectively during project planning. The IE table gives us a method of objectively considering them all, simultaneously.
The Sum of Estimates column in the table enables us to see the total impact of implementing all proposed designs in the project. An alternative to this is to have a cumulative column after each design itself. From the table above we can see that it is estimated that, by implementing all the designs listed, the required performance level increase for Credit Request Capacity will be met to a degree of 85%, for Credit Request Cycle to a degree of 95%, and for Credit Information Response to a degree of 105%. In this latter case it means that implementing all the designs is expected to exceed the required performance level increase for Credit Information Response, again a perfectly possible result!
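A sketch of that cumulative alternative (again mine, using the same illustrative figures) simply keeps a running total per performance requirement as each design is delivered:

    requirements = ["Credit Information Response", "Credit Request Cycle",
                    "Credit Request Capacity"]
    # Percentage impacts per design, in the planned order of implementation.
    steps = [
        ("CAP Foundation",           [15, 40, 40]),
        ("Upgraded Data Model",      [30, 15,  5]),
        ("API",                      [ 0, 25, 25]),
        ("Risk Monitoring",          [25, 15,  0]),
        ("CAP Groups",               [ 0,  0, 15]),
        ("Counterparty Hierarchies", [35,  0,  0]),
    ]

    cumulative = [0, 0, 0]
    for design, impacts in steps:
        cumulative = [c + i for c, i in zip(cumulative, impacts)]
        print(design, dict(zip(requirements, cumulative)))
    # After the final step: Response 105%, Cycle 95%, Capacity 85%,
    # which is exactly the Sum of Estimates column.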
Monitoring the Project
At the start of the project, the percentage impact figures in the IE table are estimates. It is not the purpose of this paper to discuss how the estimated figures are established but, obviously, with every step still in the future some method of assessment must be used.
As each step is completed, the estimates should be replaced with result metrics in order that the project manager can truthfully report the progress towards meeting the requirements. Hence, Evo insists that the project includes frequent measurements that provide feedback to the estimation process for each step. Even if you start off with guesses, you will find that you soon begin to achieve good estimates through frequent practice. This pays dividends because the estimates that feed the IE table after each step help to improve the probability of the plan being accurate, i.e. determining the next best step in terms of the highest performance to cost ratio.
Other Uses for Impact Estimation Tables
The example shown uses an IE table to determine a sequence of implementation of designs for different functions.
The method can also be used to objectively compare alternative designs applied to the same functionality. We use the design process to get maximum value for money from our technology. We clearly separate design and consequent value delivery from system business function. Too many so-called ‘function requirements’ are in fact really technical design. In those cases, you are ‘requiring’ something that is not necessarily the best design – the best value for money. You have to go through the discipline of estimating (and later, of actually measuring that you got what you expected) what performance (quality, work capacity, savings) you expect from your design, and what costs you are expecting. This design impact estimation discipline makes you ‘look before you leap’. It makes you bring out clearly whatever you are expecting. It gives you a proper basis for submitting a design to review.
This may include objective assessments for justifying the development of specific technical structures within the system that do not provide any immediate, visible or obvious performance increase to the end users, for example techniques leading to portability, maintainability and security. These are performance characteristics that are of interest to system stakeholders other than the end users.
You must, of course, allow for dependencies between designs, which always exist i.e. design X must be carried out before design Y can actually be done. By proceeding with the design ideas that give the highest performance to cost ratio you can be sure that you have the best chance of delivering the most cost effective performance increases as early as possible in your project. This method of project planning, or Requirements Driven Management, lends itself perfectly to evolutionary delivery of requirements.
A reassessment of each design idea’s real impacts, at the end of each delivery step, allows the project to change tack if warranted, in a controlled fashion. This is a central characteristic of Evolutionary Project Management.
Scaling Up
Planguage allows the management of multiple requirements simultaneously. The example provides a specification of only three performance requirements. Typically there are lots of key requirements depending on the scale and scope of the project. Managing multiple simultaneous objectives within a project is now possible by means of Impact Estimation, which helps determine which designs provide the most overall performance, compared to the costs of implementing those designs. It scales up, without alteration, to handle the likely numerical limit of your key requirements. Gilb suggests that this is in the range of 5-20. Keeping track of any higher number is less likely to be useful or meaningful. Once you get the hang of identifying your key performance indicators, then you will be able to naturally figure out which are the most important to your organisation or business.
Conclusions
The Use of Planguage on CRM
I have to report that the reaction I received when I presented an Impact Estimation table as part of a project plan for CRM was one of bewilderment! I believe that this was because of the specific circumstances: it was the first time anyone else on the committee had ever seen anything like it, and I attempted both to educate and to produce a real plan at the same time. This may have been a mistake. Also, I had introduced it some time into the lifetime of the project rather than at, or close to, its beginning. As a result, it was extremely difficult to alter the collective mindset of the project’s key stakeholders. So, ultimately, I failed to use it as the ‘official’ or primary method of reporting progress and specifying requirements on the CRM project.
However, because I believe the principle to be sound, I continued using the Planguage specifications and plans within the development team in order to maintain the focus of the team’s work on what we deemed to be the key project objectives. This paid off by keeping the technicians’ minds on the real objective – to deliver the ‘high-value-first’ designs in order to progress towards the perceived desired goals.
An interesting misunderstanding by some members of the committee became apparent when I presented the IE table. It was suggested that some of the requirements could be deemed more important than others. Two important points were being missed.
Firstly, the notion of priority itself was not clearly understood. Gilb’s definition of priority is that it is the "determination of the relative claim on limited resources" [GILB04]. I agree with this definition, not least because of the corollary that if resources were unlimited, then there would be no need to prioritise; all the requirements could be met immediately.
Secondly, if the requirement specifications themselves were to include any notion of priority relative to others i.e. before any design evaluation, then this could only ever be a subjective assessment. It would effectively be no different to deciding on a subjective basis which function to implement in favour of any other. If priority were subjective, then actually there would be no point to objective planning on the basis of specified requirements.
The IE method actually determines the relative claim on limited resources by calculating which design will give the highest Return on Investment. A further advantage of this method is that, instead of relying on arbitrarily fixed weightings, priority is dynamic and can be calculated at the beginning of each step. So, in fact, the IE table is itself the best method of prioritising (see [GILB02]).
Key Advantages of Planguage
Performance Requirements are not necessarily easy to determine but they usually represent the underlying objectives of projects. They are also a more concrete way of monitoring, in measurable terms, how a developing system is bringing increased performance to an organisation. Planguage methods allow clear and unambiguous specifications of these requirements. By using them, project managers can be released from the straitjacket of a Gantt chart based on specific functions, the precise detail of which can never be fully known in advance.
Project managers and stakeholders must learn that there is no need necessarily to complete every sub-function within a set budget and timescale; the focus must be on delivering increasing performance for the available budget. Specifying Performance Requirements using Planguage allows us to report these increasing performance levels in provable, measurable terms.
Using Planguage to quantify objectives delivers a fundamental advance in managing software development projects.
Acknowledgements
I am indebted to Lindsey Brodie and Tom Gilb for encouragement, criticism, and suggested changes during the production of this paper.
I also thank Tony Ruben and Chris Dale for additional comments.
Finally, this paper is dedicated to Alice Lauren Woodward for helping me in more ways than she may possibly imagine.
References
GILB88: Tom Gilb, "Principles of Software Engineering Management", Addison-Wesley, 1988.
GILB97: Tom Gilb, "Requirements Driven Management: A Planning Language", Crosstalk, Software Technology Support Centre (STSC), Department of Defense (DoD), 1997. http://www.stsc.hill.af.mil/crosstalk/1997/06/requirements.asp
GILB02: Tom Gilb, "Managing Priorities: Deadline Pressure Control", 2002. {see http://www.gilb.com}
GILB04: Tom Gilb, "Competitive Engineering", Addison-Wesley, forthcoming. {see http://www.gilb.com}
WOODWARD99: Stuart Woodward, "Evolutionary Project Management", IEEE Computer, pp. 49-57, October 1999. {available from http://www.capricorn14.freeserve.co.uk/download.htm}