Methods & Tools Software Development Magazine


This article was originally published in the Spring 2002 issue of Methods & Tools

Assessing Readiness for (Software) Process Improvement

Hans Sassenburg, SE-CURE AG


From the middle of the eighties onwards, there has been increasing interest in the application of models and standards to support quality assurance in the software industry. In particular, the growing familiarity with the Capability Maturity Model (CMM) has led in recent years to large-scale Software Process Improvement (SPI) programs. Positive results are being achieved, but the majority of improvement programs unfortunately die a silent death. Large investments are lost and the motivation of those involved is severely tested. Case studies carried out in various European companies have revealed a number of critical factors that determine the success or failure of (Software) Process Improvement programs. Instead of evaluating these critical success factors only once at the start of a program, it is recommended to assess them periodically throughout its entire lead-time. By regularly determining where weak points exist or may be imminent, and by paying attention to these weak points in time, the probability of (S)PI programs succeeding can be substantially increased. Experiences so far in applying this method may be described as encouraging.

SPI Experiences

Working as external consultants, we have acquired a great deal of experience in various European organisations in recent years. In this respect, we have encountered many badly planned implementations of SPI programs, but also some good ones. Three cases from everyday practice are discussed, each containing a description of the following points:

  • the establishment of the SPI program;
  • the most important factors for the success or failure of the SPI program;
  • the extent to which the SPI program will ultimately lead to structural improvements.

Case A: "Solving Today’s Problems First"

The first case relates to an internationally operating industrial company. The R&D department has a matrix structure within which multidisciplinary projects are carried out. Over 250 software engineers work in the software department. At the end of 1999, our firm was called in to carry out a CMM assessment. The study was commissioned by the head of the software department, but it was found that most of the problems encountered were much more general in nature: unclear and unstable system specifications, no structured project approach, and little attention to the quality of the process and the product. Our findings and recommendations were presented to the Board and the management team. The findings were accepted, but we were asked for proof that the recommendations made would rapidly (read: today) lead to success. The report disappeared into a filing cabinet - unused. The failure factors here were:

  • Senior management did not realise the strategic importance and the added value of software in the product. They were only surprised that software always created problems and arrived too late.
  • The organisation deliberately avoided pursuing a long-term business strategy because people believed it was impossible to forecast how the market would develop in the coming years.
  • There was no willingness to invest in structural improvements of business processes: "formalisation will lead to bureaucracy and this hampers the necessary creativity".

The lack of success in starting an improvement program can in this case be attributed to senior management failing in its role.

Case B: "Quick Results"

The second case study concerns a branch of an internationally operating company. Over 200 software engineers work in the R&D department, allocated to projects in which software is by far the predominant discipline. A CMM assessment was carried out in 2000, after which we were asked to offer support in specific improvement areas in the form of consulting and workshops. The success and failure factors were:

  • Management appeared to be aware of the importance of software in the various products and released both capacity and money to achieve improvements in accordance with the CMM.
  • A separate group - the Software Engineering Process Group - was appointed to co-ordinate the SPI program. Unfortunately, in practice, there was only one person working full-time for the whole R&D department.
  • People were not sufficiently involved in determining the bottlenecks and in thinking about improvements. Proposals for improvements were mainly written by external consultants and, after a short discussion with a number of key figures, were made compulsory for the entire organisation.

Although the organisation is making substantial progress, there may be some doubt about the long-term return on the capital and effort invested in the project. The primary aim is to eliminate all the findings of the assessment as quickly as possible, at the lowest possible cost, in order to score points within the company. The organisation is also seizing every other opportunity to distinguish itself positively in the market and within the company as a whole, so that many people are becoming snowed under with extra activities. Over-working people in this way may prove counter-productive in the long term.

Case C: "Increasing Maturity as Objective"

The third case study concerns an organisation operating independently within a larger company. Various product programs are developed in different groups. In total, there are almost 100 software engineers. Stimulated by strategic statements at the highest level in the company, the organisation started a Software Process Improvement program in co-operation with our consulting firm. Senior management recognises the added value of software in the various products and creates the preconditions in which structural improvements are possible. In addition to the crucial role of management, there are other success factors:

  • A steering committee, consisting of senior management, line management and SPI co-ordinators, has been appointed, within which progress is discussed every quarter. The SPI co-ordinators play a facilitating role and line management reports on progress.
  • Formal assessments on a two-yearly basis alternate with more frequent self-assessments, as a result of which the organisation is taught to make strength/weakness analyses itself and to detect and eliminate bottlenecks.
  • A number of people are released on a full-time basis to co-ordinate the SPI activities and everyone involved is regularly sent on training courses to acquire the necessary new knowledge.

The active role of management, the organisation around the SPI program, the release and training of people, and the extensive provision of information are very clear success factors. Nevertheless, the continuity of the program is in danger. As yet, the organisation has not succeeded in quantifying goals and results and bringing them into line with the overall business objectives. As a result, people get bogged down in pursuing the wrong goals, such as "achieving CMM level 2", so that the high investments cannot be justified.

Derived Critical Success Factors

Now, what can be learned from these case studies? It may be concluded that the success of the improvement programs is not guaranteed in any of the three situations discussed; in fact, their sustainment is very doubtful. We believe that the practical situations outlined are representative of the average situation in many other organisations. It is further evident that these critical success factors are important in every improvement program that is started.

Conversely, failure factors must be eliminated. So what exactly are the critical factors for success? A more detailed analysis of the case studies discussed, as well as other practical experiences, results in the following overview.

Role of Management

The role of management is crucial in improvement programs. An understanding of the need for improvement must be translated into a clear business strategy from which concrete goals for improvements must be derived. Management undertakes to create the necessary room for investments and plays an active role in the introduction and further implementation of the improvement program.

  • Awareness

Senior and line management are aware of the need for improvement. Strength/weakness analyses have shown what the organisation's position is in the external market as well as the efficiency of the internal management of the business.

  • Business Strategy

The organisation has formulated a clear long-term strategy indicating where it aims to be within a number of years. Concrete, quantified and measurable goals for improvement have been derived from this long-term strategy.

  • Commitment

Management plays a leading, active role in the further development of the business strategy. The necessary space for drawing up, introducing and implementing an improvement program is created so that it is clearly visible for the whole organisation. The management continues to play a leading and active role throughout the implementation.

Project Organisation

Improvement programs do not become successful simply by getting off to a good start. A project team with members drawn from all levels of the organisation must be formed to monitor progress, revise priorities if necessary and perform interim measurements (or have these performed).

  • Steering Committee

The project team, or steering committee, consists of representatives of senior management, line management and the appropriate co-ordinator(s). Where possible, representatives of other parts of the organisation are also involved as listeners or advisers.

  • Progress Reviews

Progress discussions are arranged at regular intervals - e.g. every quarter - by the co-ordinator(s). During these discussions, which are chaired by senior management, every line manager reports on the progress achieved with respect to the plan, the expected progress in the coming period, and the problems and risks that have been identified. Senior management ensures that the quantified goals are actually achieved and adopts a pro-active attitude in solving identified problems and eliminating risks.

  • Assessments

Assessments are made at various points in the improvement process. Formal assessments, carried out by external experts, are used to establish independently how far an organisation has progressed and where the priorities for improvement lie. Self-assessments are made between times so that the organisation learns to perform strength/weakness analyses itself and to identify bottlenecks and take remedial initiatives.

Resource Management

Improvement means change and will encounter resistance. A crucial role is then reserved for the co-ordinator(s) of the improvement program. The 'champions' will be released or will have to be recruited. Another way of thinking and working often results in the need to train people or to provide support by means of tools. Resistance can be removed by involving people as actively as possible in the improvement program.

  • Assignments

The knowledge, experience and skills required of co-ordinators are carefully mapped out. Based on these profiles, internal staff are released and/or external consultants are recruited. They are regarded as 'champions' and facilitators who support the organisation in deciding on and implementing improvements: they are therefore not responsible for the improvements themselves. They win respect on the basis of their past achievements.

  • Training and Tools

The co-ordinators of improvement programs are confronted with resistance at all levels of the organisation. If necessary, they are trained in change management, together with any other persons involved. The changes in the organisation may possibly lead to changes in the content of the work or to work being distributed differently. Possible training requirements are identified in good time and space is created for following the necessary training courses. The extent to which certain tools can be used to support changes is also looked at.

  • Deployment

All the parties concerned at the various levels of the organisation are actively involved in the improvement program. Everyone participates as far as possible in thinking about possible initiatives, investigating bottlenecks and formulating improved working procedures. The responsibility for these activities is placed at the lowest possible level of the organisation.

Information Sharing

Improvement programs require investments on the part of the management and the motivation of everyone involved. Progress must be regularly reported to all concerned. All other information which contributes to better understanding and motivation should be communicated by means of the available resources. The successes achieved both internally and externally should be brought to everyone's attention.

  • Progress Reporting

The improvement program is launched during a specially convened meeting, attended by everyone involved. Senior management shows its commitment by explaining the need for improvement. These interactive meetings are repeated periodically to report on progress and both senior management and line management are actively present.

  • Communication

Communication media such as notice/bulletin boards and newsletters are available and are known to everyone in order to support the required exchange of information. Their use is encouraged, but care is taken to ensure that the organisation is not swamped with information. Communication is not restricted to one's own organisation: external publicity is deliberately sought.

  • Successes

Demonstrable successes, resulting directly from the improvement program and in line with the overarching business strategy, are regularly achieved. These successes are brought clearly to the organisation's attention. Wherever it appears useful, external organisations are invited to come along and tell their success stories so that the organisation itself can learn from these and people are further motivated.

Method to Periodically Assess the Critical Success Factors

The theorist might possibly conclude that every improvement program to be started should incorporate these success factors as preconditions. The pragmatist, however, will realise that this ideal image can never be achieved and will look to see how this list of success factors can be used as an instrument in successfully starting and sustaining an improvement program. A number of organisations in which we are working have been asked to evaluate this list of success factors every quarter during the progress reviews. By way of preparation for these periodic reviews, all concerned (i.e. members of the steering committee) evaluate the success factors independently of each other by assigning a score. The score is determined by evaluating three separate dimensions (based on the CMM self-assessment method as used at Motorola; see "Achieving Higher SEI Levels", Michael K. Daskalantonakis, IEEE Software, July 1994):

  • Approach

Criteria are the organisation's commitment to and management's support for the success factor, as well as the organisation's ability to implement the success factor.

  • Deployment

Criteria are the breadth and consistency of success factor implementation across the organisation.

  • Results

Criteria are the breadth and consistency of positive results over time and across the organisation.

Guidelines for evaluation are given in Table 1.

The score for each success factor is the average of the three separate dimension scores. The scores are collected by the SPI co-ordinator and presented during the Progress Reviews. The scores are discussed and compared with the required status as determined earlier. If the score of a success factor is below this level, actions are defined for improving the score in the next quarter. Figure 1 gives an example of a practical situation. The Kiviat plot presents the evaluation results for January 1996.
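The aggregation rule described above - per-factor score equals the average of the three dimension scores, compared against a required status - can be sketched in a few lines. The following is a minimal Python sketch; the factor names, dimension scores and target values are illustrative, not taken from the article:

```python
# Each success factor is scored on three dimensions (0-5 scale, see Table 1).
DIMENSIONS = ("approach", "deployment", "results")

def factor_score(dims: dict) -> float:
    """Score of one success factor: the average of its three dimension scores."""
    return sum(dims[d] for d in DIMENSIONS) / len(DIMENSIONS)

def review(evaluations: dict, targets: dict) -> dict:
    """Flag factors whose averaged score falls below the required status,
    returning (actual, required) pairs so improvement actions can be defined."""
    return {
        factor: (factor_score(dims), targets[factor])
        for factor, dims in evaluations.items()
        if factor_score(dims) < targets[factor]
    }

# Illustrative quarterly evaluation (hypothetical data).
evaluations = {
    "Commitment": {"approach": 4, "deployment": 2, "results": 3},
    "Training and Tools": {"approach": 3, "deployment": 3, "results": 3},
}
targets = {"Commitment": 3.5, "Training and Tools": 3.0}

print(review(evaluations, targets))  # → {'Commitment': (3.0, 3.5)}
```

In practice each steering-committee member would score independently and the co-ordinator would aggregate the individual scores before the Progress Review; the flagged factors then become the agenda for defining actions for the next quarter.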


Evaluation Dimensions

Score             Approach           Results
0 - Poor          No awareness
1 - Weak                             Spotty results
2 - Fair          Some commitment    Intuitive results
3 - Marginal                         Measurable results
4 - Qualified     Total commitment   Positive results
5 - Outstanding                      Expectations exceeded

Table 1: Score matrix

Figure 1: Example evaluation results

Preliminary Results and Experiences

Five organisations were asked to try out the method. Three organisations ultimately agreed to co-operate. In the other two organisations, senior management refused to co-operate: they were not convinced of the possible benefits. The three trials started in January 1995. The provisional results may be described as encouraging:

  • Senior management initially assigns much more positive scores than line management and the people on the shop floor, which leads to extensive discussions. The result is that everyone gains a better understanding of each other's situation.
  • In all cases, bottlenecks in the current improvement processes are revealed more quickly. This has led to extra actions, a development which is felt to be both positive and motivating by everyone involved.
  • The method has proved easy to adapt to the specific wishes of an organisation and is generally applicable to any improvement process. Success factors can be added, adjusted or omitted as required.

On the basis of the above results, all the participating organisations have reported that the probability of the improvement processes ultimately succeeding has increased. This has given us the confidence to develop the method further. It is expected that an evaluation in the near future will result in a definitive set of success factors and a more finely tuned definition.

