Methods & Tools Software Development Magazine


Lean Agile Metrics for Scaled Agile Systems

Janani Rasanjali Liyanage, CSM, MBA (IT), B.Sc. (IT), @Rasanjali_J

As software enterprises grow from small Agile teams to programs and portfolios, sustaining true "agility" becomes more challenging. The synergy between Agile and Lean plays a vital role in improving agility in scaled Agile systems. This article discusses metrics that measure business agility in terms of predictability, reliability, and adaptability, in line with Lean and Agile principles.

Business Agility

Agile is often defined as building software incrementally and iteratively, while working in close contact with the customer, responding to requirement changes, and ensuring that the team is building the right product. This can be viewed as "team agility", as its focus is mainly on the team and how the team works. But as the enterprise grows from projects to portfolios, it is imperative not to be limited to team agility but to strive for business agility. Business agility is the ability to deliver increments of business value to customers by continuously becoming predictable, reliable and adaptable. It is the ability to respond quickly as needed when market conditions change, new technologies arise, or new ideas are developed. With true business agility,

  • solutions get to market faster and development cycle times are reduced.
  • business leaders can see results quickly and exert more control over development costs. Failing projects can be cancelled early, lowering risk and potential waste.
  • changes in priority can be addressed immediately with minimal waste.
  • collaboration between business leaders and development teams lessens misunderstandings on both sides and builds stronger relationships and overall team spirit.

Why Lean-Agile

  • Problems faced by development teams often actually originate outside the team.
  • Teams have not been taught the principles of product development flow, so they do not understand the big picture.
  • Teams face the challenge of fitting into a distributed, scaled environment.

One of the key reasons that organizations fail to reap the true benefits of Scrum and other Agile frameworks is that those organizations just don't have the discipline or motivation necessary to solve their problems. Regardless of the cause, the problem remains. Lean software development and Kanban can be used to directly address the problems that organizations are having with Agile frameworks. Below is a discussion of how Lean and Agile complement each other, principle by principle.

Eliminate Waste. Lean thinking holds that any activity that does not directly add value to the finished product is waste. The Agile principles also speak about simplicity: "the art of maximizing the amount of work not done" is essential. Both aim to minimize or remove activities that do not add value for the customer. The three biggest sources of waste in software development are the addition of unrequired features, project churn and crossing organizational boundaries. To reduce waste, it is critical that development teams are allowed to self-organize and operate in a manner that reflects the work they are trying to accomplish.

Build in Quality. Your process should not allow defects to occur in the first place, but when this isn't possible, it must have a mechanism to identify bugs as early as possible. Inspecting after the fact, and queuing up defects to be fixed at some time in the future, isn't as effective. Agile practices that build quality into your process include test-driven development (TDD) and pair programming.

Create Knowledge. Planning is useful, but learning is essential. You want to promote strategies, such as iterative development, that help teams discover what stakeholders really want and act on that knowledge. It is also important for a team to regularly reflect on what they're doing and then act to improve their approach.

Defer Commitment. It is not necessary to start software development by defining a complete specification, and in fact that appears to be a questionable strategy at best. You can support the business effectively through extensible, change-tolerant architectures and by deferring hard-to-reverse decisions to the last possible moment. Frequently, deferring commitment requires the ability to closely couple end-to-end business scenarios to capabilities developed in multiple applications by multiple projects.

Deliver Quickly. It is possible to deliver high-quality systems quickly. When you limit the work of a team to its capacity, which is reflected by the team's velocity, you can establish a reliable and repeatable flow of work. An effective organization doesn't demand more than teams are capable of, but instead asks them to self-organize and determine what they can accomplish. Constraining these teams to delivering potentially shippable solutions on a regular basis motivates them to stay focused on continuously adding value.

Respect People. Sustainable advantage is gained from engaged, thinking people. The implication is that you need a Lean governance strategy that focuses on motivating and enabling IT teams, not on controlling them.

Optimize the Whole. If you want to be effective, you must look at the bigger picture. You need to understand the high-level business processes that individual projects support. You need to manage programs of interrelated systems so that you can deliver a complete product to your stakeholders.

Lean Agile Metrics to Measure Business Agility

Business Agility is the ability to deliver increments of business value to customers by continuously becoming predictable, reliable and adaptable.

Predictability

The goal of predictability is that business leaders can foresee risks quickly and have more control over development costs. Here we consider two aspects:

  • Uniformity: Delivering the same characteristics every time.
  • Consistency: Delivering the same value over time.

When measuring predictability, we need to ask how consistent the value offered is and what level of prediction is required.

Metric: Predictability Index

Predictability can be viewed as a series of variances from our committed deliverables, in conjunction with escaped defects as an indicator of technical (or functional) debt. The metrics that capture those variances help to forecast the predictability of future releases. In order to tell whether we are on track during an Agile development process, it is possible to track the variances listed below.

Scope variance: the number of story points delivered / story points committed.

Release velocity variance: the current velocity / average velocity.

Delivered defects: this indicator shows whether the team is sacrificing quality for speed or quantity of output.

Business value variance: if the project is capturing business value as defined by the product owner or another business stakeholder, then this metric will indicate story selection tradeoffs. This metric can also indicate whether the group had to include more low-value stories than expected due to technical or other valid reasons.

Averaging the variance metrics, we can create an overall index and then plot the predictability of each release as a trend. Ideally, the trend will be positive and the teams will improve their predictability over time. Downward trends are opportunities for exploration. 

Metric                       Average
Scope variance               20%
Release velocity variance    30%
Delivered defect density     10%
Business value variance      25%
Predictability index         85%
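As an illustrative sketch of the calculation (the function name, argument names, and the assumption that the defect and business value components are already normalized to 0-1 are mine, not a formula from the article), the variance metrics could be combined into a single index like this:

```python
def predictability_index(points_delivered, points_committed,
                         current_velocity, average_velocity,
                         defect_density_score, business_value_score):
    """Average several variance metrics into a single predictability index.

    defect_density_score and business_value_score are assumed to be
    already normalized to the 0-1 range (higher is better).
    """
    scope_variance = points_delivered / points_committed
    velocity_variance = current_velocity / average_velocity
    components = [scope_variance, velocity_variance,
                  defect_density_score, business_value_score]
    return sum(components) / len(components)

# Example: a release that delivered 45 of 50 committed points
index = predictability_index(points_delivered=45, points_committed=50,
                             current_velocity=19, average_velocity=20,
                             defect_density_score=0.9,
                             business_value_score=0.85)
print(f"Predictability index: {index:.0%}")  # Predictability index: 90%
```

Plotting this index release over release gives the trend line described above.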

Sprint Readiness Ratio for Product Backlog Items (PBI)

When answering the question about the level of prediction you need, it is wise to consider the Lean principle of "defer commitment". Predictions made too early will not give any value and can paint a wrong picture. This is particularly true for decisions that are irreversible, or at least impractical to reverse. Deferring irreversible decisions means you keep your options open as long as possible. You should defer decisions until your options expire.

By the time the decision needs to be made, there is every chance that you will know more about which of those options is the best route to take. It also gives you time to explore the different options in more depth and experiment, helping you come to the right conclusion. Obviously, it is also important not to leave decisions too late. So how would you know how long you can delay, the point at which your options expire? Backlog grooming is vital to answering this question. As a best practice, you should expect to have sprint-ready backlog items at least two sprints ahead. Then you would know when you need the specific requirements and design details of each backlog item. This metric gives a good view of the status of each backlog item, which in turn helps to defer commitment. As an outcome of the backlog grooming exercise, every PBI should be rated on:

  • Business priority
  • Requirement clarity index (requirement clarity rated from 1-5, e.g. not clear, clear, very clear)
  • Proper estimate is present
  • Requirements sign off, Q&A close
  • Dependencies (design, requirement, resources etc.)
  • Is minimal marketable feature

Based on the ratings, it is possible to mark items as green, amber, red etc. Ideally, the product backlog items on top of the product backlog should be green, so that they can be accepted in the next sprints.
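A minimal sketch of turning grooming outcomes into such a traffic-light status; the field names and thresholds here are hypothetical, and a team would adapt them to its own rating scheme:

```python
def readiness_colour(pbi):
    """Classify a PBI as green / amber / red based on grooming outcomes."""
    blocking = [
        pbi["clarity"] < 3,            # requirement clarity rated 1-5
        not pbi["estimated"],          # no proper estimate present
        pbi["open_dependencies"] > 0,  # unresolved design/requirement deps
    ]
    if not any(blocking):
        return "green"   # sprint-ready
    if sum(blocking) == 1:
        return "amber"   # one issue left to resolve in grooming
    return "red"         # not ready to be pulled into a sprint

ready = {"clarity": 4, "estimated": True, "open_dependencies": 0}
print(readiness_colour(ready))  # green
```

The sprint readiness ratio is then simply the share of "green" items among the PBIs at the top of the backlog.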

Reliability

One of the key goals of reliability is that changes in priority can be addressed immediately with minimal cost and waste without compromising quality. You can deliver value without error or delay.

Cost of Change Graph

As the cost of change graph illustrates, simple, test-driven development may be costly in the beginning, but in the long run its cost of change is very low, as opposed to rapid, untested code.

Metric: Technical Debt

Technical debt stems from poor decisions made on requirements, design and code. It is something you could avoid by having a better workflow: for example, doing architecture properly before jumping into coding, doing TDD, following better coding practices, etc. Technical debt can be created very easily when teams compromise quality to meet deadlines. Technical debt can stay invisible in your projects, causing huge damage in the long run. Technical debt can be categorized into the following types (among others):

  • Process Debt - Processes become more complex and inefficient over time. Examples include builds, automated tests, check-ins, reviews, bug logging, bug tracking, etc.
  • Code Hygiene Debt - Examples include duplicated code, dead code, inadequate documentation, monolithic classes, inconsistent style and cryptic symbol names.
  • Architectural Debt - Examples include high coupling of components, poor cohesion, poorly understood modules and systems, redundant systems, and unused systems.
  • Knowledge Debt - As products grow in complexity and team members come and go, unwritten knowledge gets lost. It takes longer for developers to become familiar with the inner workings of different systems. This can lead to cases where existing features are re-implemented.

Measuring Technical Debt

Tool-based approaches. There are several software tools that scan code to determine whether it meets coding or structural standards (extensibility is a structural standard). These tools include ERA (a Virtusa internal code scan tool), Cast AIP, SonarQube, SonarJ, Structure101 and others.

Self-reporting on outcomes of technical debt. Team-level self-reporting is a fantastic mechanism for tracking intentionally accrued technical debt: the list of identified debt can be counted, sized or valued, and prioritized. When the project is not using a tool, or feels that certain categories of technical debt are not measured by tools, it is always a good option for the team to discuss and select a set of the most painful outcomes of technical debt and start tracking them as part of their retrospective meetings.
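A minimal sketch of such a self-reported debt log, prioritized by pain relieved per day of remediation effort; the items, field names and scoring scheme are illustrative assumptions, not a prescribed format:

```python
# Team-maintained log of intentionally accrued technical debt.
debt_log = [
    {"item": "duplicated payment validation", "category": "code hygiene",
     "pain": 4, "effort_days": 3},
    {"item": "manual release build", "category": "process",
     "pain": 5, "effort_days": 2},
    {"item": "undocumented pricing module", "category": "knowledge",
     "pain": 3, "effort_days": 5},
]

# Prioritize by pain relieved per day of remediation effort, highest first.
for entry in sorted(debt_log, key=lambda e: e["pain"] / e["effort_days"],
                    reverse=True):
    print(f'{entry["item"]} ({entry["category"]}): '
          f'{entry["pain"] / entry["effort_days"]:.2f}')
```

Reviewing and re-scoring this list in each retrospective keeps intentional debt visible instead of letting it accumulate silently.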

Some Outcomes of Technical Debt

  • Partially implemented requirements
  • Intentional architectural deviations
  • Fixing bugs regularly causes regressions
  • It is hard for developers to predict what effect a change to the code will have
  • There are many duplicated systems
  • Simple changes to the software take a long time to make
  • It takes a long time before a new team member can contribute to the project in a meaningful way
  • Losing a single developer devastates the team and product development plans

Adaptability

The goal of adaptability is to continually improve "value delivery" by adjusting readily to different conditions: cost, schedule and scope changes. If a project is filled with dependencies and heavy processes, and if the cost of change is high, it will be difficult for the team to adapt to different situations. Teams need to self-organize to identify inefficiencies and optimize their process. These must be objectively discussed in retrospectives.

Agile encourages T-shaped teams: while team members maintain their specialized skills (e.g. programming), they also need general knowledge of architecture, design and requirements analysis. This helps to remove tight dependencies on skills, rough handovers, waiting time, etc. Scrum masters and project managers need to identify these knowledge gaps, discuss them in retrospectives and come up with meaningful plans.

Metric: Acceleration

Acceleration is a productivity metric that also reflects how efficiently the team adapts to changes.

Team A: 17, 18, 17, 18, 19, 20, 21, 22, 22...

Team B: 51, 49, 50, 47, 48, 45, 44, 44, 41...

Team A velocity is increasing over time whereas team B velocity is trending downwards. All things being equal, you can assume that team A productivity is increasing whereas B is decreasing. Of course, it is not wise to manage simply by the numbers. So it is important to have a meaningful conversation during your retrospective meetings.

Calculation

Acceleration of team A from iteration 1 to iteration 6 is (20-17)/17 = 0.176 .

Whereas for team B it is (45-51)/51 = -0.118.

It is not always necessary to calculate the acceleration over such a long period of time; you could do it iteration by iteration. However, doing it over several iterations gives a more accurate value.
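The calculation above can be sketched directly in code (the function name is my own; the velocity series are the ones from the example):

```python
def acceleration(velocities, start, end):
    """Relative change in velocity between two iterations (0-indexed)."""
    return (velocities[end] - velocities[start]) / velocities[start]

team_a = [17, 18, 17, 18, 19, 20, 21, 22, 22]
team_b = [51, 49, 50, 47, 48, 45, 44, 44, 41]

# Iteration 1 to iteration 6 are indices 0 and 5 here.
print(round(acceleration(team_a, 0, 5), 3))  # 0.176
print(round(acceleration(team_b, 0, 5), 3))  # -0.118
```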

Metric: Sprint Responsiveness Ratio

A good metric should always create a meaningful conversation, and retrospectives should allow those meaningful dialogs. Through retrospectives at regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Teams can use a simple template to see how responsive they are in the face of changes. It is very important to consider the end-to-end cycle rather than just one part, as emphasized by the Lean principle of optimizing the whole. The intention is to question the process, not the person. Teams can consider the results of the acceleration, predictability index and sprint readiness ratio metrics before rating each area. Teams need to brainstorm and rate each area with reasons, so that issues can be escalated. Most importantly, the sprint responsiveness ratio should be evaluated at both project and program level: while the team may expose the reasons via this index, the remediation may lie at the program level, where the team has little control.

Sprint Responsiveness Ratio

Area                            Rate (1-5)
Requirements
Architecture
Design
Coding
Testing
IT operations
Team collaboration
Resourcing / domain knowledge
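A minimal sketch of aggregating the template's ratings into a single ratio; the sample ratings and the normalization to a percentage are assumptions, not part of the template itself:

```python
# Hypothetical sprint responsiveness ratings (1-5) collected in a
# retrospective; the area names follow the template above.
ratings = {
    "Requirements": 4, "Architecture": 3, "Design": 4, "Coding": 5,
    "Testing": 4, "IT operations": 2, "Team collaboration": 5,
    "Resourcing / domain knowledge": 3,
}

# Normalize the total score against the maximum possible (5 per area).
ratio = sum(ratings.values()) / (5 * len(ratings))
weakest = min(ratings, key=ratings.get)

print(f"Sprint responsiveness ratio: {ratio:.0%}")  # 75%
print(f"Discuss first: {weakest}")                  # IT operations
```

The lowest-rated area is a natural starting point for the retrospective conversation, and for deciding what to escalate to program level.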

Challenges in Deriving the Right Metrics

  • Difficulty in identifying the right metrics. Current metrics are typically isolated, focused on specific functions and not aligned with corporate strategy.
  • Lack of clear ownership and roles & responsibilities. Not all metrics have performance targets or assigned owners which means no accountability.
  • Lack of consistently clean data. This results in a lack of confidence and trust in the output. Having access to accurate data only 80 percent of the time is not enough to obtain buy-in from stakeholders or drive accountability and continuous improvement. Business units and key stakeholders will not agree to align with less-than-accurate data.

To address the above root causes and ensure effective performance measurement, Lean and Agile governance should be in place. This approach honors the principles of efficiency and waste elimination while also enabling the flexibility to deal with changing business conditions.

Lean and Agile performance measurement governance should consist of three key elements:

  • Identifying the right business metrics at different organizational levels
  • Systemized metrics tracking
  • Formal process and cadence for metrics reviews and controlling

Best Practices in Metrics Design and Tracking

The goals of measurement are to show progress against a plan (to guide re-planning) and show process effectiveness (to guide improvement). So these key goals need to be considered when designing and tracking metrics.

  • Design metrics should fit on a napkin: every metric should be simple enough that you can explain its computation on a napkin in the cafeteria.
  • Integrated data collection: as much as possible, gather data as part of other activities, not as a "metrics collection" practice.
  • Rough numbers are good enough: data being collected should be precise enough to support the analysis needed and no more.
  • Metrics that lead to decisions or change.

Avoid when designing & tracking metrics

  • Non-team-based metrics
  • Unbalanced measures, for example velocity ignoring the definition of done and quality measures; acceleration ignoring technical debt; accuracy of forecast ignoring velocity
  • Using metrics that the teams themselves don't use
  • Drawing conclusions from a simple premise, for example "high velocity is good"; "high focus factor is good"; "deceleration is bad"; "increasing team size will increase velocity"
  • Using data from a tool without validating its accuracy and applicability

Encourage when designing & tracking metrics

  • Measure outcomes, not output
  • Prefer metrics that provide fuel for meaningful conversation
  • Follow trends, not numbers

Systemized data collection, calculating metrics, analysis and publishing results are key elements of Lean and Agile performance measurement governance. Implementing a system that will create dashboards on a periodic basis to support metrics tracking is essential.



This article was originally published in the Fall 2014 issue of Methods & Tools
