Laws for Software Development Teams
When discussing and organizing software development teams there are some principles, sometimes called laws, which teams need to be aware of. These laws may not change a decision you are about to make today, but they should inform your thinking about organizing your teams.
Perhaps the best known of all the laws is Brooks' Law. No discussion of software teams can go very far before Brooks' Law is mentioned:
"Adding manpower to a late software project makes it later." (Brooks 1975)
Brooks' Law can be generalised as:
"Adding people to software development slows it down"
Countless development teams have proved Brooks' Law since he first wrote about it. Indeed, Brooks' Law - together with Conway's Law - forms the bedrock on which much software team thinking needs to be based.
When a new team member joins a software development effort they need to learn about what is going on, how the technologies are being used, the system design and numerous other things. The team slows because existing members must take time to brief the new recruit and "bring them up to speed" - in other words, teach them how the team works and what they need to know: "knowledge transfer." This process is sometimes called "onboarding."
It is not just in the first week that new recruits need help. Some authors (e.g. Coplien and Harrison 2004) suggest it can take up to a year before new recruits are a net productivity gain. Personally I wouldn't put the figure so high, but it depends on many factors. It is reasonably safe to assume that most new employees require some assistance during their first three months.
In fact, the team slowdown may well occur long before a new recruit is added to the team. New recruits don't just appear. Managers must request more "resources" - perhaps they need to lobby their own managers. Sometimes job specifications must be written, checked, issued to human resources, sent to recruitment agents; the whole process must be managed and then....
Resumes and CVs arrive. These must be read, considered, rejections issued (one hopes), candidates called in for interview, and second interview, packages negotiated and job offers made.
All before someone gets to cut a line of code. Even if a personnel or human resources department manages much of the process, team leaders and members will be distracted. The time they have for actual development work will be reduced.
Brooks' Law does not imply that teams should never expand; that would be unrealistic and unsustainable. But it does mean that expanding a team is seldom a quick fix, and if teams want to grow they must spend some of their current productive capacity to build future productive capacity.
In Joy, Inc. Richard Sheridan makes a bold claim: his company has broken the law:
"I'm pleased to report that Brooks' Law can be broken. ... Our entire process is focused on breaking this law. Pairing, switching the pairs, automated unit testing, code stewardship, non-hero-based hiring, constant conversation, open work environment and visible artefacts all topple Brooks' assertion with each." (Sheridan 2013)
Reading Sheridan's description I believe he is right. Whether all these practices are required I don't know; maybe a team could get by without one or another. However I suspect this list is actually shorter than it should be.
In the book Sheridan describes a software development environment very different from the one most developers and managers find themselves in. His company, Menlo Inc, goes to great lengths to build, share and strengthen its culture and community. Until more companies embrace this approach and there are more examples to study, it is difficult to say whether Sheridan's example can be copied; I hope so.
Right now I believe each of the practices Sheridan describes is worth adopting in its own right. Combined they are even better, and if they break Brooks' Law, better still. But I also know that just about every company I visit, and particularly large companies, can find a reason why it cannot adopt one or more of these practices. I guess that means these companies will be constrained by Brooks' Law.
Conway's Law
"Organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations" (Conway 1968)
Another interpretation of Conway's Law would be:
"Ask four developers to build a graphical interface together and there will be four ways of performing any action: mouse, menu, keyboard shortcut and macro."
The organization and structure of companies and teams plays a major role in determining the software architecture adopted by developers. In time, the architecture comes to impose structure on the organization that maintains and uses the software.
For example, suppose a government decides to create a new social security system. "Obviously" this will be a major undertaking: it will require a database, some kind of interface and lots of "business logic" in between. Obviously, therefore, it requires database, interface and business logic developers, and since there are many of these folks, testers and requirements specialists too.
Suddenly the roles, software and process architecture are visible. Any chance of developing a smaller system is lost. And since all these people are going to be expensive management must be added, requirements set and so on.
Come back in ten years' time and the system will impose its architecture on the organization maintaining it. Reverse Conway's Law can now be observed:
"Organizations which maintain systems ... are constrained to communication structures which are copies of the system."
Now there must be database specialists, business logic specialists, etc. Moving away from such an organization structure is impossible.
Conway's Law tells us that where there are organizational barriers there will be software interface barriers - layers, or APIs, or modules, or some such. This effect can be beneficial - it supports modular software designs and application programming interfaces - and it can be detrimental, creating barriers which are obstacles rather than assets.
Conway's Law must be considered when designing teams, organizations and systems. Attempting to break Conway's Law - consciously or in ignorance - will generate forces that have the potential to destroy systems and organizations.
Like cutting wood along the grain it is better to consciously respect and work with Conway's Law than attempt to break it or cut across the grain. This is the key part of Xanpan and informs much of this book.
Dunbar's number: Natural breakpoints
"Extrapolating from the relationship for monkeys and apes gives a group size of about 150 - the limit on the number of social relationships that humans can have, a figure now graced with the title Dunbar's Number." (Dunbar 2010)
One recurring question asked about software teams is simply: "How big should a team be?" The work of anthropologist Robin Dunbar and his eponymous number, 150, provides some interesting insights when attempting to answer this question.
Dunbar presents a convincing case that 150 is the upper limit for organizational units of people. He also shows that this number reappears in military formations from Roman times onwards, in Neolithic villages, in Amish communities and in modern research groupings. Above 150, communities are less cohesive, more behavioural control sets in and hierarchies are needed.
His research and analysis highlight several significant group sizes. Dunbar's Number might be better called "Dunbar's Numbers": there appear to be different groups nested inside other groups, the smaller groups are tighter, and the groups seem to nest by a factor of three.
Thus, 3 to 5 people seems to be most people's innermost group of friends; the next ring of friends is about 10 strong, taking the total to 13 to 15 people. Next comes 30 to 50, the typical military fighting platoon, and then 150 - the size of the smallest independent military unit, the company, and the point at which businesses start to create separate groupings.
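The factor-of-three nesting can be sketched numerically. Here is a minimal illustration (the function name and starting values are my own choices, not Dunbar's formula):

```python
# Approximate Dunbar's nested group sizes: each layer is roughly
# three times the size of the layer inside it.
def nested_group_sizes(innermost=5, factor=3, layers=5):
    """Return approximate group sizes from the innermost circle outwards."""
    sizes = [innermost]
    for _ in range(layers - 1):
        sizes.append(sizes[-1] * factor)
    return sizes

# Yields 5, 15, 45, 135, 405 - close to the observed 5, 15, 50, 150, 500.
print(nested_group_sizes())
```

The point is not the exact arithmetic but the pattern: each outer, looser grouping is roughly triple the tighter one inside it.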
Dunbar also suggests there are groupings at 500 and 1,500, and that Plato suggested the ideal size for a democracy was 5,300. Military unit sizes are an interesting parallel:
Fire team: four or fewer soldiers
Squad or section: eight to 12 soldiers, several fire teams
Platoon: 15 to 30 soldiers, two sections
Company: 80 to 250 soldiers, several platoons
Battalion: 300 to 800 soldiers, several companies
Source: Wikipedia, English edition.
This list could continue, and of course there are variations between countries and even between different wings within one military. Broadly speaking these unit sizes follow Dunbar's findings.
Miller's Magic seven
In Agile, especially in Scrum circles, a team size of seven (plus or minus two) has become accepted wisdom. However this figure is little more than a heuristic. I have seen little or no evidence to suggest that five, six, seven, eight or nine is the best answer.
Those who state "Seven plus or minus two" often refer to George Miller's famous paper "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information" (Miller 1956). However I suspect that many, if not the vast majority, of those who cite this paper have never read it.
In the paper Miller considers the arguments for seven being a significant number in terms of brain processing capacity - the "chunks" of information the brain can work with. However, in the end he concludes that while seven does recur again and again there is insufficient evidence to be conclusive:
"For the present I propose to withhold judgement. Perhaps there is something deep and profound behind all these sevens, something just calling out for us to discover it. But I suspect that it is only a pernicious, Pythagorean coincidence." (Miller 1956)
The paper might have been better titled: "The Magical Number Seven, Plus or Minus Two?"
In his closing words Miller also says "I feel that my story here must stop just as it begins to get really interesting." Indeed, Miller's paper is over 50 years old, psychologists and information theory have moved on.
Admittedly the five to nine person range should give the team a good chance of managing variability. At the lower end it would justify a tester and a requirements specialist and at the upper end could still work with only one of each. So based on my own arguments five to nine makes sense.
But I am prepared to accept larger teams, I believe there are circumstances where this is justified - which I will elaborate on later in this book.
Scrum Team sizing
So how did Miller's paper on individual information processing come to be applied to software team size? The link seems to be some Scrum texts. The Scrum Primer states "The team in Scrum is seven plus or minus two people" (Deemer et al. 2008), while the 2011 Scrum Guide states: "more than nine members requires too much coordination. Large Development Teams generate too much complexity" (Sutherland and Schwaber 2013).
To complicate matters the Product Owner and Scrum Master may not be included in this count. The Scrum Guide says:
"The Product Owner and Scrum Master roles are not included in this count unless they are also executing the work of the Sprint Backlog." (Sutherland and Schwaber 2013)
The Scrum Primer, meanwhile, implies that the Product Owner is outside the team. In short, different writers make different recommendations at different times, so who is actually a team member - and who is "just involved" - is unclear.
It is a little unfair to point the finger at Scrum. As already noted, teams in the range of four to eight people are seen elsewhere. Miller's paper seems to have provided an easy rationale for enshrining team sizes of seven plus or minus two. Experience also shows there is a limit, although the limit might be a little larger than Scrum suggests.
Parkinson's Law and Hofstadter's Law
"Work expands so as to fill the time available for its completion" (Parkinson's Law, Wikipedia)
"It always takes longer than you expect, even when you take into account Hofstadter's Law." (Hofstadter 1980)
I am sure that if most readers cast their minds back a few years they will recall being at school, college or university. And I am sure most readers will at some point have been set "course work" or "project work" - that is, an essay, a coding task, or some other assignment which had to be completed by a certain date.
When I deliver training courses I usually ask the class: "Do you remember your college work? When did you do it?" I feel confident that, like those in my training classes, most (honest) readers will admit to doing course work a few days before the deadline. And a few, very honest, people will admit to completing it the night before.
But very few people miss the deadline.
Once, during my master's degree, I began a piece of course work very early. I "completed" it very early, but I then used the remaining time to revisit the work, again, and again, and again. To edit it. To improve it.
Psychologists who study these things show that humans are very bad at estimating how long a task will take but very good at working to deadline (e.g. Buehler, Griffin and Peetz 2010a). (My Xanpan book contains more discussion of this topic.)
When more time is available work expands, when more people are available work expands too.
During the late 1990s I worked at Reuters on a project to connect to the Liffe futures exchange. At first the deadline was very tight and it was hard to see how it could be met. In an effort to ensure the deadline a second developer was hired. But then it transpired that this deadline wasn't particularly important, a second, later, deadline was far more important.
The second deadline was easy to make, even with one developer let alone two. As a result far more software was developed to meet it. The system under development was allowed to expand to use all the time and resources available.
Software development is haunted by Parkinson's and Hofstadter's Laws. Asking someone to estimate how long something will take inevitably results in too little time, but given plenty of time, work expands.
One research study (Buehler, Griffin and Peetz 2010b) observed that optimism about how long a task will take might cause someone to start the task earlier than someone who provided a pessimistic (longer) estimate. But the total time taken by the optimist to perform the task would actually be longer than the time taken by the pessimist. Deadlines may well be more important than estimates in determining completion times (see Ariely and Wertenbroch 2002).
Gall's Law - plus Parnas and Alexander
Less well known than the laws above but very important for software development is Gall's Law:
"A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system." (Gall 1986) via Wikipedia
Gall's Law echoes the words of David Parnas:
"As a rule, software systems do not work well until they have been used, and have failed repeatedly, in real applications."
Parnas and Gall are emphasising different aspects of the same thing, something architect Christopher Alexander calls "organic growth." The fact that all three have identified the same axiom in different settings can only lend weight to its validity.
In software development a technique called the "walking skeleton" advises teams to produce a simple, basic, working piece of code which just exercises all the key (high-risk) parts of a system - a skeleton which just about walks. After creating this, the team adds the flesh - layers on functionality - onto something seen to work.
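As a sketch, a walking skeleton for a typical three-layer system can be a single request passing end-to-end through every layer. All the names below are hypothetical stand-ins, not a prescription:

```python
# A minimal "walking skeleton": one request travels through every layer.
# Each function is a stand-in for a real component; the point is
# end-to-end wiring, not features.

def storage_lookup(key):
    """Stand-in for a real database."""
    return {"status": "ok"}.get(key)

def business_logic(key):
    """Stand-in for the domain layer."""
    value = storage_lookup(key)
    return value.upper() if value else "unknown"

def handle_request(path):
    """Stand-in for the web interface layer."""
    return {"body": business_logic(path.strip("/"))}

# One end-to-end call proves the layers connect before any flesh is added.
print(handle_request("/status"))  # → {'body': 'OK'}
```

Once this skeleton demonstrably walks, each stand-in can be replaced by the real component without disturbing the overall shape.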
This principle can be applied to the teams as well as the software:
"A complex team that works is invariably found to have evolved from a simple team that worked. A team designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple team"
Since a more complex team equates to a larger team this law starts to hint at how large teams can be created, and how Agile can be scaled, or rather, grown.
This obviously parallels Conway's Law: if a team sets out to build a walking skeleton then the skeleton needs to be built by a skeleton team. To build it with a bigger, more complex team would be to build more than a minimal skeleton.
When teams start big Conway's Law implies that the architecture will be big and complex, and Gall's Law implies that such a system is unlikely to work, so in time the team will need to start over with something smaller.
I would like to add my own two laws to this canon, laws which I coined some years ago. Although scientifically untested, I have found them to be highly useful in navigating the issue of team size:
Kelly's First Law of software: Software scope will always increase in proportion to resources
Kelly's Second Law of software: Inside every large development effort there is a small one struggling to get out
The first of these laws follows from Parkinson's Law, while the second seems to be a consequence of the interplay between Parkinson's Law and Conway's Law. Once a project gets big the work expands, but there is still a small project in there somewhere!
If a software team is bigger than absolutely necessary it will come up with more work, bigger solutions, advanced architectures, that justify the team size. It is always easier to add someone to a team than to remove him or her unwillingly.
By keeping the team small, at least initially, you create the opportunity to find a small solution. Starting with a big team will guarantee a big solution.
This list is not an exhaustive discussion of "laws" around teams; I'm sure behavioural psychologists could add some more - and perhaps find fault with some of those I have discussed.
Individually these laws provide heuristics for organizing and managing software teams. More importantly the interplay of these laws can be quite profound.
Given Dunbar's number(s) there are limits on team size and effectiveness; considered with Conway's Law, there is a potential limit on system size. The only way around this is to decompose a large system into multiple smaller systems. At first glance this runs against Gall's Law, but this is not necessarily so provided those systems can be sufficiently separated.
But teams are not suddenly born fully formed and effective. Conway's Law working with Gall's Law again implies they must be grown. Brooks' Law implies that teams cannot be grown too fast, and Parkinson's Law means that over-big teams will make their own work.
Kelly's second law hints at the solution: avoid big, aim to stay small.
One may find these laws inconvenient, one may choose to attack the validity of these laws. Certainly these laws sit badly with the approach taken in many commercial environments. Rather than attack the laws and rather than seek to break the laws, I find a better approach is to accept them and work with them. Finding a way to work with these laws can be commercially uncomfortable in the short run but in the longer term is usually more successful.
This article is an excerpt from Allan's new book "Xanpan book 2: the means of production" which will be available later in 2015. You can register your interest at https://leanpub.com/xanpan2 where you can also find "Xanpan: Team Centric Agile Software Development."
Ariely, D., and K. Wertenbroch. 2002. "Procrastination, deadlines, and performance: self-control by precommitment." Psychological Science 13 (3).
Brooks, F. 1975. The mythical man month: essays on software engineering. Addison-Wesley.
Buehler, R., D. Griffin, and J. Peetz. 2010a. "The Planning Fallacy: Cognitive, Motivational, and Social Origins." Advances in Experimental Social Psychology 43: 1-62.
---. 2010b. "Finishing on time: When do predictions influence completion times?" Organizational Behavior and Human Decision Processes (111).
Conway, M. E. 1968. "How do committees invent?" Datamation (April 1968). http://www.melconway.com/research/committees.html
Coplien, J. O., and N. B. Harrison. 2004. Organizational Patterns of Agile Software Development. Upper Saddle River, NJ: Pearson Prentice Hall.
Deemer, P., G. Benefield, C. Larman, and B. Vodde. 2008. "Scrum Primer." http://www.scrumalliance.org/resources/339
Dunbar, R. 2010. How many friends does one person need? London: Faber and Faber.
Gall, J. 1986. Systemantics: The underground text of systems lore; how systems really work and especially how they fail. 2nd ed. General Systemantics Press.
Hofstadter, Douglas R. 1980. Gödel, Escher, Bach: An eternal golden braid. Harmondsworth: Penguin Books.
Miller, G. A. 1956. "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information." The Psychological Review 63: 81-97. http://www.well.com/user/smalin/miller.html
Sheridan, R. 2013. Joy, Inc. Penguin.
Sutherland, J., and K. Schwaber. 2013. "The Scrum Guide: The Definitive Guide to Scrum: The Rules of the Game." http://www.scrum.org/Scrum-Guides