Responsibility Driven Design with Mock Objects

Marc Evers & Rob Westgeest, QWAN, www.qwan.it

Object oriented design is the art of assigning the right responsibilities to the right objects and creating a clear structure with loose coupling and high cohesion. Test driven development (TDD) is a design practice that helps you achieve this to some extent: TDD drives you towards loosely coupled objects, because too many dependencies hinder the short test-code-refactor cycles typical of TDD.

Responsibility driven design is an approach that helps you shift focus from object state to interactions and responsibilities. In this article, we will show how test driven development with mock objects facilitates responsibility driven design and drives you towards a more cohesive, loosely coupled design.

Responsibilities in an object oriented system

Responsibilities are the foundation of an object oriented system. We imagine an object oriented system as a cage in which objects live. When a request is fired to the system, the objects work together to fulfill the request. Every object does what it can do best and delegates the rest to its collaborators – the other objects it works together with. Responsibility Driven Design is just about that. It focuses on what an object can do (Wirfs-Brock & McKean, 2003).

We find our objects by identifying relevant concepts in a domain. Candidates are concepts that we mention repeatedly while talking about the domain. The candidates we find are typically passive concepts from the real world without any behavior. For example, a task in a workflow system is a passive thing being executed by a real world person and an invoice is something passive being paid for by a real world client.

We find responsibilities by making passive things active. We assign responsibilities to the passive concepts that are associated with them. For example, a workflow task gets the responsibility to execute itself. An invoice gets the responsibility to pay itself. All messages that the object responds to should match its responsibilities.
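As a small code sketch of making a passive concept active (the pay signature is illustrative, not from the article's example):

public interface Invoice {
    void pay();   // the invoice, a passive thing in the real world,
                  // gets the responsibility to pay itself
}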

We can simulate a usage scenario of the system, by letting objects respond to messages and delegate tasks that other objects can do better to those objects. Next, we look for what an object needs to know to fulfill its responsibilities. This drives us towards putting information where we directly need it.

As a result of localizing this knowledge to objects, changes to the resulting system tend to be localized as well. Changes in one place do not ripple through the design and will not affect other parts. This reduces the risk of introducing defects.

While the system as a whole may be complicated, the objects within it, combining behaviour and associated state, remain small and easy to understand. Making changes is therefore easy and fast.

We see that, by working responsibility driven, you achieve low coupling and high cohesion (well-known software design principles). A cohesive object has responsibilities that fit well together and contains all the information it needs to fulfill them. Low cohesion means an unclear object that does many unrelated things, or an object without any responsibilities, making it hard to understand and change.

Low coupling means that dependencies between objects are well managed. There are only a few dependencies between objects. Dependencies are unidirectional and exist for the purpose of delegating responsibilities, not for obtaining information from others to fulfill responsibilities.

Contrast this with the procedural approach we often see in practice: thinking primarily in sequences of actions and collecting whatever information is required from wherever it happens to lie around. In this approach, functions are highly cohesive and isolated in one place, but data is grabbed from many other objects. Functions depend on the internal details of many objects, so a change in a function may require changes in several objects. It is also difficult to judge the impact of changes to a single object, because many functions depend on it. Functions become sensitive to changes across the system.

The art of good design is placing responsibilities close to the knowledge required to fulfill these responsibilities. This results in small, understandable objects loosely coupled to their environment.

Responsibilities and CRC cards

A simple but powerful technique to support responsibility driven design is CRC cards (Class, Responsibility, Collaboration) invented by Kent Beck and Ward Cunningham (Beck & Cunningham, 1989).

You use index cards (15x10 cm). Every card represents a (candidate) class. Its name is written on top, its responsibilities are listed on the left, and the classes it collaborates with are listed on the right. As an example, consider a CRC card for a workflow process: it knows how to guide work through one or more tasks, and it collaborates with tasks and work items to achieve this.
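Rendered in plain text, such a card looks roughly like this:

+---------------------------------+----------------+
| WorkflowProcess                                   |
+---------------------------------+----------------+
| Responsibilities                | Collaborators  |
|                                 |                |
| guide work through one          | Task           |
| or more tasks                   | WorkItem       |
+---------------------------------+----------------+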

In a design session, you use the cards to quickly try out different scenarios and get feedback on how you have assigned responsibilities to classes.

Although this technique is 20 years old, it is still very useful, with the additional benefit that you don't need to buy an expensive modeling tool.

CRC cards do not necessarily represent concrete classes. They primarily represent roles that objects will play, and each role comes with its responsibilities. The idea of modeling roles and objects is extensively covered in (Reenskaug, 1995).

In code, roles will be implemented by interfaces and the objects by classes. This approach results in loosely coupled components and a lot of interface definitions.
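For example, in the workflow system developed below, Worker is a role declared as an interface, and a concrete class such as a person plays that role (a sketch; the Person class is illustrative):

public interface Worker {                  // the role
    void workOn(WorkItem workItem);
}

public class Person implements Worker {    // an object playing the role
    public void workOn(WorkItem workItem) {
        // how a person actually performs the work is of no concern to
        // objects that collaborate with the Worker role
    }
}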

Responsibility driven design also relates to design guidelines like the Single Responsibility Principle – each unit should have only one responsibility (Martin, 2002).

Test Driven Development

Test Driven Development (TDD) is a development practice that supports responsibility driven design. TDD drives the design from writing unit tests. While doing TDD, you keep the code and its design in good shape by refactoring continuously (Beck, 2002). Refactoring means improving the design of existing code without changing its overall behaviour.

The steps of TDD are deceptively simple:

  1. first, write a unit test
  2. make sure the test fails – test the test
  3. add code step by step, just enough to make the test pass
  4. refactor the code – remove duplication and ensure the code reveals its intentions
  5. repeat

TDD drives you towards independent units in your design, as dependencies get in your way while writing unit tests. Therefore, it enforces a loosely coupled, cohesive design.

Many who start with TDD struggle to get a grip on dependencies. To test an object, you exercise some behaviour and then verify whether the object is in the expected state. Because OO design focuses on behaviour, the state of an object is usually hidden (encapsulated). To verify that an object behaves as expected, you sometimes need access to its internal state, and so you are tempted to introduce special methods that expose it, like a getter method or a property that retrieves the internal state.

Apart from not wanting objects to clutter their interfaces and expose their private parts, we also do not want to introduce unnecessary dependencies with such extra getters. Our tests would become too tightly coupled and focused on implementation details.

A group of agile software development pioneers from the United Kingdom was struggling with this back in 1999. They had to add extra getter methods to validate the state of objects. Their manager didn't like all this breaking of encapsulation and declared: "I want no getters in the code!" (Mackinnon et al., 2000; Freeman et al., 2004)

The team came up with the idea of focusing on interactions rather than state. They created special objects to replace the collaborators of objects under test. These special objects contained specifications for expected method calls. They called these objects mock objects, or mocks for short. The original ideas have been refined, resulting in several mock object frameworks for all common programming languages: Java (jMock, EasyMock, Mockito), .NET (NMock, RhinoMocks), Python (PythonMock, Mock.py), Ruby (Mocha, RSpec), C++ (mockpp, amop). See www.mockobjects.com for more information and links.

This approach leads to a particular style of testing called interaction based testing (Fowler, 2004). In interaction based testing, a test specifies the behavior of an object as well as the behavior it delegates to collaborators. In other words, tests specify object responsibilities and collaborations. This is exactly what responsibility driven design is about. Test driven development in an interaction based style with mocks is responsibility driven development - in code.

An example

Suppose we want to develop a simple workflow engine. We envision a workflow engine as something that manages processes consisting of a number of tasks that will be performed on a work item. A work item is for instance a report to be written or a claim to be processed. Workers (typically people) carry out the tasks.

We begin by fleshing out what happens when a process is started. We will build up our first test step by step, to show the modeling process.

In our first test, we verify that given a WorkflowProcess, when it is started, then the first task should be started. We express our example in Java and use the Mockito framework (www.mockito.org) to specify and verify our expectations about the interactions. The basic ideas and design decisions we take are generic and not tied to specific programming languages or frameworks.

We find it helpful to write test methods bottom up. Starting with what we want to assert in this test, we state the expected effect of the method under test. We expect that a task will be started on a work item:

@Test
public void startingAProcessShouldInitiateATask()
{
    verify(task).start(workItem);
}

Then we express the responsibility we want to test, represented by a call to the start method of the process we are testing:

@Test
public void startingAProcessShouldInitiateATask()
{
    process.start(workItem);
    verify(task).start(workItem);
}

Finally, we create the object under test – a WorkflowProcess instance – and the other objects we need for the tests. The latter are the collaborators of the process: a task and a work item.

@Test
public void startingAProcessShouldInitiateATask()
{
    Task task = mock(Task.class);
    WorkItem workItem = mock(WorkItem.class);
    WorkflowProcess process = new WorkflowProcess(task);

    process.start(workItem);

    verify(task).start(workItem);
}

The mock and verify methods are part of the Mockito framework.

Some other frameworks, like jMock and NMock, make you set expectations before invoking the method under test. This seems radically different from asserting afterwards, but in our opinion there is no fundamental difference. Especially when you take the approach described above and start with your expectations, it is basically the same approach.
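For comparison, here is a sketch of the first test in jMock 2's expect-first style (jMock 2 syntax; this version is our addition, not from the article):

import org.jmock.Expectations;
import org.jmock.Mockery;

public class WorkflowProcessTest {
    private final Mockery context = new Mockery();

    @Test
    public void startingAProcessShouldInitiateATask() {
        final Task task = context.mock(Task.class);
        final WorkItem workItem = context.mock(WorkItem.class);
        WorkflowProcess process = new WorkflowProcess(task);

        // expectations are declared before exercising the object under test
        context.checking(new Expectations() {{
            oneOf(task).start(workItem);
        }});

        process.start(workItem);
        context.assertIsSatisfied();
    }
}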

Through this test, we have specified what we want. To make it work, we create interfaces for Task and WorkItem to represent these roles. We create a class for WorkflowProcess and add the required methods and constructors. We leave the methods empty for the moment, because we want to let our tests drive the implementation and we first want to see the test fail:

org.mockito.exceptions.verification.WantedButNotInvoked:
Wanted but not invoked:
task.start(
    Mock for WorkItem, hashCode: 6613606
);

The error message means that we expected a call to the start method on task, but the mock object did not receive this call. The following code makes the test pass:

public class WorkflowProcess {
    private final Task task;

    public WorkflowProcess(Task task) {
        this.task = task;
    }

    public void start(WorkItem workItem) {
        task.start(workItem);
    }
}
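The article does not show the Task and WorkItem declarations, but the test requires no more than these minimal role interfaces (a sketch):

public interface Task {
    void start(WorkItem workItem);
}

public interface WorkItem {
}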

Tell, Don't Ask

The test makes clear what interactions we expect between a WorkflowProcess object and its task collaborator. This style of testing drives you towards a Tell, Don't Ask design style: instead of pulling data from a collaborator with getters (asking) and doing something with this data, you send the collaborator a request to do something (telling), and the collaborator decides how it does the job.

This prevents train wrecks: long chains of method calls returning objects on which new methods are called, returning yet more objects... As an example of a train wreck in our workflow system, here is an implementation of the process's start method:

public class WorkflowProcess {
    //....
    public void start(WorkItem workItem) {
        task.getWorker().getWorkItems().add(workItem);
    }
    //....
}

Instead, we explicitly capture the responsibilities using methods that tell an object to do something:

public class WorkflowProcess {
    //....
    public void start(WorkItem workItem) {
        task.start(workItem);
    }
    //....
}

Train wrecks show that the design is poor and responsibilities are not in the right place. Mock objects show this as well, in an even more painful manner. Suppose the start method of WorkflowProcess were implemented like this:

public void start(WorkItem workItem) {
    task.getWorker().getWorkItems().add(workItem);
}

Then a unit test for the behaviour would look like this:

public class WorkflowProcessTest {
    @Test
    public void startingAProcessShouldInitiateATask() {
        Collection<WorkItem> workItems = new ArrayList<WorkItem>();
        Worker worker = mock(Worker.class);
        Task task = mock(Task.class);
        WorkItem workItem = mock(WorkItem.class);
        WorkflowProcess process = new WorkflowProcess(task);

        when(task.getWorker()).thenReturn(worker);
        when(worker.getWorkItems()).thenReturn(workItems);

        process.start(workItem);

        assertTrue("workItem should be added", workItems.contains(workItem));
    }
}

(The when ... thenReturn expression specifies that a method on a mock object will always return the indicated value.)

What happens here is that we are writing procedural code: focusing on all the things we want to do and directly getting all required data, wherever it has to come from. We see this happen a lot in practice: starting out from the steps that need to be done, without focusing on concepts first.

Without realizing it, we introduce a lot of extra dependencies here. Code completion features offered by development environments make it even easier to throw in a bunch of extra dependencies with just a few keystrokes.

These dependencies might look innocent, but they are actually quite bad. Because such procedural code knows a bunch of other objects, it becomes more fragile, more sensitive to changes – changes in any of the objects it depends on will ripple through.

The test painfully shows that there is something wrong with our design. The test is complicated and loses focus: there are many mocks, and one mock is returning another. What are we actually testing? Starting a process, the getWorker method, the getWorkItems method, the add() method of the Java Collection class, or everything at once?

The test is easier to set up and read when we just think in terms of behavior of the object under test and what it delegates to its collaborator. The object tells its collaborator what to do rather than ask it for information. Therefore, interaction based testing with mocks drives you towards a responsibility driven design.

Roles and concepts

When developing this way, you will find new concepts along the way. For these concepts, we define roles, represented by interfaces. Using roles and interfaces helps to achieve loose coupling, because objects only depend on abstract interfaces, not on concrete implementation classes.

Let's take a look at the next test. We have identified the task as a concept; it is currently an interface. Who implements this interface? Would that be TaskImpl? Or should we have called the interface ITask and the implementing class Task? We don't like these kinds of names. TaskImpl focuses on implementation details instead of communicating our intent. ITask is a kind of Hungarian notation to say explicitly "Hey, this is an interface". We don't care whether or not something is an interface. We care about concepts, and that the name of a concept fits its responsibilities, revealing our intent. Naming might look like a small, unimportant thing; it is however crucial for understanding and maintaining the code and its design.

It pays to think a bit longer and harder. SimpleTask is a better candidate. Although we don't want to think too much about the future (the YAGNI principle – You Aren't Gonna Need It), other more complex tasks might pop up later. Even if they don't, the task we are implementing is a singular task and therefore simple. Did we say singular? That would be an even better name: SingularTask it is.

The concept of SingularTask represents a singular focused activity that is delegated to a Worker. If a SingularTask is started, it will ask someone or something to work on the WorkItem. Do we need a Person for this? Possibly, but let's focus on the role instead and call it Worker. The corresponding test is:

@Test
public void startShouldMakeWorkerWorkOnWorkItem() {
    Worker worker = mock(Worker.class);
    WorkItem workItem = mock(WorkItem.class);
    SingularTask task = new SingularTask(worker);

    task.start(workItem);

    verify(worker).workOn(workItem);
}

We specify that the worker should receive the message workOn(workItem) when the task is started.

The corresponding implementation is:

public class SingularTask implements Task {
    private Worker worker;
    ...
    public void start(WorkItem workItem) {
        worker.workOn(workItem);
    }
    ...
}

Who does what?

We leave the Worker for a moment and proceed with SingularTask. What should happen when the task has finished? The next task should be started. Where does this responsibility belong: in SingularTask, Worker, or WorkflowProcess? In practice, there is often a tendency to put the responsibility where the situation arises, for example:

public class SingularTask implements Task {
    WorkflowProcess process;
    ...
    public void done() {
        Task next = process.getNextTask();
        next.start(workItem);
    }
    ...
}

This looks simple and it works. But who manages tasks in this case: WorkflowProcess or SingularTask? The problem is that they both manage tasks a bit. We assigned WorkflowProcess the responsibility to guide work items through tasks; starting the next task is part of that responsibility. The corresponding test is:

public class SingularTaskTest {
    @Test
    public void doneShouldInformTheProcess() {
        Worker worker = mock(Worker.class);
        TaskParent parent = mock(TaskParent.class);
        SingularTask task = new SingularTask(parent, worker);

        task.done();

        verify(parent).done(task);
    }
}

We expect that TaskParent receives the done message, not WorkflowProcess, because we focus on roles, not on concrete implementations. TaskParent is a new concept, so we create a new interface for it. WorkflowProcess will play the TaskParent role and implement the done method. In this test, we create a mock object for the TaskParent collaborator.
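The TaskParent role itself is a minimal, one-method interface (the article does not show its declaration; this is the obvious form given the test above):

public interface TaskParent {
    void done(Task task);
}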

The implementation then looks like this:

public class SingularTask implements Task {
    private TaskParent parent;
    ...
    public void done() {
        parent.done(this);
    }
}

So SingularTask does not depend on the concrete WorkflowProcess class, but on the TaskParent role instead.

The next step is to implement done in WorkflowProcess, where we expect that the next task will be started.

public class WorkflowProcessTest {
    @Test
    public void endingFirstTaskShouldStartNext() {
        Task firstTask = mock(Task.class);
        Task nextTask = mock(Task.class);
        WorkItem workItem = mock(WorkItem.class);
        WorkflowProcess process = new WorkflowProcess(firstTask, nextTask);
        process.start(workItem);

        process.done(firstTask);

        verify(nextTask).start(workItem);
    }
}

The corresponding implementation is:

public void done(Task task) {
    successorOf(task).start(workItem);
}
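Putting the fragments together, one possible shape of the complete class is the following sketch (the article shows only fragments; remembering the work item in start and the list-based successorOf lookup are our assumptions):

import java.util.Arrays;
import java.util.List;

public class WorkflowProcess implements TaskParent {
    private final List<Task> tasks;
    private WorkItem workItem;

    public WorkflowProcess(Task... tasks) {
        this.tasks = Arrays.asList(tasks);
    }

    public void start(WorkItem workItem) {
        this.workItem = workItem;        // remember the item being guided
        tasks.get(0).start(workItem);
    }

    public void done(Task task) {
        successorOf(task).start(workItem);
    }

    private Task successorOf(Task task) {
        return tasks.get(tasks.indexOf(task) + 1);
    }
}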

Mistakes we made learning mock objects

Unit testing is difficult. It requires skill and practice. Interaction based testing is the result of other developers' struggles and can help you to improve your design. It drives you towards more loosely coupled, more cohesive objects, with an emphasis on behavior instead of state.

Interaction based testing also requires practice, perhaps even more than 'traditional' unit testing. Mock objects are often interpreted and used incorrectly. We will discuss some of the mistakes we made ourselves while learning interaction based testing with mock objects.

Tests need some love too, especially tests with mocks

Test driven development looks simple, but in practice it is not easy at all. It is hard to be disciplined enough to execute these small steps over and over again. It is hard to remember to refactor once tests pass. It is particularly hard to take good care of test code as well, rather than letting it descend into an unmaintainable pile of duplicated code blocks. It is hard to separate objects so that they are testable independently.

We have written our share of unit tests that are hard to maintain. We made things worse by adding multiple mocks and many expectations to cut through those nasty dependencies. We quickly found out that we have to give our test code love and attention as well, to keep it understandable. Tests that are complex and hard to understand don't help us as they should.

We learned that pair programming and a lot of practice help us overcome some of these difficulties.

Mocking objects, rather than roles

Our first mock objects were hand-made objects registering expected and actual calls. Implementing interfaces or deriving from concrete classes did not feel like a big difference, and most mock object frameworks allow mocking concrete classes. This is not a problem per se. We prefer however to mock interfaces, because it forces us to think about the different roles collaborating objects play and which responsibilities we want to delegate.

We have stepped into the pitfall of mocking concrete classes without keeping roles in mind. As a result, we ended up with complicated tests in which the different responsibilities of collaborators were all mixed up. Sometimes this even results in describing almost arbitrary interactions between objects, with a mess of messages being sent back and forth. Without clear roles, it becomes difficult to understand what the object under test is actually doing with its collaborators.

It is not so much about mocking interfaces versus mocking concrete classes. It is about using interfaces as a tool to focus on roles instead of concrete objects.

Mocking getters

Another mistake we made was setting expectations on getters; it made our tests brittle. If production code calls a getter, the test should not care about when or how many times the getter is called. A getter is a property that should be set up once in the test (i.e. stubbed) and should return one value, independent of when and how many times it is called.
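In Mockito terms, this means stubbing the getter once and never verifying it (a sketch; invoice, getDueDate and deadline are illustrative names, not from the article's code):

// stub once; the canned value is returned no matter when or how often
// the production code calls the getter
when(invoice.getDueDate()).thenReturn(deadline);

// avoid: verifying calls to a getter couples the test to incidental
// implementation detail
// verify(invoice, times(2)).getDueDate();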

Having getters at all should also make you sit back for a moment and think about your design. Getters break encapsulation. We frequently hear that this doesn't have to be the case: getXyz can be a property of an object, and this says nothing about how it is implemented; perhaps xyz is computed from other fields. GetXyz does however communicate in the design that other objects can use the xyz property. The getter is usually introduced because another object needs xyz to perform some behavior, and that is exactly what breaking encapsulation is about. An important question to ask here is: is property xyz in the wrong place, or have I placed the behavior that uses it in the wrong place?

Mocking was conceived from the idea "I don't want any getters in my code!" Mocks are intended to verify messages sent to neighboring roles. When an object needs to call a getter on a collaborator (yes, we know, preferably not), don't mock the getter. Use a stub instead to return a specific value; this keeps the test more readable.

Mocking (almost) everything

Inspired by the idea of mocking and focusing on interactions, we started out mocking every little detail. Tests started looking just like the implementation, written in terms of expectations. It became impossible to vary the implementation even a bit without breaking tests.

We have learned that there is a balance to everything. Here, the balance is between sufficiently capturing behaviour with expectations on the one hand, and leaving some room for the implementation to vary on the other. A rule of thumb is to keep expectations on the same level of abstraction. If you are, for example, testing an invoice class, you don't set expectations on how the invoice interacts with a currency when calculating the total. You do set expectations on how the invoice interacts with the client when payment is due.
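To make this concrete, here is a sketch of such a test (Invoice, Client, becomeDue and remindAbout are illustrative names, not taken from the article's code):

import static org.mockito.Mockito.*;
import org.junit.Test;

interface Client {
    void remindAbout(Invoice invoice);
}

class Invoice {
    private final Client client;

    Invoice(Client client) {
        this.client = client;
    }

    void becomeDue() {
        client.remindAbout(this);   // tell, don't ask
    }
}

public class InvoiceTest {
    @Test
    public void paymentDueShouldBeReportedToTheClient() {
        Client client = mock(Client.class);
        Invoice invoice = new Invoice(client);

        invoice.becomeDue();

        // an expectation at the level of the invoice's responsibility
        verify(client).remindAbout(invoice);
        // no expectations on how the invoice interacts with a currency
        // internally; that implementation detail is free to vary
    }
}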

Mocks and objects running out of sync

We have put ourselves in situations where our expectations on mock objects did not match the actual behavior of the classes implementing these roles. We got false positives from our tests: all unit tests passed, but the system as a whole did not work.

If you use tests with mocks, your tests contain assumptions about role behavior. You have to make sure these assumptions match the actual behavior of the objects. We have learned that unit tests are useful but not sufficient. Integration tests are needed as well, to test that all the parts work together correctly.

Adding behavior to mocks

Back in the days when we crafted our own mock objects, we sometimes added behavior to them. As a result, our mocks sometimes became a kind of simulator, and our tests tested the simulators as well as the production code. While simulators can be useful in system-level tests, they are harmful in unit tests: they make unit tests unfocused and hard to read, and it becomes difficult to trace test failures back to the root cause of the problem.

As a rule of thumb, the amount of behavior in mock objects should be restricted to returning values and unconditionally throwing exceptions.
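In Mockito terms, that rule of thumb amounts to no more than this (a sketch; pool, workerFor and NoWorkerAvailableException are illustrative names):

// a canned return value:
when(pool.workerFor("review")).thenReturn(worker);

// an unconditionally thrown exception:
when(pool.workerFor("unknown")).thenThrow(new NoWorkerAvailableException());

// anything beyond this (conditional logic, state) turns the mock
// into a simulator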

Mocking external libraries

Many developers who have been introduced to mock objects regard them as handy tools to stub out external libraries. We initially did this ourselves, and even the inventors of mock objects started out this way. Mocking external libraries is a pain, however. They usually don't lend themselves to mocking, because they consist of many concrete classes where the effect of mocking a few methods is unknown. Mocking external libraries makes your tests brittle. This is amplified by the fact that external libraries tend to change over time, and those changes are beyond your control.

Moreover, when you use mocks, you specify the behavior you expect from collaborators; you should not copy expectations from existing behavior. You can expect behavior from collaborators that you control, but not from collaborators you don't control. Remember that mocks are about responsibility driven design.

To cope with external libraries, write mock-based tests against a thin adapter layer around the library, and test this adapter layer against the actual library. Furthermore, write integration tests to ensure that the system as a whole works together with the external library.
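For example, you might hide a mail library behind a role that your own code owns (a sketch; Notifier and SmtpNotifier are illustrative names, not from the article):

// the role our domain code depends on and mocks in unit tests
public interface Notifier {
    void notifyDone(Task task);
}

// a thin adapter implementing the role on top of the external mail library;
// it is kept so thin that it is tested against the real library instead of
// being unit tested with mocks
public class SmtpNotifier implements Notifier {
    public void notifyDone(Task task) {
        // delegate to the external mail library here
    }
}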

Summary

Object oriented design is the art of assigning the right responsibilities to the right objects. Responsibility driven design emphasizes the responsibilities and behavior of objects. It helps us focus on the roles objects play and the responsibilities associated with these roles. Test driven development with mock objects does the same thing. It leads to an interaction based style of testing and helps us find new concepts and assign responsibilities in a Tell, Don't Ask manner - telling objects what to do rather than asking them for information. The resulting design consists of small, clear, autonomous, loosely coupled objects.

Although this is a useful approach, it still requires skill; it is easy to create unreadable, unmaintainable tests and code. You can improve your testing and design skills by practicing with other practitioners. A recent trend in practicing development skills is the coding dojo (www.codingdojo.org), analogous to dojos in martial arts: a workshop where you practice together with colleagues to improve your (test driven) development skills.

References

Check out the forthcoming book Growing Object-Oriented Software, Guided by Tests by the inventors of mock objects, Steve Freeman and Nat Pryce - http://www.mockobjects.com/book

Rebecca Wirfs-Brock & Alan McKean, Object Design: Roles, Responsibilities, and Collaborations, Addison-Wesley 2003

Kent Beck, Ward Cunningham, A Laboratory For Teaching Object-Oriented Thinking, 1989 - http://c2.com/doc/oopsla89/paper.html

Kent Beck, Test Driven Development: By Example, Addison-Wesley 2002

Martin Fowler, Mocks Aren't Stubs, 2004 – http://martinfowler.com/articles/mocksArentStubs.html

Tim Mackinnon, Steve Freeman, Philip Craig, Endo-Testing: Unit Testing with Mock Objects, 2000 - http://www.mockobjects.com/files/endotesting.pdf

Steve Freeman, Nat Pryce, Tim Mackinnon, Joe Walnes, Mock Roles, not Objects, 2004 - http://www.mockobjects.com/files/mockrolesnotobjects.pdf

Robert C. Martin, Agile Software Development, Principles, Patterns, and Practices, Prentice Hall 2002

Trygve Reenskaug, Working With Objects: The Ooram Software Engineering Method, Prentice Hall 1995



This article was originally published in the Summer 2009 issue of Methods & Tools
