Methods & Tools Software Development Magazine


This article was originally published in the Fall 2009 issue of Methods & Tools

The Spring Framework

Nico Mommaerts, Pieter Degreauwe


Before the Spring revolution, enterprise applications were generally written against the J2EE standards (specified by a group of vendors via the JCP process). The premise of J2EE was multiplatform and multivendor support: if you coded against the J2EE standards, you could deploy your application on any J2EE application server, on any platform. In theory, at least. Running your application on an application server has several benefits: the application server offers you services such as transaction management, messaging, mailing, a directory interface, etc. Any J2EE-compliant code can make use of these services, as long as it is written against the interfaces defined in the J2EE specifications.

Unfortunately, there were a few problems with these standards. First of all, they were too complex to use. Writing a component (an EJB, or Enterprise Java Bean) required you to write a set of XML files (deployment descriptors), home interfaces, remote/local interfaces, etc. Even worse, half of the deployment descriptors were vendor specific, so 'transparently migrating an application from vendor A to vendor B' was suddenly not so transparent anymore...

Secondly, there was the 'look-up' problem. When a component required another component, it was itself responsible for looking up its dependencies. Unfortunately, this look-up happens by name, so the name of the dependency was hardcoded in your component (in code or in the deployment descriptor). On top of that, letting components on J2EE application servers from different vendors communicate with each other was almost always problematic.

Last but not least, in many cases components did not need all the services the application server provided, but since there was no other API to build components against, every component became heavyweight, bloating your application.

By 'heavyweight' components we mean that they supported all features (clustering, remoting, etc.) even when we didn't need them. You could say 'but we get it for free, what's the problem?' The problem is that 'free' only means we do not have to write this functionality; we still have to configure it. This is why most developers longed to write plain old Java objects, without tons of XML configuration for things they didn't need...

The POJO Revolution

As said before, one of the goals of the J2EE standards was to give the developer some services for free so he could focus on writing the business logic. This is still a very worthwhile goal, but it turned out that the programming model was not as flexible as one would like. Coding against the J2EE standards was cumbersome: you had to comply with several seemingly arbitrary rules, and it forced you to write code that was less object oriented than you would want. If you wanted to take advantage of the J2EE services, you had to implement your classes in a very specific way, which coupled your business logic to the J2EE classes. It made your code less extensible and not very test friendly. Since your code used specific J2EE classes and interfaces, you couldn't run it in a unit test: it presumed all kinds of services to be present, normally provided by the J2EE application server. This is what we mean by code that is not testable outside of the container (the application server being the container). Deploying a J2EE application in a container and starting it up can be very time-consuming, which is far from ideal if you want to run your unit tests regularly.

More and more developers wanted to write just Plain Old Java Objects (we'll call them POJOs from now on), without the overhead of the J2EE standards. And to be honest, they were right. Writing business logic in POJOs is much easier. You don't have to implement any interfaces or extend other classes, you are free to cleanly implement and design your domain with regular Java classes, and you don't have to jump through hoops to test your code. The only problem with POJOs is that with J2EE (as it existed then) you could not benefit from the services provided by the container, such as transaction management, remoting, etc.

This is where the Spring Framework comes in. It brings a lightweight container in which your POJOs live. The difference with a heavyweight container is that the components are now as light as possible: you don't have to support services you don't need, which also means no unneeded configuration. You don't have to adhere to a particular programming model; you can just use POJOs and declaratively specify which services you want to use.

Due to the openness of this container and the use of AOP (Aspect-Oriented Programming), it is possible to enhance POJOs in a way that doesn't affect their code. So your POJO stays as clean as possible, and you can add features as you need them.

The Spring framework, what is it?

It is not a web framework or a persistence framework; it is a framework that integrates all kinds of Java technologies/APIs and makes it possible to use them with simple POJOs.

What is important to know is that Spring does not reinvent the wheel. It provides a nice and elegant way to use existing technologies (such as EJB, Hibernate, JDO, Toplink, JMS, etc). This is accomplished by several support classes and 'templates'.

Why and how is Spring more elegant? Let's look at an example.

We want to query the number of users older than a specified age. Using the standard JDBC API, we would write something like this:

Connection conn = ..;
PreparedStatement stm = null;
ResultSet resultSet = null;
try {
    stm = conn.prepareStatement("select count(0) from user u where u.age > ?");
    stm.setInt(1, age);
    resultSet = stm.executeQuery();
    resultSet.next();
    return resultSet.getInt(1);
} finally {
    resultSet.close();
    stm.close();
}

This is already simplified code: you should actually guard against NullPointerExceptions in the finally clause, and you should catch the SQLExceptions.

However, you can see that this is a very verbose way of writing code. Making use of Spring's JdbcTemplate gives more elegant code:

SimpleJdbcTemplate template = new SimpleJdbcTemplate(datasource); // this template supports varargs
int userCount = template.queryForInt("select count(0) from user u where u.age > ?", age);

Of course this is a very simple example, but it should give you an idea how it makes coding Java applications much easier by removing boilerplate code from your application code.

This example illustrates the JdbcTemplate, however, Spring provides lots of templates: TransactionTemplate, HibernateTemplate, ToplinkTemplate, JDOTemplate, JMSTemplate, etc.

For each supported technology there is a module which consists of helper classes to help you implement a certain layer or aspect of your application. The core of Spring, upon which all other modules depend, is the Inversion of Control and Aspect-Oriented programming module.

It is these two programming models that are the driving force behind Spring. They act like glue, pulling your application together, but in a non-invasive way: the code you write doesn't need Spring references all over the place. In places where you need specific frameworks or standards (Hibernate, JMS, etc.), Spring lets you integrate them easily.

What is Inversion of Control?

When developing an application, you always have dependencies between components, services, classes, etc. Without Inversion of Control you would 'wire' these together on the spot where you need the dependency. The disadvantage is that when you want to use a different implementation of a dependency, you are forced to change your code. This may not seem like a big disadvantage, but what if the implementation depends on the environment your code is running in? For example, you might want to use a different authentication service during development than in production. It is not really convenient to change your code every time you need to make a production artifact, or each time you want to run your unit tests. That is why the wiring of these dependencies is taken out of the code, and an external party, namely the container, manages the wiring. Hence the name Inversion of Control: you let something on the outside control how your dependencies are wired together. We speak of dependency injection because the container 'injects' the necessary dependencies instead of letting the developer manage them.

For example, we have a class PrinterService, which is responsible for sending a document to a Printer. We can have different Printer implementations: laser, inkjet, etc.

class PrinterService {
    private Printer printer = new InkjetPrinter();

    public void printDocument(Document doc) {
        printer.print(doc);
    }
}

This way, everybody who uses the PrinterService has no choice but to use the InkjetPrinter. That is not really what we want: we want to be able to use other printers. If we write a unit test for the PrinterService, we don't want the Printer implementation tested along with it, and we certainly don't want a page coming out of our printer every time we run the unit test! So we want to be able to change the Printer implementation used by the PrinterService.

Let's try the following:

class PrinterService {
    private Printer printer;

    public void setPrinter(Printer printer) {
        this.printer = printer;
    }

    public void printDocument(Document doc) {
        printer.print(doc);
    }
}

Now if someone wants to use the PrinterService he can do something like this:

PrinterService printerService = new PrinterService();
printerService.setPrinter(new LaserPrinter());

As you can see, the usage of the PrinterService is no longer limited to the InkjetPrinter we hardcoded in the first example. However, we still have to wire the Printer implementation to the PrinterService ourselves. If we want to run this code in a different context, with a different Printer implementation (one for testing purposes, for example), we still have to branch somewhere in our code to distinguish between the different contexts/implementations.
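This setter-based wiring is exactly what makes the class unit testable: a test can swap in a trivial fake. A minimal sketch, assuming Printer and Document interfaces shaped like the article's examples (the RecordingPrinter below is hypothetical, introduced only for illustration):

```java
// Assumed shapes of the types used in the article's examples.
interface Document {
    String getText();
}

interface Printer {
    void print(Document doc);
}

class PrinterService {
    private Printer printer;

    public void setPrinter(Printer printer) {
        this.printer = printer;
    }

    public void printDocument(Document doc) {
        printer.print(doc);
    }
}

// A fake Printer that only records what it was asked to print,
// so no real printing happens while the test runs.
class RecordingPrinter implements Printer {
    final java.util.List<String> printed = new java.util.ArrayList<>();

    public void print(Document doc) {
        printed.add(doc.getText());
    }
}
```

A test then wires in the fake with setPrinter and asserts on what was recorded, without any real printer involved.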

Now this is where Spring as a container comes into the picture. We want to let Spring handle the wiring of the dependencies of our PrinterService. To do this, we let the PrinterService and the Printer 'live' inside the Spring container; as such, they are called beans. We give each bean a unique name, an id, so we can reference them later on. We can declare this in a Spring XML configuration file (XML schema declaration omitted for brevity):

<bean id="laserPrinter" class="com.methodsandtools.LaserPrinter"/>
<bean id="inkjetPrinter" class="com.methodsandtools.InkjetPrinter"/>
<bean id="printerService" class="com.methodsandtools.PrinterService">
    <property name="printer" ref="laserPrinter"/>
</bean>

In this example we defined two Printer beans, 'laserPrinter' and 'inkjetPrinter'. We used the 'printer' property on the 'printerService' bean to wire the 'laserPrinter' to it. What actually happens is that the 'setPrinter' method on the 'printerService' bean is called with the 'laserPrinter' bean as argument. These beans and their lifecycle are now managed by the Spring container: we could configure whether a new instance of the laserPrinter bean should be created every time we inject it somewhere, or whether only one instance of the laserPrinter bean should live in the container and be used everywhere. Managing this lifecycle is beyond the scope of this article, but it is nevertheless a very interesting and necessary capability of the Spring container. Now, if we want to use the PrinterService, we have to ask the Spring container for the bean we declared in the configuration file. The Spring container is accessed through a bean factory:
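The one-shared-instance versus new-instance-per-injection choice just mentioned is configured per bean. As a sketch, in Spring's XML this is the scope attribute ('singleton' being the default):

```xml
<!-- one shared instance for the whole container (the default) -->
<bean id="laserPrinter" class="com.methodsandtools.LaserPrinter" scope="singleton"/>
<!-- a fresh instance every time the bean is injected or looked up -->
<bean id="inkjetPrinter" class="com.methodsandtools.InkjetPrinter" scope="prototype"/>
```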

XmlBeanFactory beanFactory = new XmlBeanFactory(new ClassPathResource("configuration.xml", getClass()));
PrinterService printerService = (PrinterService) beanFactory.getBean("printerService");

We now have a PrinterService instance that we can use without worrying about which Printer implementation is wired to it. In a different context we can use a different configuration file; no code needs to change in order to use a different Printer implementation. You can also split the declaration of the beans over multiple XML files, so you can pick and mix to your needs.

Note that the above code doesn't need to run in a J2EE container, meaning you don't need to start an application server and deploy it, it can be run as a regular Java program.

Although we instantiated the bean factory ourselves in this example, in a J2EE application there will usually be a bean factory associated with a context defined by the J2EE server, and you won't see any Spring code in the business logic. So, unlike with standard J2EE, you are not tied to an API and do not have to implement interfaces in order to use the Inversion of Control container.

Once you start working this way, you'll soon start thinking in terms of declaring beans and wiring them together. There are more possibilities than demonstrated here, like using inheritance between beans, controlling the lifecycle, autowiring dependencies by type, etc., but the above example should give you a basic understanding of the power of Dependency Injection in Spring.

What is Aspect-Oriented programming?

In most applications there are concerns that 'cut' across different abstraction layers; the typical example is logging. You might want to log, in every method of your service layer, that you are entering and exiting that specific method. This litters log statements all over your service layer, while logging is really one concern and as such should be separated from the business logic into a different entity. This is what Aspect-Oriented Programming frameworks (we'll call it AOP from now on) aim to do. In AOP terminology, a concern is written as an advice, which is an entity like a class. This advice can then be applied at certain pointcuts in your code. A pointcut is a place, or several places, in your code where you want the crosscutting concern, the advice, to be applied: for example, when entering or exiting a certain method. An advice together with a pointcut is called an aspect, hence the name Aspect-Oriented Programming. There are many AOP frameworks out there; Spring uses its own, based upon dynamic proxies and/or CGLIB bytecode generation, but can integrate with others like AspectJ if desired.

A simple example of an advice, straight from the Spring javadocs:

class TracingInterceptor implements MethodInterceptor {
    public Object invoke(MethodInvocation i) throws Throwable {
        System.out.println("method " + i.getMethod() + " is called on " +
            i.getThis() + " with args " + i.getArguments());
        Object ret = i.proceed();
        System.out.println("method " + i.getMethod() + " returns " + ret);
        return ret;
    }
}

The above advice is invoked when a method is intercepted; which methods are intercepted is defined in the pointcut, more on this later. The 'invoke' method receives a MethodInvocation instance that contains all the needed information about the method being intercepted, as can be seen in the System.out statement where we print this information. The next line is also very interesting: we tell the intercepted method to proceed, which means that, if we wanted to, we could prevent the invocation of the intercepted method!
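The dynamic proxies that Spring's own AOP support builds on are part of the JDK itself. A stripped-down, plain-Java sketch of the same tracing idea, with no Spring classes involved (the Greeter interface and the trace helper are hypothetical, introduced only for illustration):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Greeter {
    String greet(String name);
}

class TracingDemo {
    // Wraps any object behind its interface and logs each call,
    // analogous to what a Spring MethodInterceptor does.
    @SuppressWarnings("unchecked")
    static <T> T trace(final T target, Class<T> iface) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                System.out.println("entering " + method.getName());
                Object ret = method.invoke(target, args); // the 'proceed' step
                System.out.println("exiting " + method.getName() + ", returned " + ret);
                return ret;
            }
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }

    public static void main(String[] args) {
        Greeter plain = name -> "Hello, " + name;
        Greeter traced = trace(plain, Greeter.class);
        System.out.println(traced.greet("Spring"));
    }
}
```

The caller only ever sees the Greeter interface; the tracing behaviour is added entirely from the outside, which is exactly how Spring keeps the POJO itself untouched.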

Now that we have defined what to do with an intercepted method, we have to define which methods will be intercepted. These two concepts are separated, which is a powerful principle. Saying which methods to advise is done in a pointcut, which is again an interface to be implemented, the Pointcut interface:

public interface Pointcut {
    ClassFilter getClassFilter();
    MethodMatcher getMethodMatcher();
}

For the most common cases there are a lot of standard implementations of this interface in the Spring framework.

One of the most (in)famous of the J2EE standards is the EJB standard, Enterprise Java Beans. Among other things, it provides declarative transaction management for your database access code. Managing transactions is something you have to do for each call to the database, so being able to do this declaratively in an XML file was a real advantage. However, to benefit from it you had to code against an API, which sometimes meant having up to 7 files for one EJB and implementing arbitrary methods, and your EJB would only work when deployed in a J2EE application server. This is just one example of the services that J2EE provides but that are a real pain to use because of the API you have to program against. What Spring aims to do is give you the same services as J2EE, e.g. transaction management, but in such a way that you can still code using POJOs. Spring accomplishes this by using AOP.
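To make this concrete, Spring's declarative transactions can be sketched in the same XML style as the earlier bean definitions, using the Spring 2.x 'tx' and 'aop' namespaces (the bean names and package below are hypothetical; namespace declarations are omitted for brevity):

```xml
<!-- run every method of the matched beans inside a transaction -->
<tx:advice id="txAdvice" transaction-manager="transactionManager">
    <tx:attributes>
        <tx:method name="*"/>
    </tx:attributes>
</tx:advice>

<!-- the pointcut: all methods of classes whose name ends in Service -->
<aop:config>
    <aop:pointcut id="serviceMethods"
        expression="execution(* com.methodsandtools..*Service.*(..))"/>
    <aop:advisor advice-ref="txAdvice" pointcut-ref="serviceMethods"/>
</aop:config>
```

The service classes themselves remain plain POJOs; the transaction behaviour is woven in by the same advice-plus-pointcut mechanism described above.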


As said before, writing tests for EJBs was quite problematic, since your components (the EJBs) had to live inside a running container.

Writing your components in the POJO model solves this completely. But that is not all: Spring brings some nice features for writing integration tests. Especially useful are the transactional tests, which you can configure to roll back all inserted data, so your test database stays clean and won't interfere with other tests, or with the same test should it fail. You are not required to manually clean up your test data after each test.

Let's look at an example:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "/applicationContext.xml", "/database-test.xml" })
@Transactional
public class FictitiousTransactionalTest {

    @Autowired // inject by type
    private PetClinicService petClinicService;

    @Resource(name = "hibernateTemplate") // inject by name
    private HibernateTemplate hibernateTemplate;

    @Before
    public void setUpTestDataWithinTransaction() {
        // set up test data within the transaction
    }

    @Test
    public void modifyDatabaseWithinTransaction() {
        // logic which uses the test data and modifies database state
        assertEquals(2, petClinicService.countVet());
    }

    @Test
    @NotTransactional
    public void performNonDatabaseRelatedAction() {
        // logic which does not modify database state
    }
}
First of all, we can define the Spring context files (the files where we wire all the POJOs together) that should be loaded by the integration test (@ContextConfiguration).

Secondly, we can inject components into our test, so we can start testing them (@Resource and @Autowired).

For integration tests it is especially useful that we can perform our tests in one transaction and roll that transaction back after the test has run (@Transactional, @NotTransactional, @BeforeTransaction, @AfterTransaction, @TransactionConfiguration).

Conclusion: To Spring or not To Spring?

As you might have understood by now, we really think that most applications (if not all) should use a lightweight Dependency Injection container.

Inversion of Control promotes loose coupling and testability of your code, and the POJO programming model is how it should have always been. Spring is the most popular and widely accepted DI container there is for Java. It is entirely up to you how much of it you use: you can choose to use just some of the supporting classes or templates, or you can go all out and build your entire application from Spring beans. Besides the core Spring functionality covered here, a lot of extra modules have appeared over the years that replace existing technologies. For example, Spring Web Flow is a very nice framework, built upon the Spring framework, for building web applications.

With Spring it's your choice which APIs you use, and how you build your application.


The Spring Framework:

Spring Web Flow:

Dependency Injection:
