Axel Irriger, iteratec GmbH, http://www.iteratec.de
Apache Camel is an open-source framework to exchange, route and transform data using various protocols. It comes prepackaged with components for dealing with various backend systems, as well as powerful routing and filter capabilities.
Web Site: http://camel.apache.org
Version discussed: Apache Camel 2.4.0
License & Pricing: Open Source with commercial support and packaging by FUSE
Support: User mailing list, developer mailing list, Internet Relay Chat, Fuse forums. The book "Camel in Action" (Manning) is currently the only book available.
The need to exchange data between different applications and environments is as old as software development. Every application is built with a specific idea in mind and has the data model and data format best suited to its task. As soon as more than two or three applications are involved, the task of exchanging data among them is always present. Third-party software often offers export and import interfaces, such as comma-separated values (CSV), but these are rarely sufficient: a different data structure, the incorporation of additional pieces of information, or the filtering of certain parts are typical requirements.
In the past, this was solved either by in-house development efforts, resulting in dedicated glue code, or by using special, proprietary application suites. Nowadays, it can be solved with open source frameworks, bringing systems integration to a commodity level.
This article introduces you to Apache Camel, one of those integration frameworks.
1.1 Overview of Enterprise Application Integration (EAI)
During the evolution of systems and their respective integration, methodologies and patterns for typical challenges have been documented.
When integrating two systems, you typically face several, quite common, issues:
- You must access and interact with the system. This is about the technical access to the system, by means of API, file access or database connectivity.
- You must transform incoming data into what is understood by the external system. This is about converting data from one format to another. This not only means data, but also file formats, protocols and alike.
- You may need to distribute data or process only specific sets of data. This deals with routing and filter capabilities.
These issues can be solved in an application-specific way, although this limits reusability and slows the process down considerably by reinventing the wheel over and over again. Therefore, best practices were extracted, discussed and documented.
1.1.2 Typical tasks
To solve connectivity issues, the typical solution or framework comes with a set of pre-packaged components for accessing often used resources, such as web services, file systems, databases, HTTP URLs.
Proprietary solutions typically also extend this to other vendor products, such as SAP or Siebel, but open source frameworks mostly limit this to open-standards resources.
A typical EAI product or framework also exposes some sort of API or software development kit (SDK), so that it can be extended to access legacy systems which are not covered by the vendor itself.
Besides connecting to a system, data must often be transformed. Such solutions cover standards-based mapping technologies, such as Java beans and XSLT. In a proprietary environment, you often also find specialized tooling, such as graphical mapping frameworks.
1.1.3 Integration patterns
Since connectivity and transformation are only one side of the coin, routing and filter capabilities are essential, too. To cover those, most frameworks rely on implementing the enterprise integration patterns (EIP), a set of 65 patterns which cover typical challenges.
If you apply a pattern to a problem of yours, you reduce the implementation effort, since you build on already proven knowledge, and you also increase the understanding of the problem domain for new team members. Besides that, you avoid reinventing the wheel once more.
Even situations not covered directly by the enterprise integration patterns can often be solved by combining them. If a problem cannot be solved directly, the patterns at least provide a proven baseline to extend from.
1.2 The Camel framework
Apache Camel is such an EAI framework, developed around the enterprise integration patterns. It provides a decent list of pre-packaged components for accessing various backend systems and has a very extensible overall architecture. Since it is built on the Spring framework, it allows a great deal of customization and extensibility.
Due to its openness, it can be deployed stand-alone as well as embedded within other applications or frameworks.
Before digging deeper into Apache Camel itself, let's first see how it is configured.
2.1 Fluent Builder
During the design of an API, you can always choose between several approaches. In good API design, you focus on usability. Apache Camel's focus is expressiveness: it evolved around the question of how easily users can combine the various functions of the API.
To simplify development, Apache Camel makes extensive use of the "fluent builder" pattern, which focuses on expressiveness in creation. The central principle of this pattern is the definition of an interface with all the necessary methods; every method invoked on an object implementing this interface returns that very object instance (by means of its interface). With this, the user can easily create a chain of operations on the same object, expressing logically and semantically what he wants the object to do.
Consider the example of a purse. You put 15 units of money into it, then withdraw 10 and add another 5 to it. In traditional programming, you may write:
Purse p = new Purse(); p.add(15); p.remove(10); p.add(5);
While this may be suitable, it is difficult to understand. A more fluent example would be:
Purse p = new Purse(); p.putInto(15); p.withdraw(10); p.putInto(5);
This is a lot more understandable. It can be expressed even more concisely, though:
Purse p = new Purse().putInto(15).withdraw(10).putInto(5);
What we did here is create a new instance of a money purse and work directly with this object to put some money into it and withdraw it. What could be done in three or four lines is put into one, making the code more readable overall and a lot easier to understand.
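To make the fluent style concrete, here is a minimal sketch of such a purse class. The class and method names (Purse, putInto, withdraw) are illustrative, not part of any real API; the key point is that each mutating method returns this, which is what enables the chaining.

```java
// Minimal sketch of a fluent-builder style class.
// Purse, putInto and withdraw are illustrative names, not a real API.
public class Purse {
    private int amount;

    public Purse putInto(int units) {
        this.amount += units;
        return this; // returning "this" enables method chaining
    }

    public Purse withdraw(int units) {
        this.amount -= units;
        return this;
    }

    public int balance() {
        return amount;
    }

    public static void main(String[] args) {
        // the chained form: 15 in, 10 out, 5 in => 10 remaining
        Purse p = new Purse().putInto(15).withdraw(10).putInto(5);
        System.out.println(p.balance()); // prints 10
    }
}
```

Camel's route definitions use exactly this idea: from(...), to(...) and friends each return the route builder, so a whole data flow reads as one chained expression.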
To configure a routing solution with Apache Camel, you can either use Java code directly, which enables integration with any application, or use an XBean syntax, which is the typical Spring approach.
Both approaches will be described here.
Either way, the definition consists of the same steps:
- creation of a CamelContext, which is the container for everything in Apache Camel
- definition of a data flow, which is defined as a Route. A Route always has a start point and continues with targets, to which the obtained (or created) data is sent.
Each CamelContext can hold any number of Route definitions, which enables you to logically group your data flows.
2.2.1 Spring DSL and XBean notation
To declare a route in Spring or XBean notation, all you have to do is to nest XML elements.
In Spring notation, during creation of the context, the declared beans will be created on the fly and wired together.
What you first need is a CamelContext. This is the container, which can contain any number of routes or components. Within that, you define a route.
A route has one starting point, which is described with the "from" keyword, and any number of "to" destinations, to which data is sent. Each step describes a message exchange from a source to a target destination. If, on the way, data is modified, the result will be used as input for the next exchange.
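As a sketch of the XML shape, a minimal route that moves files from one folder to another could be declared like this (the folder names are illustrative):

```xml
<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
  <route>
    <!-- start point: scan the inbox folder for files -->
    <from uri="file://inbox"/>
    <!-- destination: write everything to the outbox folder -->
    <to uri="file://outbox"/>
  </route>
</camelContext>
```

The camelContext element is the container; each nested route element holds exactly one from and any number of to elements.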
2.2.2 Java DSL
The same methodology as described for the Spring based configuration applies to the Java DSL. Since we gain the benefit of working directly in and with the API, the route description is more concise, and direct interaction with the CamelContext is simplified.
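As a sketch, the same file-moving route looks like this in the Java DSL. This assumes the Camel 2.x libraries on the classpath; the class name and folder names are illustrative:

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class FileCopyRoute {
    public static void main(String[] args) throws Exception {
        // the CamelContext is the container for routes and components
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // move files from the inbox folder to the outbox folder
                from("file://inbox").to("file://outbox");
            }
        });
        context.start();    // start all routes
        Thread.sleep(5000); // let the route run for a moment
        context.stop();
    }
}
```

The RouteBuilder's configure() method is where the fluent route definitions live; the context can hold any number of such builders.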
In the previous section we have seen how Camel can be configured using both the Java API and the XBean syntax. Now, we'll see how that can be brought to action in order to actually get something done.
3.1 Example "File to file with XSLT"
The first sample is a simple one. We want to move a file from directory A to directory B. Because this is rather easy, we'll add an XSLT transformation to it.
Using the Java DSL, the definition is:
from("file://inbox")              (1)
    .to("xslt:someXSLT.xsl")      (2)
    .to("file://outbox");         (3)
What does this do?
In line (1), a route definition starts with the from keyword. The URL-style string parameter specifies that the file component shall be used. Everything after the slashes is passed as a parameter to the file component. In this case, the component scans the "inbox" folder for files and passes them on to the next processing step.
In line (2), another component is instantiated. This time, the XSLT component is used with an XSL template, loaded from the Java classpath. Everything sent to this component gets transformed using that XSLT. The output of the transformation is passed on to the next step.
In line (3), another file component is used. This is actually another instance of the same component that was already configured in line (1). Everything sent to it will be written to the "outbox" folder.
As you see, this is a very straightforward and convenient way of configuring data flow. The next sample is a little bit more complex.
3.2 Example "file to database"
This sample may be used to upload data into a database.
For this to work, you must know that the JDBC component of Apache Camel sends SQL queries to a datasource. To use that component, you need to read the file content, create an SQL query from it and send it to the database.
Using the Spring DSL, the result is (slightly modified from the Apache Camel homepage):
<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="file://inbox"/>
    <to uri="xslt:someXSL.xsl"/>
    <to uri="jdbc:testdb"/>
  </route>
</camelContext>

<!-- A demo showing how to bind a data source for Camel in Spring -->
<bean id="testdb" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
  <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
  <property name="url" value="jdbc:hsqldb:mem:camel_jdbc"/>
  <property name="username" value="sa"/>
  <property name="password" value=""/>
</bean>
Instead of using a stylesheet to perform the transformation, it would also be valid to use a Java bean, which serves the same purpose but may perform some more powerful operations.
3.3 Example "RDBMS to JMS to file"
In the third sample, we'll take a look at how we can read data from the database and dump it to a file. Since that would be very much like what we did before, we'll add a persistence layer to it, with a JMS queue in-between.
Using the Java DSL, we get this source code:
from("timer://foo?period=60000")
    .setBody(constant("select content from downloadTable"))
    .to("jdbc:testDB")
    .to("jms:queue:order");

from("jms:queue:order").to("file://outbox");
The sample above uses the timer component. Every minute, the given SQL statement is executed and its result is sent to the JMS queue. From there, it is read and written to the file system.
3.4 Extending Apache Camel
Apache Camel comes with a lot of components already. Although this is great, there will probably come a point where you need some functionality that is not already provided.
For that, we must extend Camel.
3.4.1 Custom processors
The first extension is to integrate some custom processing logic within the route. For that, we add a Java processor to the routing chain. You may wonder how this is possible, but in fact, everything in Apache Camel is a sort of processor.
To realize this, you simply define a Java bean which implements the Processor interface. With this, you have full control over what you receive and what is done to the data. This Java bean is then hooked in by defining it as a Spring bean and passing it to the bean component provided by Camel.
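A minimal sketch of such a processor might look as follows. The class name and the upper-casing logic are illustrative; only the Processor interface and its process(Exchange) method come from Camel:

```java
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

// Illustrative custom processor: upper-cases the message body.
public class UppercaseProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        // read the incoming message body as a String and
        // replace it with an upper-cased version before it is passed on
        String body = exchange.getIn().getBody(String.class);
        exchange.getIn().setBody(body.toUpperCase());
    }
}
```

In the Java DSL, such a processor can also be hooked in directly, for example: from("file://inbox").process(new UppercaseProcessor()).to("file://outbox");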
3.4.2 Custom components
Besides implementing a custom processor, which is fine for specific transformations, you can also create new components. These can be deployed alongside Camel and used within your routes. Camel's dynamic lookup feature enables you to use such components simply by referring to them by name.
To implement a component, you create a Java bean which implements the Component interface. You then create a file in META-INF/services/org/apache/camel/component. The name of the file reflects the component name, which is used in the route definition. The content of the file is simply the fully qualified name of the Java class implementing the component. That's it!
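As a sketch, for a hypothetical component to be used as "mycomp:" in route URIs, implemented by the (made-up) class com.example.MyComponent, the discovery file would be META-INF/services/org/apache/camel/component/mycomp with this content:

```
class=com.example.MyComponent
```

Camel resolves the scheme name of a route URI against these files on the classpath at runtime, which is what makes the dynamic lookup work.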
After you created a solution, you surely want to test it.
One typical testing approach is to simply use live components, such as reading files from a file system, or to inject data directly into your route.
Apache Camel supports the embedded creation of a CamelContext, which can then provide you with a client to send messages to defined endpoints. This way, you can send information directly from your application to Camel, thus invoking routes and possibly interacting with third-party systems.
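As a sketch (assuming Camel 2.x on the classpath, endpoint names illustrative), injecting data into an embedded context looks like this:

```java
import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.impl.DefaultCamelContext;

public class InjectExample {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        // ... add routes starting from "direct:start" here ...
        context.start();

        // the ProducerTemplate is the client Camel provides for
        // sending messages to any endpoint in the context
        ProducerTemplate template = context.createProducerTemplate();
        template.sendBody("direct:start", "some payload");

        context.stop();
    }
}
```

The "direct:" component is handy here, since it delivers messages synchronously within the same JVM.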
A better approach than using real components is to apply traditional concepts such as aspect-oriented programming (AOP) or mock objects with Camel.
4.1 Invasive testing with Mocks
To test your solution with mock objects, you can simply change the target of a route to the mock component. This component collects information about what is sent to it and can be queried afterwards. With this approach you can test how many messages (if any) were received by the component, and what was actually sent to it.
If you combine these two approaches, it is possible to test whether you correctly created a route and that it works as expected.
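A test along these lines might look like the following sketch, which assumes the camel-test module (CamelTestSupport) is available; the endpoint names and message body are illustrative:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

public class MockRouteTest extends CamelTestSupport {
    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            public void configure() {
                // route under test: the real target was replaced by a mock
                from("direct:start").to("mock:result");
            }
        };
    }

    @Test
    public void testMessageArrives() throws Exception {
        MockEndpoint mock = getMockEndpoint("mock:result");
        mock.expectedMessageCount(1);         // how many messages?
        mock.expectedBodiesReceived("hello"); // what content?

        template.sendBody("direct:start", "hello");

        mock.assertIsSatisfied(); // fails the test if expectations not met
    }
}
```

Both kinds of checks, message counts and message contents, are expressed as expectations on the mock endpoint before the message is sent.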
On the other hand, using a mock requires changing the route. This is acceptable during development, where you can build your routing incrementally and see, step by step, whether everything works as defined. For automated unit testing, however, it is not an option, since you do not want to change the solution under test.
4.2 Non invasive testing with Interceptors
If you need to test things without changing your once-defined route, the interceptor pattern can be used. An interceptor allows you to transparently add functionality to an application without changing its code. For instance, in a web application you can use an interceptor to add authentication to certain servlets. To be able to do that, interception must be supported by the framework or application. Another option is to use AOP, though that goes beyond the scope of this article.
Apache Camel supports the definition of interceptors before a route. These can then re-route the message flow to certain endpoints, so that you can non-invasively integrate stub implementations and therefore can extensively test your solution.
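Inside a RouteBuilder's configure() method, such an interceptor could look like this sketch (endpoint URIs are illustrative):

```java
// Re-route anything sent to the database endpoint to a mock,
// without touching the route definition itself.
interceptSendToEndpoint("jdbc:testdb")
    .skipSendToOriginalEndpoint()
    .to("mock:db");

// the route under test stays exactly as defined
from("file://inbox").to("jdbc:testdb");
```

The skipSendToOriginalEndpoint() call ensures the real database is never touched during the test, while the mock records everything that would have been sent to it.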
It is important to note that any interceptor must be defined before the route it applies to; otherwise it will not take effect.
5. Apache Camel inside
As mentioned in the introduction, Apache Camel can be integrated with other components and can also run in an embedded fashion. Two popular projects using Camel are Apache ActiveMQ and Apache ServiceMix.
Apache ActiveMQ is a JMS provider. Combined with Apache Camel, you get a one-stop solution for message-oriented middleware (MOM): ActiveMQ delivers powerful JMS capabilities, whereas Camel contributes the backend connectivity and routing capabilities to implement business logic. Using this combination, you can easily decouple systems from each other, with a solid JMS implementation in between for load balancing, for instance.
Apache ServiceMix is an enterprise service bus, which actively employs Camel to implement the enterprise integration patterns. Since ServiceMix comes with a set of backend modules as well, there are various implementation alternatives. Typically, Camel is used for routing and filter capabilities and the backend connectivity is handled by ServiceMix. There are other options as well, depending on the style of usage you decided on.
The main distinction between Camel and ServiceMix is that ServiceMix aims for the larger scope of large-scale reuse in terms of connectivity, service orientation and clustering. It is therefore more of an infrastructure on which solutions are deployed. Camel is such a solution, which can be, and often is, deployed on ServiceMix.
In this article, we saw how typical integration solutions can be greatly simplified by using a framework like Apache Camel. To illustrate this, several examples were given to help you get started. If you need to extend the framework beyond what is already there, you are now equipped with the basic understanding on how to create custom processor modules or even Camel components.
To complete the development cycle, we took a look at how testing is already supported by Camel, both invasively and non-invasively.
I hope you see as much value in Camel as I do, and are eager to simplify your life by trying out how Camel can make integration easier!