9/18/12

A command line interface for jclouds

Prologue

I've been using and contributing to jclouds for over a year now. So far I've used it extensively in many areas, especially in the Fuse ecosystem. In all its awesomeness it was lacking one thing: a tool which you can use to manage any cloud provider that jclouds provides access to. Something like the EC2 command line tools, but with the jclouds coolness. A common tool through which you would be able to manage EC2, Rackspace, OpenStack, CloudStack ... you name it.

I am really glad that now there is such a tool and its first release is around the corner.

So this post is an introduction to the new jclouds cli, which comes in two flavors:

  1. Interactive mode (shell)
  2. Non interactive mode (cli)


A little bit of history

Being a Karaf committer, one of the first things I did around jclouds was to work on its OSGi support. The second was to work on jclouds integration for Apache Karaf. So I worked on a project that made it really easy to install jclouds on top of Karaf, added the first basic commands around blob stores, and the Jclouds Karaf project started to take shape. At the same time a friend and colleague of mine, Guillaume Nodet, had started similar work, which he contributed to Jclouds Karaf. This project now supports most of the jclouds operations and provides rich completion support, which makes it really fast and easy to use.

Of course, this integration project is mostly targeting people that are familiar with OSGi and Apache Karaf and cannot be considered a general purpose tool, like the one I was dreaming about in the prologue.

A couple of months ago, Andrew Bayer started considering building a general purpose jclouds cli. Then it struck me: "why don't we reuse the work that has been done on Jclouds Karaf to build a general purpose cli?"

One of the great things about Apache Karaf is that it is easily branded and, due to its modular foundation, you can pretty easily add or remove bits in order to create your own distribution. On top of that it allows you to discover and use commands outside OSGi.

So it seemed like a great idea to create a tailor made Karaf distribution, with jclouds integration "out of the box", that would be usable by anyone without having to know anything about Karaf, both as an interactive shell and as a cli. And here it is: Jclouds CLI.

Getting started with Jclouds CLI

You can either build the cli from source, or download a tarball. Once you extract it, you'll find a structure like:


The bin folder contains two scripts:

  1. jclouds-cli: Starts up the interactive shell.
  2. jclouds: Script through which you invoke jclouds operations.

The zip distribution provides the equivalent bat files for Windows.

Let's start with the jclouds script. The script takes a category and an action, followed by options and arguments. The general usage is:

./jclouds [category] [action] [options] [arguments]

Category: The type of command to use. For example node, group, image, hardware etc.
Action: The action to perform on the category. For example: list, create, destroy, run-script, info etc.

All operations, whether they are compute service or blob store operations, will require a provider or api and valid credentials for that provider/api. All of these can be specified as options to the command. For example, to list all running nodes on Amazon EC2:
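
Assuming the identity and credential options follow the same names as the environment variables described below, the command looks something like this:

./jclouds node list --provider aws-ec2 --identity <your-access-key> --credential <your-secret-key>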

For apis you also need to specify the endpoint. For example, the same operation for CloudStack can be:
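
Again assuming the option names mirror the environment variables, something like:

./jclouds node list --api cloudstack --endpoint <your-cloudstack-endpoint> --identity <your-api-key> --credential <your-secret-key>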

Of course, you might not want to specify the same options again and again. In that case you can just specify them as environment variables. The variable names are always upper-case and prefixed with JCLOUDS_COMPUTE_ or JCLOUDS_BLOBSTORE_ for compute service and blob store operations respectively. So the --provider option would match JCLOUDS_COMPUTE_PROVIDER for the compute service or JCLOUDS_BLOBSTORE_PROVIDER for blob stores.

The picture below shows a sample usage of the cli inside an environment set up for accessing EC2. The commands create 3 nodes on EC2 and then destroy all of them.



The environment variables configured are:

JCLOUDS_COMPUTE_PROVIDER aws-ec2
JCLOUDS_COMPUTE_IDENTITY ????
JCLOUDS_COMPUTE_CREDENTIAL ???


When using the jclouds script, all providers supported by jclouds will be available by default. You can add custom providers and apis by placing the custom jars under the system folder (preferably using a Maven-like directory structure).

Using the interactive shell

The second flavor of the jclouds cli is the interactive shell. The interactive shell works in a similar manner, but it also provides some additional features:


  • Service Reusability
    • Services are created once
    • Commands can reuse services resulting in faster execution times.
  • Code completion
    • Completion of commands
    • Completion of argument values and options
  • Modularity
    • Allows you to install just the things you need.
  • Extensible
    • You can add commands of your own.
    • You can add additional projects.
      • Example: As of Whirr 0.8.0 you can install it in any Karaf based environment, so you can add it to the cli too.


In the example above we created a reusable service for EC2 and then performed a node list, which displayed the nodes that we created and destroyed in the previous example.


Using the interactive shell with multiple providers or apis

The interactive shell allows you to register compute services for multiple providers and apis, or even multiple services for the same provider or api using different configuration parameters, accounts etc.




The image above displays how you can create multiple services for the same provider, with different configuration parameters. It also shows how to specify which service to use in each case. Note again that in this example the identity and provider were not passed as options but were provided as environment variables.

Modular nature of the interactive mode

As mentioned above, the interactive shell is also modular, allowing you to add or remove modules at runtime. A module can add support for a provider or an api, but it can also be any other type of extension you may need.


To see the list of available providers and apis that can be used in the interactive mode, you can use the features:list and features:install commands. In the example below we list the features and grep for "openstack", then install the jclouds openstack-nova api. Then we create a service for it and list the nodes in our OpenStack deployment.
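
Roughly, the session looks like this (the feature and command names here are from memory and may differ slightly in your version, so double check with features:list):

features:list | grep openstack
features:install jclouds-api-openstack-nova
jclouds:compute-service-create --api openstack-nova --endpoint <your-endpoint> --identity <tenant:user> --credential <password>
jclouds:node-list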



Configuring the command output

Initially the command output was designed and formatted using the most common cloud providers as a guide. However, the output was not optimal for all providers (different widths etc). Moreover, different users needed different things to be displayed.

To solve that problem the cli uses a table-like output for the commands, with auto-adjustable column sizes to best fit the output of the command. The output of the commands is also fully configurable.

Each table instance is fed the display data as a collection which represents the table rows. The column headers are read from a configuration file. The actual value for each cell is calculated using JSR-223 script expressions (by default it uses Groovy), which are applied for each row and column. Finally, the table supports sorting by column.

A sample configuration for the hardware list command can be something like:
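
As a rough sketch of the idea (the actual property names used by the cli may differ), such a configuration could look like:

hardware.headers = id;ram;cpu
hardware.expressions = hardware.id;hardware.ram;hardware.processors.sum{it.cores}
hardware.sortby = cpu

Each header has a matching Groovy expression that is evaluated against every hardware object to produce the cell value.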



With this configuration the hardware list command will produce the following output:



We can modify the configuration above and add an additional column that will display the volumes that are assigned to the current hardware profile. In order to do so we need to have a brief idea of what the jclouds hardware object looks like:



So in order to get the size and type of each volume we could use the following expression on the hardware object: hardware.volumes.collect{it.size + "GB " + it.type}.



The updated configuration would then look like:

The new configuration would produce the following output on EC2:


You can find the project on GitHub: https://github.com/jclouds/jclouds-cli. Or you can directly download the tarballs at: http://repo1.maven.org/maven2/org/jclouds/cli/jclouds-cli/1.5.0/

7/8/12

Apache Karaf meets Apache HBase

Introduction

Apache HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable. If you are a regular reader you most probably already know what Apache Karaf is, but for those who don't: Apache Karaf is an OSGi runtime that runs on top of any OSGi framework and provides you with a set of services, a powerful provisioning concept, an extensible shell & more.

Since Apache HBase is not OSGi ready (yet), people that are developing OSGi applications often have a hard time understanding how to use HBase inside OSGi.

This post explains how you can build an OSGi application that uses HBase. Please note, that this post is not about running parts of HBase inside OSGi, but focuses on how to use the client api inside OSGi.

As always I'll be focusing on Karaf based containers, like Apache ServiceMix, Fuse ESB etc, but most of the things inside this post are generally applicable to all OSGi runtimes.

HBase and OSGi

Let's have a closer look at HBase and explain some things about its relation with OSGi.

Bad news

  • HBase provides no OSGi metadata, which means that you either need to wrap HBase yourself or find a 3rd party bundle for HBase.
  • HBase comes in as a single jar.
  • Uses Hadoop configuration.
The first point is pretty straightforward.

The second point might not seem like bad news at first glance, but if you give it some thought you will realize that when everything is inside a single jar, things are not quite modular. For example, the client api is inside the same jar as the avro & thrift interfaces, and even if you don't need them, they will still be there. So that jar contains stuff that may be totally useless for your use case.

Please note that the single jar statement does not refer to dependencies like Hadoop or Zookeeper.

The fact that HBase depends on the Hadoop configuration loading mechanism is also bad news, because some versions of Hadoop are a bit itchy when running inside OSGi.

Good news

  • There are no class loading monsters inside HBase, so you won't be really bitten when you are trying to use the client api inside OSGi.
The challenges

So there are two types of challenges. The first is to find or create a bundle for HBase with requirements that make sense for your use case. The second is to load the HBase client configuration inside OSGi.

Finding a bundle for HBase

As far as I know, there are bundles for HBase provided by the Apache ServiceMix Bundles project. However, the bundles that are currently provided have more requirements, in terms of required packages, than are actually needed (see bad news, second point). Providing a bundle with more sensible requirements is currently a work in progress, and hopefully it will be released pretty soon.

In this post I am going to make use of the Pax URL wrap protocol. The wrap protocol will create OSGi metadata on the fly for any jar. Moreover, all package imports will be marked as optional, so you won't have to deal with unnecessary requirements. This is something that can get you started, but it's not recommended for use in a production environment. So you can use it in a P.O.C., but when it's time to move to production, it might be a better idea to use a proper bundle.
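
For example, from the Karaf shell you can install the plain HBase jar through the wrap protocol (the version below is just an example, use the one you actually target):

osgi:install -s wrap:mvn:org.apache.hbase/hbase/0.94.0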


Creating a Karaf feature descriptor for HBase

After experimenting a bit, I found that I could use HBase inside Karaf, by installing the bundles listed in the feature descriptor below:
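
A rough skeleton of such a descriptor is shown below; the exact bundle list, artifact ids and versions are assumptions and should be adjusted to your setup (the ServiceMix hadoop-core bundle plus the wrapped HBase jar are the important parts):

<features name="hbase-osgi">
  <feature name="hbase" version="0.94.0">
    <!-- commons and other supporting bundles go here, same as in the camel-hbase feature -->
    <!-- hadoop-core, wrapped as a bundle by ServiceMix -->
    <bundle>mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.hadoop-core/1.0.0_1</bundle>
    <!-- zookeeper and the plain hbase jar, wrapped on the fly -->
    <bundle>wrap:mvn:org.apache.zookeeper/zookeeper/3.4.3</bundle>
    <bundle>wrap:mvn:org.apache.hbase/hbase/0.94.0</bundle>
  </feature>
</features>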


In fact this feature descriptor is almost identical to the feature descriptor provided by the latest release of Apache Camel. One difference is the version of Apache Hadoop used. I preferred to use a slightly lower version of Apache Hadoop in this example, which seems to behave a bit better inside OSGi.


Creating HBase client configuration inside OSGi

The things described in this section may vary, depending on the version of the Hadoop jar that you are using. I'll try to provide a general solution that covers all cases.

Usually, when configuring the HBase client, you'll just need to keep an hbase-site.xml on your classpath. Inside OSGi this is not always enough. Some versions of Hadoop will manage to pick up this file, others will not. In many cases HBase will complain that there is a version mismatch between the current version and the one found inside hbase-default.xml.

A workaround is to set hbase.defaults.for.version to match your HBase version:
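
For example, right after creating the configuration object (the version string is just an example, use the version of the HBase jar you deploy):

Configuration configuration = HBaseConfiguration.create();
// Tell HBase which version the bundled defaults belong to, so the version check passes.
configuration.set("hbase.defaults.for.version", "0.94.0");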

An approach that will save you in most cases is to set the HBase bundle class loader as the thread context class loader before creating the configuration object.

Thread.currentThread().setContextClassLoader(HBaseConfiguration.class.getClassLoader());

The reason I am proposing this is that HBase will make use of the thread context class loader in order to load resources (hbase-default.xml and hbase-site.xml). Setting the TCCL will allow you to load the defaults and override them later.

The snippet below shows how you can set the TCCL in order to load the defaults directly from the hbase bundle.
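
A minimal sketch of that idea: save the original TCCL, switch to the class loader of the HBase bundle (reached here through the HBaseConfiguration class) and restore it afterwards:

ClassLoader original = Thread.currentThread().getContextClassLoader();
try {
    // The class loader of the bundle that contains HBaseConfiguration can also load
    // hbase-default.xml, so the defaults are picked up correctly.
    Thread.currentThread().setContextClassLoader(HBaseConfiguration.class.getClassLoader());
    Configuration configuration = HBaseConfiguration.create();
    // Override whatever you need programmatically, e.g. the zookeeper quorum.
    configuration.set("hbase.zookeeper.quorum", "localhost");
} finally {
    Thread.currentThread().setContextClassLoader(original);
}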

Note that when following this approach you will not need to include the hbase-site.xml inside your bundle. You will need to set the configuration programmatically.

Also note that in some cases HBase internal classes will recreate the configuration, and this might cause you issues if HBase can't find the right class loader.

Thoughts

HBase is no different than almost any library that doesn't provide out of the box support for OSGi. If you understand the basics of class loading, you can get it to work. Of course, understanding class loaders is something that sooner or later will be of use, whether you are using OSGi or not.

Over the next couple of weeks I intend to take HBase for a ride on the back of the camel, using the brand new camel-hbase component inside OSGi, so stay tuned.


Edit: The original post has been edited, as it contained a snippet which I found out is best avoided (sharing the HBase configuration as an OSGi service).





6/28/12

Red Hat acquires FuseSource: Shaping the Future

On 27 June 2012, Red Hat announced the acquisition of FuseSource. This move will definitely shape the future of OSS integration.

Of course, this is not "yet another blog post" about spreading the news, it's more like a "personal thoughts about the acquisition".

The past

When I started studying computer science back in the 90s, I needed a brand new PC to start working on. Before I even got that new PC, I made sure I had my hands on the operating system of my choice:

My first Linux 

The last thing I could imagine back then was that the company with the cool looking hat would acquire the company I work for. And how could I? I would at least need a time machine in order to do so.

A time machine from Back to the Future

The present

Ironically enough, the time machine in the picture above (from the famous Back to the Future trilogy) was set to travel to 27 June 2012 (the day the acquisition was announced). Which is just awesome, because it allows me to use it in this post and add a funny tone.

"Back to the Future" control panel
So here we are today. The acquisition is announced, and FuseSource is joining Red Hat's middleware division.

The future

So what happens next? Well, when you bring together teams of indisputable talent you open up so many possibilities that the future becomes really hard to predict.

Yoda: "Impossible to see, the future is"

One thing is for sure. The future will definitely be really exciting and this move will definitely shape the future of OSS integration.

Thoughts

I am really happy that FuseSource has found a new home. I am even more excited that FuseSource's new home is an open source company. In fact, the FuseSource development and business model is the one introduced by Red Hat, so this is a sign that both companies have a common understanding of how to do open source.

FuseSource provided me with the best job I've ever had, mostly due to the fact that I had the luck to work with highly skilled people. The possibility of working in a wider environment built on top of the same principles (high skill & extra coolness) can only be thrilling.




6/13/12

OSGification: The good, the bad & the purist

Prologue
If the title doesn't make it obvious, this post is all about OSGification. It's an attempt to sort practices into the following categories:
  • Good practices
  • Bad practices
  • Pure practices
Some consider pure solutions to be the best. I fully agree that they are the best when building a project from scratch. When migrating existing projects to OSGi you sometimes need to make sacrifices and follow a not so pure path, especially when we are talking about libraries or frameworks that can also live outside OSGi.

Purpose
The main reason I am writing this is that I often encounter people that consider "pure practices" a panacea and will not consider adopting a less pure solution, even if this means not adopting a solution at all. So this is an attempt to present the pros and cons of each approach, so that you can draw your own conclusions.

This post assumes that you have a basic understanding of OSGi.

How do I make a project OSGi compliant?
In this section I will give a really brief overview of the OSGification process. Keep in mind that it's "really brief".

To make your project OSGi compliant you need to take the following steps:
  1. Provide a proper MANIFEST with proper package imports/exports etc.
  2. Resolve class loading issues.
  3. Make sure that all runtime dependencies are OSGi compliant.  
In order to provide a correct MANIFEST for your bundle, you need to identify the runtime requirements of your bundle. Gather all the packages that your bundle references at runtime and add them to the Import-Package section of your bundle. For each of those packages, one or more versions might satisfy the needs of your bundle, so you can use version ranges. For example:

Import-Package: org.slf4j;version="[1.4,2)"

Tools that can aid you in this process are bnd, the maven-bundle-plugin etc. Those tools will do a great job of identifying those packages for you. But is this enough? Well, not always. Sometimes a package can be used without being referenced directly in the source (by using class loaders etc). This is something that you have to deal with on your own (we will get to that later).

Once you are through creating the imports, you also need to specify the exports of your bundle. This is easy. All you need to do is specify all the packages of your bundle that you want to be accessed/imported by other bundles. Specifying a version for those packages is also important, because this will allow you to make use of the version oriented OSGi features (e.g. multiple versions, version ranges etc).

This will be enough for installing your bundle inside an OSGi container, but it will not always guarantee the runtime behavior of your bundle, as there might be class loading issues you need to resolve.

Common Class Loading issues

Class.forName() [1] This generally needs to be avoided inside OSGi. In most cases it will fail, resulting in a ClassNotFoundException. You can read more about it in Neil Bartlett's blog about OSGi readiness. You can work around this problem by specifying the class loader that can load the class (replace it with classLoader.loadClass()).

The problem is knowing which class loader to use. If your bundle somehow imports the package of the target class (there are a lot of ways to achieve this), then you can use the bundle class loader. If not, things get a bit more complicated.
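
A minimal sketch of the replacement, where MyComponent stands for any class that lives in your own bundle:

// Instead of: Class<?> type = Class.forName(className);
// use the class loader of a class from your own bundle, which sees the bundle's imports.
ClassLoader bundleClassLoader = MyComponent.class.getClassLoader();
Class<?> type = bundleClassLoader.loadClass(className);
Object instance = type.newInstance();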

Allowing your classes to be loaded from other bundles [2]
The same problem that your bundle has loading classes, other bundles will have loading your classes. Unfortunately there is no global solution for this, and each case can be treated differently. It depends on the bundle, the framework and the sacrifices you are willing to make.

Singletons & statics in OSGi [3] A lot of people do not actually realize that the singleton pattern guarantees a single instance per ClassLoader. In OSGi this means that each bundle may get its own instance of the singleton. There is nothing wrong with the pattern, it has to do with the way that Java handles static variables (single instance per ClassLoader). So when using statics, make sure they don't directly cross bundle boundaries.

Java Service Loader [4] Java allows you to load objects that are defined under META-INF/services/yet.another.interface. This works great when using flat class loaders but not so well when your class loaders are isolated.

Bad approaches
This section discusses approaches that are generally considered bad. I'd say don't be afraid to use one of them just because they are generally considered bad, as long as you know the side effects and you are ready to live with them. Maybe the term last resort would be more appropriate to describe them.


Creating Uber Jars


Sometimes, in order to avoid the overhead of bundling all their runtime dependencies and also to avoid class loading issues between their bundles and their dependencies, people bundle everything together in a single bundle. This results in having everything loaded from a single class loader, which eliminates challenges [1], [2] & [3] mentioned above.

This comes at the cost that you will not be able to add, remove or update a part of your application, since everything is part of the same artifact. I would avoid that approach, but it's still better than nothing.

Fragments & Dynamic Imports


A fragment can attach itself to an existing bundle. Once the fragment gets attached, both bundles can share classes and resources. This sounds cool, but there are two things that you might want to consider. The first one is that a fragment can be attached to one and only one bundle (it can have a single host) and that may be limiting for your needs. The second problem is that in order to attach a fragment to a "host" bundle, you will need to refresh the host bundle. That will cause a chain reaction, refreshing all bundles that depend on the host. The refresh will restart the activator of each bundle that is being refreshed, and that is not always nice.

Dynamic Imports is an approach that is used for dealing with class loading issues [1] and it actually allows you to specify imports with wildcards. This usually serves the need of loading classes from packages that are not known in advance. However, it can have a lot of side effects, such as unwanted wirings between bundles that can affect the process of adding, removing or updating bundles.

I would use fragments if I had no other means of solving my problem, but I would avoid dynamic imports altogether.

Good approaches
This section discusses approaches that are commonly applied, but are not really pure. This means that even though they do work without serious side effects, they are not the best thing to do. However, in many cases they are a realistic approach that will get you to the "OSGi ready" state.


Thread Context ClassLoader


I tried to explain above why Class.forName is something that will probably not work inside OSGi. But there are a lot of libraries out there that heavily rely on it. On top of that, there is a good chance that you are using it too inside your application. A potential solution is to "fall back" to the thread context class loader (a.k.a. TCCL). This approach is based on the fact that a library may not possibly know how to load a class using its name, but the caller of the library might. So the caller may set the thread context class loader and the library may use that to load classes.

Imagine a library that deserializes data into Java objects. That library will try to load the class, most probably using Class.forName, and will fail. If you modified the library to "fall back" to the TCCL when it fails to load the class, the code that uses the library could set the TCCL just before calling it and then restore the original value after the invocation. Of course, this assumes that the class loader used as TCCL is able to load the class. In many cases that is true for a bundle that uses a library, and this is why it usually works.
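
A rough sketch of both sides of that contract, using a hypothetical deserialization library:

// Library side: fall back to the TCCL when Class.forName fails.
Class<?> loadClass(String name) throws ClassNotFoundException {
    try {
        return Class.forName(name);
    } catch (ClassNotFoundException e) {
        return Thread.currentThread().getContextClassLoader().loadClass(name);
    }
}

// Caller side: expose the caller's bundle class loader while invoking the library.
ClassLoader original = Thread.currentThread().getContextClassLoader();
try {
    Thread.currentThread().setContextClassLoader(getClass().getClassLoader());
    Object result = deserializer.deserialize(data);
} finally {
    Thread.currentThread().setContextClassLoader(original);
}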

Some real world examples:


The fact that this approach assumes that you will be able to set the TCCL right before the invocation, and also that the caller bundle will be able to load the target class, means that it is not guaranteed to work in all cases. In most cases it does, yet there still might be cases where it will fail.

This is also the reason why many consider this approach "a bad practice". In practice I see more and more libraries and frameworks using it, and it seems to work with no side effects. I think that the key here is to know when to use it and when to go with a purer approach.


Pure approaches


Object Factories and Resource Loaders


With this approach you avoid direct loading of the class or the resource and instead delegate to a factory or loader. For an application that is intended to run both in and out of OSGi, you can have a default implementation that assumes flat class loaders, while inside OSGi you can have an implementation that makes use of OSGi services in order to load classes, create objects or load resources.

Passing the Class Loader


Structure your API in such a way that, whenever it comes to loading classes, the class loader can be passed in. Although this has the least possible side effects, it's not always feasible.

Use a BundleListener as a complement to the ServiceLoader

As I already mentioned above, the Java ServiceLoader will not work that well inside OSGi. An approach that you can follow to work around this problem is to use a bundle listener that listens for bundle events and, for each bundle that gets installed, looks for META-INF/services/yet.another.interface. It can then use that bundle's class loader to load and instantiate the implementation of the service. Finally, it can either register it in the OSGi service registry or in a local registry from which the bundle can look the service up.
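
A rough sketch of the idea, for a hypothetical yet.another.Interface service (error handling and descriptor files with multiple entries are left out):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleEvent;
import org.osgi.framework.BundleListener;

public class ServiceLoaderBundleListener implements BundleListener {

    public void bundleChanged(BundleEvent event) {
        if (event.getType() != BundleEvent.STARTED) {
            return;
        }
        Bundle bundle = event.getBundle();
        URL descriptor = bundle.getEntry("META-INF/services/yet.another.Interface");
        if (descriptor == null) {
            return;
        }
        try {
            BufferedReader reader = new BufferedReader(new InputStreamReader(descriptor.openStream()));
            String implementationName = reader.readLine().trim();
            reader.close();
            // Load the implementation with the class loader of the providing bundle.
            Class<?> implementationClass = bundle.loadClass(implementationName);
            Object service = implementationClass.newInstance();
            // Register it in the OSGi service registry (or keep it in a local registry).
            bundle.getBundleContext().registerService("yet.another.Interface", service, null);
        } catch (Exception e) {
            // Ignore bundles with unreadable or invalid descriptors.
        }
    }
}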

Please note that I am not sure if this is a commonly used practice (I haven't seen it documented), but it has worked quite well for me in the past, without side effects, so I decided to add it to this section. Feel free to drop a comment if you think the opposite.

Also note that there is Apache Aries - SPI Fly, which intends to provide a global solution to bridging the Java ServiceLoader with the OSGi service registry. Finally, I've read some things about OSGi R5 and some work related to the Java ServiceLoader, but I don't know more than that yet.

Final Thoughts


I'll repeat myself by saying that the initial motivation for this blog post was the fact that every now and then I encounter people that are "pure or nothing". I think that it's better to go with a not so pure approach than to do nothing. Especially for existing projects, the road to OSGification can be crossed in many steps and not a single one.

For the last year I've been spending time on jclouds, working on its OSGi support. The initial approach was to just get it running in OSGi. In every single release of jclouds improvements are applied and, with the help and feedback of the community, some questionable approaches have been replaced with more solid ones. I feel that this attitude should be an example for projects that consider providing OSGi support. "No need to go for all or nothing. Step by step is also an option!"













3/18/12

Fuse Fabric and Camel

Prologue


I have already blogged about CamelOne, which is going to take place on May 15 - 16 in Boston this year. I am really excited about it, as a lot of new things are going to be talked about there.

So I'd like to share my excitement with you and give you a small preview of "Using Fuse Fabric with Apache Camel".

Fuse Fabric

Fuse Fabric is a distributed configuration, management and provisioning system for Apache Karaf based containers such as Apache ServiceMix and Fuse ESB.


Fabric provides a distributed configuration registry and also tools for:

  • Installing and managing containers in your private or public cloud.
  • Deployment agent for configuring and provisioning distributed containers.
  • Discovery of Camel endpoints & message brokers.
  • A lot more ... 

Fuse Fabric and Camel preview

What I am going to show you is a video that demonstrates how Fabric makes it easy to:
  • Use a single host for installing containers to your local network.
  • Deploy and configure applications to distributed containers.
  • Discover & use message brokers in your camel routes.

I hope you enjoyed it! See you at CamelOne !

3/9/12

How to deal with common problems in pax-exam-karaf

Prologue
Back in January I made a post about advanced integration testing with pax-exam-karaf. That post was pretty successful, as I got tons of questions from people that started using it.

In this post I am going to write down these questions and provide some answers that I hope will help you have a smoother experience with integration testing in Karaf.

Where should I place my integration test module?
Quite often people want to test a single bundle. In this case having the integration tests hosted inside the bundle itself seems a reasonable choice.

Unfortunately it is not. Pax Exam in general will start a new container and will install your tests as a proper OSGi bundle. That means that it expects to find the bundle; however, the test phase runs before the install phase in Maven. That means that when you run the tests, the bundle that you are testing will not be installed yet. To avoid this issue, you had better host all your integration tests in a separate module.

The bundle context is not getting injected in my test?
For injecting the bundle context in the test, Pax Exam Karaf makes use of the javax.inject.Inject annotation. On your classpath there will also be an org.ops4j.pax.exam.Inject annotation. Make sure that you use javax.inject.Inject to inject the bundle context. It's very easy to get confused, so be careful.
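
In other words, the import and the injected field should look like this:

import javax.inject.Inject;
import org.osgi.framework.BundleContext;

@Inject
private BundleContext bundleContext;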

My system properties are not visible from within the test?
Quite often people customize the behavior of their tests using system properties. It's really common to configure the maven-surefire-plugin to expose Maven properties as system properties (e.g. passing credentials to a test, allocating a free port for a test & more).

That's something really useful, but you always have to remember a small detail: "The test is bootstrapped by one JVM, but runs in another". That means that specifying system properties in the surefire plugin configuration will not automagically set these properties in the Karaf container that will be used as the host of your tests.

I usually make sure to pass the desired system properties to the target Karaf container by adding them to the etc/system.properties file of the container:
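
Something along these lines inside the @Configuration method should do it (the property names and values here are just placeholders):

// inside the Option[] returned by the @Configuration method ...
// pass a system property from the bootstrapping JVM into the container
editConfigurationFilePut("etc/system.properties", "my.test.property", System.getProperty("my.test.property")),
// add an extra execution environment to the container's config.properties
editConfigurationFileExtend("etc/config.properties", "org.osgi.framework.executionenvironment", "JavaSE-1.7"),
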
The above snippet also shows how you can configure Karaf's config.properties. The example shows how you can add additional execution environments in your test configuration.

How can I have my test extend a class from another module?
You should never forget that, under the hood, Pax Exam will create a bundle called the probe, which contains all your tests. That means that if your tests extend a class that is not present in the current module, it won't be found by your probe bundle. In older versions of Pax Exam you could pretty easily modify the contents of the probe and include additional classes from any module visible to your project. In the 2.x series it's more complicated.

The easiest way to go is to make sure you install the bundle that contains the superclass in the target container. Here is an example:
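
For instance, by adding something like the following to the configuration options (the coordinates are placeholders for the module that contains the superclass):

// inside the Option[] returned by the @Configuration method ...
mavenBundle().groupId("com.example").artifactId("my-test-support").versionAsInProject(),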

How can I configure the jre.properties of my test?
That's something really common when you want, for example, to test CXF in plain Karaf instead of ServiceMix/FuseESB etc. Pax Exam Karaf allows you to replace any configuration file with one of your choice.
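
For example, to ship your own jre.properties with the test (the path on the test side is a placeholder):

// inside the Option[] returned by the @Configuration method ...
replaceConfigurationFile("etc/jre.properties", new File("src/test/resources/jre.properties")),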


That's it! I hope you found this useful! Happy testing!

2/28/12

CamelOne 2012

Now that the Oscar nomination ceremony is over, the next big event is CamelOne. So save the date, 15-18 May in Boston.


Learn about Camel
Professionals using open source integration and messaging have a great opportunity to learn more about Camel. Besides the presentations and the opportunity to meet & talk with fellow Camel riders, there are also going to be training sessions for ServiceMix & ActiveMQ with Camel.

Speak about Camel
If you are already using it and are interested in speaking about your journeys on the back of the camel, you can send your speaking proposals to camelone@fusesource.com.


The camel is waiting for you ...




Save the date: 15-18 May at Boston. See you there !!!

1/24/12

Advanced integration testing with Pax Exam Karaf

Prologue
From my first steps in programming, I realized the need for integration testing.

I remember setting up a macro recorder in order to automate the tests of my first Swing applications. The result was a disaster, but I think the idea was in the right direction even though the means were wrong.

Since then I have experimented with a lot of tools for unit testing, which I think are awesome, but I always felt that they were not enough for testing my "actual product".

The last couple of years I have been working with integration projects and also with OSGi, and there the challenge of testing the "actual product" is bigger. The scenarios that need to be tested sometimes involve more than one VM, and of course testing OSGi is always a challenge.

In this post I am going to blog about Pax Exam Karaf, which is an integration testing framework for Apache Karaf and also the answer to all my prayers. Kudos to Andreas Pieber for creating it.

Tools for integration tests in OSGi
I have been using Pax Exam for a while now for integration tests in a lot of projects I have been working on (e.g. Apache Camel). Even though it's a great tool, it does not provide the "feel" that the tests run inside Apache Karaf, but rather inside the underlying OSGi container.

A tool which is not a testing framework itself, but is frequently used for OSGi testing, is PojoSR. It actually allows you to use the OSGi service layer without using the module layer (if you prefer, it's an OSGi service registry that runs without an OSGi container). So it's sometimes used for testing OSGi services etc. A great example is the work of Guillaume Nodet (a colleague of mine at FuseSource and, yes, I know he needs no introduction) for testing Apache Camel routes that use OSGi blueprint, based on PojoSR. You can read more about it in Guillaume's post. Very powerful, yet it does not run inside Karaf (it tests the routes in blueprint, but it doesn't test OSGi stuff).

Arquillian is another effort that, among other things, allows you to run OSGi integration tests, but I haven't used it myself.


Pax Exam Karaf
Each of the tools mentioned in the previous section is great and they all have their uses. However, none of the above gives me the "feel" that the tests really run inside Apache Karaf.

Pax Exam Karaf is a Pax Exam based framework which allows you to run your tests inside Apache Karaf. To be more accurate, it allows you to run your tests inside any Karaf based container. So it can also be Apache ServiceMix, FuseESB or even a custom runtime that you have created yourself on top of Apache Karaf.

As of Karaf 3.0.0 (not yet released) it will be shipped as part of the Karaf tooling. Till then you can enjoy it at the paxexam-karaf GitHub repository.


The benefit it offers over traditional approaches is that it allows you to use all the Karaf goodness inside your tests:
  • Features Concept
  • Deployers
  • Admin Service
  • Dynamic configuration
  • more
Setting it up
Detailed installation instructions can be found at the paxexam-karaf wiki.

The basic idea is that your unit test is added to a bundle called the probe and this bundle is deployed inside the container. The container can be customized with custom configuration, custom feature installations etc so that it fits the "actual" environment.

Starting a custom container
To set up the container of your choice you can just modify the configuration method. Here is an example that uses Apache ServiceMix as a target container:
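
A sketch of such a configuration method, based on the paxexam-karaf options (the ServiceMix and Karaf versions are just examples, use the ones you actually target):

// karafDistributionConfiguration() comes from the paxexam-karaf options, maven() from pax-exam CoreOptions
@Configuration
public Option[] config() {
    return new Option[]{
        karafDistributionConfiguration()
            .frameworkUrl(maven().groupId("org.apache.servicemix").artifactId("apache-servicemix").type("zip").version("4.4.2"))
            .karafVersion("2.2.4")
            .name("Apache ServiceMix")
    };
}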


Using the Karaf shell
Since one of Karaf's awesome features is its shell, being able to use shell commands inside an integration test is really important.


The first thing that needs to be addressed in order to use the shell inside the probe is to make sure that the probe imports all the required packages. The probe by default uses a dynamic import on all packages; however, this will exclude all packages exported with provisional status. To understand what this means you can read more about the provisional OSGi API policy. In our case we need to customize our probe and allow it to import such packages. This can be done by adding the following method to our test:
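
The customization method looks roughly like this:

@ProbeBuilder
public TestProbeBuilder probeConfiguration(TestProbeBuilder probe) {
    // Allow the probe to import packages that are exported with provisional status.
    probe.setHeader(Constants.DYNAMICIMPORT_PACKAGE, "*,org.apache.felix.service.*;status=provisional");
    return probe;
}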


Now, in order to execute commands, you need to get the CommandProcessor from the OSGi service registry and use it to create a session. Below is a method that allows you to execute multiple commands under the same session (this is useful when using stateful commands like config). Moreover, this method allows you to set a timeout on the command execution.
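
A simplified sketch of such a helper; getOsgiService is assumed to be a helper that looks the service up using the injected bundle context:

// CommandProcessor / CommandSession come from org.apache.felix.service.command,
// the rest from java.io and java.util.concurrent.
protected String executeCommands(long timeoutInMillis, final String... commands) throws Exception {
    final ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    final PrintStream printStream = new PrintStream(byteArrayOutputStream);
    final CommandProcessor commandProcessor = getOsgiService(CommandProcessor.class);
    final CommandSession commandSession = commandProcessor.createSession(System.in, printStream, System.err);

    // Run all commands in the same session, so that stateful commands (e.g. config) work.
    FutureTask<String> commandFuture = new FutureTask<String>(new Callable<String>() {
        public String call() throws Exception {
            for (String command : commands) {
                commandSession.execute(command);
            }
            printStream.flush();
            return byteArrayOutputStream.toString();
        }
    });

    new Thread(commandFuture).start();
    // Fail the test if the commands do not complete within the timeout.
    return commandFuture.get(timeoutInMillis, TimeUnit.MILLISECONDS);
}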



Distributed integration tests in Karaf
Working full time on open source integration, my needs often span across the boundaries of a single JVM. A simplistic example is having one machine sending http requests to another, having multiple machines exchanging messages through a message broker, or even having a grid. The question is how you test these cases automatically.

Karaf provides the admin service, which allows you to create and start instances of Karaf that run inside a separate JVM. Using it from inside the integration test, you are able to start multiple instances of Karaf and deploy to each instance the features/bundles that are required for your testing scenario.

Test Scenario:
Let's assume that you want to test a scenario where you have one JVM that acts as a message broker, one JVM that acts as a Camel message producer and one JVM that acts as a Camel message consumer. Also let's assume that you don't run vanilla Karaf, but FuseESB (which is an enterprise version of ServiceMix, powered by Karaf). You can use the admin service from inside your integration test just like this:
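
One way to drive the admin service is simply through its shell commands, reusing the executeCommands helper from above (the instance names are placeholders, COMMAND_TIMEOUT is a placeholder constant and the exact admin command options may differ, so check admin:create --help on your target container):

executeCommands(COMMAND_TIMEOUT,
    "admin:create broker",
    "admin:create producer",
    "admin:create consumer",
    "admin:start broker",
    "admin:start producer",
    "admin:start consumer");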


You might wonder how you are supposed to assert that the consumer actually consumed the messages that it was supposed to. There are a lot of things that could be done here:

  • Connect to the consumer node and use camel:route-info
  • Connect to the consumer and check the logs (if it's supposed to log something)
  • Get the jmx stats of the broker

You can literally do whatever you like.

So here's how it would look if we go with using the Camel commands:
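
Roughly like this, again via the executeCommands helper (admin:connect runs a command on another instance; the route id is a placeholder):

String routeInfo = executeCommands(COMMAND_TIMEOUT,
    "admin:connect consumer camel:route-info my-route");
// assert on the output, e.g. on the number of completed exchanges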


Pointers to working examples
I usually provide sources for the things I blog about. In this case I'd prefer to point you to existing OSS projects that I have been working on that are using paxexam-karaf for integration testing.

Fuse Fabric integration tests: Contains tests that check the installation of features, availability of services, distributed OSGi services, and even the creation and use of agents in the cloud.

Jclouds-Karaf integration tests: Contains simple tests that verify that features are installed correctly.

Apache Karaf Cellar integration tests: Contains distributed tests, with quite a few Hazelcast examples.

1/23/12

Installing services in the cloud with jclouds

Prologue
I spent the whole day today stuck on an issue with jclouds and I thought that it would be a good idea to blog about it, so that others don't have to spend so many hours on it.

The issue
My target was to write an integration test that would start a node on Amazon EC2 and install a service to be used by the integration test. So I created a script that performed a curl to download the tarball of the service, unpacked it and ran it. So far so good. The problem I encountered was that my invocations of the runScriptOnNode method (a jclouds method for invoking scripts on remote nodes) timed out after waiting for 10 minutes. However, the script only needed 1 minute and was executed successfully.

Diving into jclouds run script methods
After spending some time making sure that no network issues, like firewalls and such, were involved, I decided to examine in depth how the runScriptOnNode method works.

Jclouds uses an initialization script, which installs the target script on the node and invokes it. The initialization script keeps track of the target script's PID and is able to tell if the target script has completed its execution. So runScriptOnNode will block for as long as the initialization script replies that the target script is running.

Where's the catch?
The initialization script keeps track of the target script's PID by executing findPid, which is a ps and grep using a pattern that matches the execution path. That's not a problem by itself, but if you install your service and run it inside the same folder, then the initialization script will get confused and won't be able to tell if the target script finished its execution. As a result, the runScriptOnNode method will block till it times out.

The figure above displays a setup that can have problems. In this setup the init script will query the status of the target script by performing a ps and using jc1234 to filter out processes. However, if a new process is started under that folder by the target script, say the service folder, then the init script will not be able to properly detect when the target script finished. That's because findPid will now return the PID of the service.


Lessons learned
Never start a service inside the same folder where the target script is executed; make sure you unpack and run your service from inside another folder. Even better, use a framework (e.g. Apache Whirr) for installing mainstream services, and only put your fingers on it if you really have to.