11/26/13

Thoughts on Blueprint and Declarative Services: Dependency injection or dependency management?

I've been using OSGi Blueprint for a couple of years now and I have been happy with it. Blueprint is the obvious choice inside Apache Karaf, and since it generally worked well I never needed to look for alternatives.

Earlier this year I had the chance to watch a presentation by Scott England-Sullivan which included a demo of a small project using OSGi Declarative Services, and it made me think that I should start looking more closely at Declarative Services.

So here are some thoughts on the two different approaches to dependency injection/management in OSGi.

Blueprint
Blueprint is a dependency injection solution for OSGi. Its configuration is almost identical to the Spring Framework's, with extra support for OSGi services (in fact it was inspired by the Spring Framework). Its resemblance to Spring makes it dead simple to use, especially if you are already familiar with Spring.

Blueprint handles the injection of OSGi services for you, taking service availability into consideration. When using a service that is considered optional, a proxy for that service will be created and injected. Calls to that proxy will block until the service becomes available or a timeout is reached.
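
To make this a bit more concrete, here is a minimal sketch of a Blueprint-managed bean. The GreetingService interface and all the names are made up for illustration; the actual wiring lives in a Blueprint XML descriptor, shown here only as a comment.

    // GreetingService.java -- a made-up service interface, used only for illustration.
    public interface GreetingService {
        String greet(String name);
    }

    // Greeter.java -- a plain bean managed by the Blueprint container.
    // The wiring would live in OSGI-INF/blueprint/blueprint.xml, roughly:
    //   <reference id="greetingService" interface="com.example.GreetingService"/>
    //   <bean id="greeter" class="com.example.Greeter">
    //     <property name="greetingService" ref="greetingService"/>
    //   </bean>
    public class Greeter {

        private GreetingService greetingService;

        // Called by the Blueprint container with a proxy for the referenced OSGi service.
        public void setGreetingService(GreetingService greetingService) {
            this.greetingService = greetingService;
        }

        public String greet(String name) {
            // If the backing service is gone, this call blocks on the proxy until
            // the service comes back or the reference timeout is reached.
            return greetingService.greet(name);
        }
    }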

Declarative Services
Declarative Services is a component model that simplifies the creation of components that publish or consume OSGi services. I don't consider Declarative Services a dependency injection solution; it is more of a component model with dependency management capabilities.

In Declarative Services you define a component and its dependencies in a declarative way, and the framework manages the lifecycle of the component based on whether its dependencies are satisfied or not. This means that a component will only get activated when all of its dependencies are satisfied and will be deactivated whenever a dependency goes away. So it is 100% free of proxies, but it still guarantees that as long as a component is active, its dependencies will be reachable.
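
As a sketch of how this looks in practice (reusing the made-up GreetingService interface from the Blueprint sketch above, and assuming the standard DS annotations rather than the older XML descriptors):

    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Deactivate;
    import org.osgi.service.component.annotations.Reference;

    // Only activated once the mandatory GreetingService reference is satisfied,
    // and deactivated again as soon as that service goes away.
    @Component
    public class GreetingEndpoint {

        private volatile GreetingService greetingService;

        // Bind method for a mandatory reference: SCR hands us the real service
        // object, not a proxy.
        @Reference(unbind = "unbindGreetingService")
        void bindGreetingService(GreetingService service) {
            this.greetingService = service;
        }

        void unbindGreetingService(GreetingService service) {
            this.greetingService = null;
        }

        @Activate
        void activate() {
            // All mandatory references are guaranteed to be bound at this point.
        }

        @Deactivate
        void deactivate() {
            // Called when the component stops or a mandatory reference is lost.
        }
    }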

Proxying vs Cascading
One of the main differences between the two approaches in question is that Blueprint uses proxies while Declarative Services uses a cascading approach (components are activated/deactivated based on dependency availability). I tend to prefer cascading over proxying (not only because proxies are out of fashion but...), mostly because when using proxies you have no clue about the state or availability of the underlying object. Cascading, on the other hand, seems to be a better fit inside OSGi, as it handles the dynamic nature of the framework better.

A typical case where proxies cause headaches is when you need to use a reference listener for an optional service. If you need to react to the service object when it goes away (unbinds), it will just fail.

Another issue with proxies is that they allow you to publish services whose dependencies are not yet satisfied. Calls to the service may end up hanging, because the proxy is waiting for the unsatisfied dependency to become available. This prevents you from failing fast, and not being able to fail fast can even compromise your entire system (if it's not obvious why, you may want to read the excellent book Release It!).

An example
Let's examine the approaches above more closely using a close-to-real-world example. Let's assume that we are building a simple CRUD web application for Items.


A simple CRUD application for "Items"

The application can be composed of the following parts:

  • The presentation layer
  • The item service
  • The data store
  • The database
In OSGi, the presentation layer could be a servlet registered with the HTTP Service, the item service could be an OSGi service that encapsulates the logic, and the data store could be another OSGi service used for interacting with the database.

As shown in the diagram above, the web application depends on the item service, and the item service depends on the data store.

What happens if the data store is not configured or not available?

Proxying:

With the proxying approach, the Item Service will be injected with a proxy to the Data Store, which will block if the Data Store is not available. The Item Service will be registered in the service registry and the Web Application will try to use it, even if there is no Data Store. Requests to the Web Application may end up blocked waiting for the Data Store (which is not ideal).

Cascading:

With the cascading approach, the Item Service will only be registered when a Data Store is present. In the same manner, the Web Application will only be available if the Item Service is available. So we have a guarantee that the Web Application will only be available if all dependencies in the hierarchy are satisfied. Requests made while there are unsatisfied dependencies will result in an HTTP 404 error, which gives a "fail fast" behaviour. Note that whenever the Data Store becomes available, both the Item Service and the Web Application will automatically detect the change and become available themselves.
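
In Declarative Services terms the cascade falls out naturally from mandatory references. Here is a rough sketch, where ItemService and DataStore are just placeholder interfaces standing in for the boxes in the diagram:

    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    // Placeholder interfaces standing in for the boxes in the diagram above.
    interface DataStore { /* CRUD operations against the database */ }
    interface ItemService { /* business operations on Items */ }

    // Published in the service registry as an ItemService only while a DataStore
    // is available. If the DataStore goes away, this component is deactivated and
    // its service is unregistered, which in turn deactivates the web layer that
    // declares a mandatory reference to ItemService.
    @Component
    public class ItemServiceImpl implements ItemService {

        private volatile DataStore dataStore;

        @Reference
        void bindDataStore(DataStore dataStore) {
            this.dataStore = dataStore;
        }
    }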


Final Thoughts

Declarative Services is a great tool for managing the dependencies of your components. The cascading approach can be really valuable for building robust, dynamic, modular applications.

Blueprint is really easy to use and will feel natural to users familiar with Spring. The proxying behaviour can also be useful in some cases.

So when to use one or the other?

I tend to think that Blueprint shines in cases where you are constructing components that either have no service dependencies or are not themselves exposed as services. Another case is when "waiting for the service" is more natural. Examples include:

  1. shell commands (usually they are not used as services by other components)
  2. Camel routes (waiting for dependencies is desired)
In cases where there are long dependency chains, components are highly dynamic, etc., I think that Declarative Services is a better choice.

Regardless of your choice, you need to know the strengths and weaknesses of your tools!

I hope that this was helpful!



7/5/13

Hawtio & Apache Jclouds

Introduction

I've spent some time lately working on an Apache Jclouds plugin for Hawtio. While there is still a lot of pending work, I couldn't hold back my excitement and wanted to share...

What is this Hawtio anyway?

Whenever a cool open source project is brought to my attention, I usually subscribe to the mailing lists so that I can get a better feel for the project's progress, direction, etc. Sooner or later there is always an email with the subject "[Discuss] - Webconsole for our cool project".

Such emails quite often end up in a long discussion about which web framework is best to use, what the target platform should be, and how the console could be integrated with upstream/downstream projects.

A very good example is Apache ServiceMix. ServiceMix runs on Apache Karaf, which runs on Apache Felix, and it also embeds Apache ActiveMQ; every single one of those projects has its own web console.

The number of consoles grows so big that users have to hire a personal assistant just to keep track of the URLs of each web console. Well, maybe that's an overstatement, but you get the idea. And if we also take into consideration that some projects are bound to a specific runtime while others are not, then we have a perfect web console storm.

Hawtio solves this problem by providing a lightweight, modular HTML5 web console with tons of plugins. Hawtio can run anywhere as it's not bound to a specific runtime, and it's modular, which means that it's pretty easy to write and hook in your own plugins.

Writing plugins for Hawtio

Hawtio is a fully client-side framework. Whenever it requires communication with the backend it can use REST. To make things easier it also uses Jolokia, which exposes JMX as JSON over HTTP. This makes it pretty easy to hook in frameworks even if they don't already provide a REST interface, as long as they expose things over JMX.
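
As a reminder of what "exposing things over JMX" looks like on the backend side, here is a tiny, made-up standard MBean (the CacheStats name and the object name are just placeholders). Once it is registered, Jolokia can serve its attributes as JSON over HTTP and a Hawtio plugin can consume them:

    // Two small files shown together: CacheStats.java and CacheStatsMBean.java.
    import java.lang.management.ManagementFactory;
    import javax.management.ObjectName;

    // CacheStats.java -- the MBean implementation, registered with the platform MBeanServer.
    public class CacheStats implements CacheStatsMBean {

        @Override
        public long getHitCount() {
            return 42; // a real framework would report live statistics here
        }

        public static void main(String[] args) throws Exception {
            ManagementFactory.getPlatformMBeanServer()
                    .registerMBean(new CacheStats(), new ObjectName("com.example:type=CacheStats"));
            // No REST code needed: Jolokia can now read the HitCount attribute as
            // JSON over HTTP, and a Hawtio plugin can render it.
        }
    }

    // CacheStatsMBean.java -- the management interface; the standard MBean
    // convention is the implementation class name plus the "MBean" suffix.
    public interface CacheStatsMBean {
        long getHitCount();
    }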

Once the communication with the backend is sorted, it's pretty easy to create a plugin. Hawtio uses AngularJS, which makes development of web apps a real pleasure.

The Jclouds plugin

Apache Jclouds doesn't yet have a REST interface, nor does it have JMX support. Well, actually, it does have pluggable JMX support as of the 1.6.1-incubating release. All you need to do is create an Apache Jclouds context using the ManagementLifecycle module:

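Something along these lines (a rough sketch: the credentials are placeholders, and the exact package and constructor of the ManagementLifecycle module may differ in your jclouds version):

    import com.google.common.collect.ImmutableSet;
    import org.jclouds.ContextBuilder;
    import org.jclouds.compute.ComputeServiceContext;
    // Assumption: the module class and its package; check the jclouds management
    // support shipped with your version for the exact import and constructor.
    import org.jclouds.management.config.ManagementLifecycle;

    public class ManagedContextExample {

        public static void main(String[] args) {
            ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
                    .credentials("myAccessKey", "mySecretKey")            // placeholders
                    .modules(ImmutableSet.of(new ManagementLifecycle()))  // assumed constructor
                    .buildView(ComputeServiceContext.class);

            // Use context.getComputeService() as usual; the Jclouds MBeans are now
            // registered and Hawtio can discover them.
        }
    }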


Note: Users of the jclouds-karaf project will get that for free (no need to do anything at all).

When the ManagementLifecycle module is used, it will create and register Apache Jclouds MBeans with JMX. If those MBeans are discovered by Hawtio, a new tab will be added to the Hawtio user interface:

The main Jclouds plugin Page


The EC2 API details page

From there the user is able to browse all installed Apache Jclouds providers, APIs and services. If, for example, you have created a compute service context with the ManagementLifecycle module, you'll be able to see it under the "Compute Service" tab:

List of Compute Services - Amazon AWS & A Stub service.

When you select one of the available services, a details bar appears which helps you navigate to all the service-specific tabs. For a compute service these are:

Nodes

A detailed list of all running nodes, with the ability to reboot, destroy, suspend & resume a node.

Nodes
Images
A list of images, with an operating system filter.

Images

Locations
A list of all assignable locations



The plugin is not compute-service specific. It also supports blobstores. For example, here is a view of one of my S3 buckets:

A blobstore browser

Mix & match

What I really love about Hawtio is that it has a wide range of out-of-the-box plugins that you can mix & match. Here's an example:

"A couple of years ago I created an example of using Jclouds with Apache Camel to automatically send email notifications about running instances in the cloud."

Hawtio also provides an Apache Camel plugin, so we can visually see and edit the example that sends the notifications. The great thing is that in this example we are using a Hawtio-managed compute service:

The original example can be found at Cloud Notification with Apache Camel.
Visual representation of a route that polls EC2 for running instances and sends email notifications

Another cool plugin that can be used along with the jclouds plugin is the "Logs" plugin. The Logs plugin lets you search, browse and filter your logs, and even see the source code associated with the log entries:

Searching the logs for jclouds related errors

The code that generated the log entry

Epilogue

This is only a first draft of the jclouds plugin and there are more cool things to be added, like executing scripts, downloading blobs, and also a better way of creating new services (the last one is already supported but could really be improved).

If you want to see more of Hawtio in action you can have a look at James Strachan's demonstration of a Camel-based iPaaS, which is basically Hawtio + Fuse Fabric + Apache Camel.





5/11/13

Advanced Integration Testing using Fuse Fabric at CamelOne 2013

I've already posted twice on this blog about integration testing using Pax-Exam and Karaf. Surprisingly, these two posts are among the most popular on this blog, and I was considering writing a third part that would focus on Fuse Fabric.

The idea was to write about using Fuse Fabric to create and manage distributed containers for your integration tests.  

Finally, I decided to give a presentation about this instead.

So if you are joining CamelOne 2013, you'll have the chance to attend my presentation and learn more about:
  • Writing integration tests for OSGi applications.
  • Using Fuse Fabric to manage & coordinate distributed tests.
  • Scaling your tests to a large number of remote containers.