11/5/11

Cloud Notifications with Apache Camel

Prologue
Yesterday I was having a talk with Adrian Cole, and during our talk he had an unpleasant surprise: he found out that he had forgotten a node running on Amazon EC2 for a couple of days and that it would cost him several bucks.

This morning I was thinking about his problem and about ways that would help you avoid situations like this.

My idea was to build a simple project that would notify you of your running nodes in the cloud via email at a given interval.

This post is about building such a solution with Apache Camel, which helps you integrate very easily with both your cloud provider and, of course, your email. The full story and the sources of this project can be found below.

Working with recurring tasks
Apache Camel provides a quartz component, which will allow you to schedule a task at a given interval.
It is really simple to use. In our case a one hour interval sounds great. We also want an unlimited number of executions (repeatCount=-1), so it could be something like this.
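In the Java DSL the scheduling endpoint could look roughly like this (the group/timer names are made up; 3600000 ms is the one hour interval):

    // Fire every hour, forever (repeatCount=-1 means unlimited executions).
    from("quartz://cloudNotifier/nodePoller?trigger.repeatInterval=3600000&trigger.repeatCount=-1")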

Using Camel to integrate to your Cloud provider
Camel 2.9.0 will provide a jclouds component, which will allow you to use jclouds to integrate with most cloud key/value engines & compute services. I am going to use this component to connect to my cloud provider (I will use my EC2 account, but it would work with most cloud providers).

My first task is to create a jclouds compute service and pass it to the Camel jclouds component. This will allow me to use jclouds inside my Camel routes.
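A rough Java sketch of that wiring, based on the jclouds 1.x API (the component's setComputeServices setter is an assumption, so double-check it against the camel-jclouds docs):

    import java.util.Collections;
    import org.apache.camel.CamelContext;
    import org.apache.camel.component.jclouds.JcloudsComponent;
    import org.jclouds.compute.ComputeService;
    import org.jclouds.compute.ComputeServiceContext;
    import org.jclouds.compute.ComputeServiceContextFactory;

    public void setupJclouds(CamelContext camelContext, String accessKey, String secretKey) {
        // Build a jclouds ComputeService for EC2 (the credentials come from
        // the properties file described below).
        ComputeServiceContext context = new ComputeServiceContextFactory()
                .createContext("aws-ec2", accessKey, secretKey);
        ComputeService computeService = context.getComputeService();

        // Hand the compute service to the camel-jclouds component so that
        // routes can refer to it by provider id.
        JcloudsComponent jclouds = new JcloudsComponent();
        jclouds.setComputeServices(Collections.singletonList(computeService));
        camelContext.addComponent("jclouds", jclouds);
    }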


To avoid exposing my real credentials, I've used property placeholders and kept the real credentials in a properties file.

Now that the component is configured, I am ready to define my route. The route will use the Camel jclouds compute producer to send a request to my cloud provider and ask how many nodes are currently running. This query can be further enhanced with other parameters, such as group (get me all the running nodes of group X) or even image (get me all the running nodes of group X that use image Y).

All I have to do is add the following endpoint to my route.
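In the Java DSL this is a single line (the operation value follows the component's CamelJclouds* naming convention, so verify it against the docs):

    // Ask the "aws-ec2" compute service for the currently running nodes.
    .to("jclouds:compute:aws-ec2?operation=CamelJcloudsListNodes")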



The out message body will contain a set with all the metadata of the running nodes.

Filtering the results
I don't want to fire an email every time I ask my cloud provider about the running nodes, but only when there actually is a running node. The best way to do so is to use the Message Filter EIP. I am going to use it to filter out all messages that have a body containing an empty set.
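Assuming the simple language can call size() on the returned set through its OGNL support, the filter could be:

    // Only continue when the provider returned at least one node.
    .filter(simple("${body.size} > 0"))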


Sending the email
This is the easiest part, since the only things I need to specify are the sender, the recipient & the subject of the email. I can do it simply by adding headers to the message. Finally, I need to specify the SMTP server and the credentials required for using it.
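A sketch of that part of the route (addresses, host and property names are placeholders):

    // Headers recognized by the camel-mail producer.
    .setHeader("from", constant("notifier@example.com"))
    .setHeader("to", constant("me@example.com"))
    .setHeader("subject", constant("You have nodes running in the cloud"))
    // The SMTP endpoint, with credentials again kept in the properties file.
    .to("smtps://smtp.example.com?username={{mail.username}}&password={{mail.password}}")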


Now all we need to do is set the destination endpoint inside the message filter.
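Putting the pieces together, the whole route could look roughly like this (still a sketch, with the same assumptions as above):

    import org.apache.camel.builder.RouteBuilder;

    public class CloudNotifierRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Check the cloud provider once an hour, forever.
            from("quartz://cloudNotifier/nodePoller?trigger.repeatInterval=3600000&trigger.repeatCount=-1")
                .to("jclouds:compute:aws-ec2?operation=CamelJcloudsListNodes")
                // Send the email only when at least one node is running.
                .filter(simple("${body.size} > 0"))
                    .setHeader("from", constant("notifier@example.com"))
                    .setHeader("to", constant("me@example.com"))
                    .setHeader("subject", constant("You have nodes running in the cloud"))
                    .to("smtps://smtp.example.com?username={{mail.username}}&password={{mail.password}}");
        }
    }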


Running the example
The full source of this example can be found on GitHub. The project is called cloud notifier.
You will have to edit the properties file camel.properties in order to add the credentials for your cloud provider and email account.
In order to run it, all you need to do is type mvn camel:run.

If you have a couple of nodes running, the result will look like this.





Enjoy!

Conclusions
The camel-jclouds component is really new (it will be part of the 2.9.0 release), but it already provides some really cool features. It also provides the ability to create/destroy nodes or run scripts on them from Camel routes. Moreover, it leverages the jclouds blobstore API in order to integrate with cloud provider key/value engines (e.g. Amazon S3).
Can you imagine executing commands in the cloud using your mobile phone and an SMS message? (Camel also supports protocols for exchanging SMS messages.)

I hope you find all of this really useful.

Edit: While I was writing this simple app, to my surprise I found a forgotten instance myself!

10/10/11

My JavaOne talk about Cellar

Prologue
I am currently returning home from JavaOne 2011. I am at the Munich airport waiting for my connecting flight to Athens. Once again my flight is delayed, and it's a great chance to blog a bit about JavaOne.

Apache Karaf Cellar at JavaOne 2011
I had the chance to give a BOF about Karaf Cellar last Tuesday night. Even though the presentation was really late (20:30) and there were a lot of parties going on at that time (actually I was at the JBoss party right before my presentation), quite a few people attended. The best part was that most of the people who attended were really eager to hear about Karaf & Cellar, and I received a lot of great "straight to the point" questions. So I really enjoyed the talk and had a lot of fun.


I was worried that I would be really nervous, since I am not that used to public speaking, but I think the drinks I had at the JBoss party did the trick.



After the talk
Right after the talk I had the chance to have a few more drinks with Marios Trivizas, Chris Soulios, Adrian Cole, Chas Emerick & Toni Batchelli.



The FuseSource Booth
Apart from talking and attending other sessions at JavaOne, I also had the chance to spend a lot of time at the FuseSource booth. It was a great chance to meet people enjoying our services and also to talk with people interested in learning more about FuseSource products & success stories.

"There is no place like home"
Well, actually there is, and it's called San Francisco, but now I am back home & ready to dive into open source. I hope I'll have the chance to be there next year.

6/5/11

JClouds & OSGi

OSGi in the clouds
For the last couple of years, OSGi and cloud computing have been two buzzwords that you don't see go hand in hand that often. JClouds is going to change that, since the 1.0.0 release is OSGi ready and also provides direct integration with Apache Karaf.

JClouds in the Karaf
For the last couple of weeks I have been working with the jclouds team in order to improve the OSGification of jclouds and also to provide integration with Apache Karaf.

I will not go into much detail in this post, since there is a wiki. I will, however, add a small demo that shows how easy it is.



A Cloud, a Karaf and a Camel
The fact that JClouds is now OSGi ready opens up new horizons. Apache Camel is one of them. I have been working on a Camel component that leverages the JClouds blobstore abstraction in order to provide blobstore consumers and producers via Apache Camel.
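As a taste, a route could consume blobs from a container with an endpoint roughly like the one below (the URI layout reflects the prototype, and the provider/container values are placeholders):

    // Poll blobs from an S3 container and save them to disk.
    from("jclouds:blobstore:aws-s3?container=photos")
        .to("file:backup");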

Hopefully, abstractions for Queues and Tables will follow...

You can find it and give it a try at my GitHub repository.




5/7/11

Apache Karaf Cellar

Prologue
In a previous blog post, I designed and implemented Cellar (a small clustering engine for Apache Karaf powered by Hazelcast). Since then, Cellar has grown in features and was eventually accepted into Karaf as a subproject.

This post will provide a brief description of Cellar as it is today.

Cellar Overview
Cellar is designed to provide Karaf with the following high-level features:

  • Discovery
      • Multicast 
      • Unicast
  • Cluster Group Management
      • Node Grouping
  • Distributed Configuration Admin
      • per Group distributed configuration data
      • event driven distributed / local bridge
  • Distributed Features Service
      • per Group distributed features/repos info
      • event driven distributed / local bridge
  • Provisioning Tools
      • Shell commands for cluster provisioning
The core concept behind Cellar is that each node can be a part of one or more groups. Each group provides the node with distributed memory for keeping data (e.g. configuration, features information, etc.) and a topic which is used to exchange events with the rest of the group members.


Each group comes with a configuration which defines which events are to be broadcast and which are not. Whenever a local change occurs on a node, the node will read the setup information of all the groups that it belongs to and broadcast the event to the groups that whitelist the specific event.

The broadcast operation happens via the distributed topic provided by the group. For the groups where broadcast is supported, the distributed configuration data will also be updated, so that nodes that join in the future can pick up the change.

Supported Events
There are 3 types of events:
  • Configuration change event
  • Features repository added/removed event.
  • Features installed/uninstalled event.
For each of the event types above, a group may be configured to enable synchronization and to provide a whitelist / blacklist of specific event IDs.

Example
The default group is configured to allow synchronization of configuration. This means that whenever a change occurs via the config admin to a specific PID, the change will be passed to the distributed memory of the default group and will also be broadcast to all other default group members using the topic.

This happens for all PIDs except org.apache.karaf.cellar.node, which is blacklisted and will never be written to or read from the distributed memory, nor broadcast via the topic.

Should the user decide, he can add/remove any PID he wishes to/from the whitelist/blacklist.
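As a rough illustration, the group configuration could look like the following (the property names are assumptions based on the description above, so check the org.apache.karaf.cellar.groups PID for the exact keys):

    # Sync every configuration PID in and out of the default group...
    default.config.whitelist.inbound = *
    default.config.whitelist.outbound = *
    # ...except the node-local PID, which never enters or leaves a node.
    default.config.blacklist.inbound = org.apache.karaf.cellar.node
    default.config.blacklist.outbound = org.apache.karaf.cellar.node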

Syncing vs Provisioning
Syncing (changing something on one node and broadcasting the event to all other nodes of the group) is one way of managing the Cellar cluster, but it's not the only way.

Cellar also provides a lot of provisioning capabilities. It provides tools (mostly via the command line) which allow the user to build a detailed profile (configuration and features) for each group.
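For example, a session building a profile for a group could look something like this (the command names are indicative, so check cluster:help for the exact ones):

    karaf@root> cluster:group-create web
    karaf@root> cluster:group-join web node2:5701
    karaf@root> cluster:feature-install web camel-jclouds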

Cellar in action
To see all of the things described so far in action, you can have a look at the following 5-minute Cellar demo:



Note: The video was shot before Cellar's adoption by Karaf, so the feature URL and configuration PIDs are out of date, but the core functionality is the same.


I hope you enjoy it!



4/18/11

Introduction to OSGi and Karaf at JHUG

EDIT: The video links have been updated. New videos provided with improved quality.


I presented on OSGi and Apache Karaf at the Java Hellenic User Group.

It was a great event with very interesting presentations. The full list of presentations can be found here.

Regarding my presentation, I was a bit nervous at first, since I hadn't practiced my "presentation" skills for a while, but things got better as time went by. I had the chance to meet a lot of interesting people and discuss OSGi, Apache Karaf & Apache ServiceMix.
The slides of the presentation can be found at Slide Share.

Apache Karaf Demonstration
Due to time constraints and the extended introduction to OSGi (as the community requested), I didn't have the chance to give a proper Karaf demonstration. However, I made a demo video which I hope will fill the gap. The video can be found at Slide Share (Karaf Demo), or you can download it from my Google Site.

I hope you enjoy both the presentation and the video.

3/3/11

Karaf clustering using Hazelcast

Edit: The project "cellar" has been upgraded with a lot of new features, which are not described in this post. A new post will be added soon.


Prologue
I have been playing a lot with Hazelcast lately, especially pairing it with Karaf. If you haven't already, you can read my previous post on using Hazelcast on Karaf.

In this post I am going to take things one step further and use Hazelcast to build a simple clustering engine on Karaf.

The engine that I am going to build will have the following features:
  • Zero Configuration Clustering
      • Nodes discover each other with no configuration.
  • Configuration Replication
      • multicasting configuration change events.
      • configurable blacklist / whitelist by PID.
      • lifecycle support (can be enabled/disabled using shell).
  • Features Repository & State Replication
      • multicasting repository events (add URL and remove URL).
      • multicasting features state events.
      • configurable blacklist / whitelist by feature.
      • lifecycle support (can be enabled/disabled using shell).
  • Clustering Management
      • distributed command pattern implementation.
      • monitoring and management commands.
Overall architecture
The idea behind the clustering engine is that for each unit that we want to replicate, we create an event, broadcast the event to the cluster and keep the unit's state in a shared resource, so that the rest of the nodes can look it up and retrieve the changes.



Example: We want all nodes in our cluster to share configuration for PIDs a.b.c and x.y.z. On node "Karaf A" a change occurs on a.b.c. "Karaf A" updates the shared repository data for a.b.c and then notifies the rest of the nodes that a.b.c has changed. Each node looks up the shared repository and retrieves the changes.

The role of Hazelcast
The architecture as described so far could be implemented using a database/shared filesystem as a shared resource and polling instead of multicasting events. So why use Hazelcast?

Hazelcast fits in perfectly because it offers:
  • Auto discovery
      • Cluster nodes can discover each other automatically.
      • No configuration is required.
  • No single point of failure
      • No server or master is required for clustering.
      • The shared resource is distributed, hence we introduce no single point of failure.
  • Provides distributed topics
      • Using in-memory distributed topics allows us to broadcast events / commands that are valuable for management and monitoring.
In other words, Hazelcast allows us to set up a cluster with zero configuration and no dependency on external systems, such as a database or a shared file system.
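To make this concrete, the publish side of the pattern looks roughly like this with the Hazelcast 1.x static API (a conceptual sketch, not the engine's actual classes):

    import java.util.Map;
    import java.util.concurrent.ConcurrentMap;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.ITopic;

    public class ConfigBroadcaster {
        // Shared resource: a distributed map with the latest properties per PID.
        private final ConcurrentMap<String, Map<String, String>> sharedConfig =
                Hazelcast.getMap("config");
        // Event channel: a distributed topic that all nodes listen on.
        private final ITopic<String> topic = Hazelcast.getTopic("config-events");

        public void onLocalChange(String pid, Map<String, String> properties) {
            sharedConfig.put(pid, properties); // update the shared repository
            topic.publish(pid);                // tell the other nodes which PID changed
        }
    }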

The implementation
For implementing all of the above we have the following entities:
  • OSGi Listener: An interface that implements a listener for specific OSGi events (e.g. ConfigurationListener).
  • Event: The object that contains all the information required to describe the event (e.g. PID changed).
  • Event Topic: The distributed topic used to broadcast events. It is common for all event types.
  • Shared Map: The distributed collection that serves as the shared resource. We use one per event type.
  • Event Handler: The processor that processes remote events received through the topic.
  • Event Dispatcher: The unit that decides which event should be processed by which event handlers.
  • Command: A special type of event that is linked to a list of events that represent the outcome of the command.
  • Result: A special type of event that represents the outcome of a command. Commands and results are correlated.


In many situations the OSGi spec describes Events and Listeners (e.g. ConfigurationEvent and ConfigurationListener). By implementing such a listener and exposing it as an OSGi service in the Service Registry, I make sure that we "listen" to the events of interest.

When the listener is notified of an event, it forwards the Event object to a Hazelcast distributed topic. To keep things as simple as possible, I keep a single topic for all event types. Each node has a listener registered on that topic and sends all incoming events to the Event Dispatcher.

When the Event Dispatcher receives an event, it looks up an internal registry (in our case the OSGi Service Registry) in order to find an Event Handler that can handle it. If a handler is found, it receives the event and processes it.
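Conceptually, the dispatch step is as simple as this sketch (the real lookup goes through the OSGi Service Registry):

    // Ask the registry for handlers and hand the event to the first match.
    public void dispatch(Event event) {
        for (EventHandler handler : handlers) {
            if (handler.canHandle(event)) {
                handler.handle(event);
                return;
            }
        }
    }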

Broadcasting commands
Commands are a special kind of event. They imply that when they are handled, a Result event will be fired containing the outcome of the command. So for each command we have one result per recipient. Each command contains a unique id (unique across all cluster nodes, created by Hazelcast). This id is used to correlate the request with the results. Each successfully correlated result is added to a list of results on the command object. If the list gets full, or if 10 seconds have elapsed since the command execution, the list is moved to a blocking queue from which the results can be retrieved.

The snippet below shows what happens when a command is sent for execution.
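Reconstructed in rough form (the names are approximations of the real code, not its exact API), it looks like this:

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.ITopic;

    public class ClusterCommandExecutor {
        // Commands awaiting results, correlated by their unique id.
        private final Map<String, Command> pending = new ConcurrentHashMap<String, Command>();
        private final ITopic<Command> topic = Hazelcast.getTopic("command-topic");

        public List<Result> execute(Command command) throws Exception {
            // Register the command so incoming results can be correlated to it.
            pending.put(command.getId(), command);
            // Broadcast the command to every node in the cluster.
            topic.publish(command);
            // The handler side moves the result list to the command's blocking
            // queue when it is full or when 10 seconds have elapsed.
            List<Result> results = command.getResultQueue().poll(10, TimeUnit.SECONDS);
            if (results == null) {
                throw new TimeoutException("No results for command " + command.getId());
            }
            return results;
        }
    }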


Using the source
I created a small project that demonstrates all of the functionality described above and have uploaded it to GitHub, so that I can share it with you, receive feedback and discuss it. The project is called cellar. I couldn't find a more appropriate name to give to a cluster of Karafs.

Once you build the source, you can install it as a Karaf feature, and then you are ready to use the cluster shell commands.




In the image above I start two Karaf instances, install cellar on both, list the cluster members and finally disable the configuration event handler of the first node.

The rest is up to you to explore.
Enjoy!

2/28/11

Hazelcast on Karaf

Prologue
Over the last few months Hazelcast has caught my attention. I first saw the JIRA for the camel-hazelcast component, then I read about it, ran some examples and eventually fell in love with it.

If you are not already familiar with it, Hazelcast is an open source clustering platform which provides a lot of features, such as:

  • Auto discovery
  • Distributed Collections
  • Transactions
  • Data Partitioning
You can visit the Hazelcast Documentation for more information.


In this blog post I will show how to run Hazelcast on Apache Karaf or Apache ServiceMix, and I will provide an example application that creates a Hazelcast instance, deploys the Hazelcast monitoring web application and adds a couple of shell commands to Apache Karaf.

Finally, I will create a Hazelcast topic using Blueprint, and we will create a clustered echo command using that topic.

For all of the above I will provide the full source so that you can try it yourself.

Hazelcast & OSGi 
According to the Hazelcast website, Hazelcast is not yet OSGi ready (it is still on the TODO list). However, I found that versions 1.9.x are ready enough to get you going. In this post I will use the current trunk of the Hazelcast source (1.9.3-SNAPSHOT), for which I have created a couple of patches for the web console and some other minor issues.


Hazelcast Instance as an OSGi service
Even though Hazelcast requires zero configuration, I found it best to create a Hazelcast instance using Spring, pass it the desired configuration and finally expose the instance as an OSGi service.
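In plain Java, that minimal configuration is roughly the following (group name & password are placeholders; in the project this is wired in Spring and the instance is registered as an OSGi service):

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    // Minimal configuration: just the cluster group credentials.
    Config config = new Config();
    config.getGroupConfig().setName("karaf-demo");
    config.getGroupConfig().setPassword("karaf-demo");
    HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);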



In the snippet above I am using a minimal configuration which only sets the credentials of the Hazelcast group. Hazelcast has no dependencies, so the only things required are the hazelcast bundle and the hazelcast monitoring war (if you wish to have access to the web console). From the Karaf shell you can just type:
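Something like the commands below (the Maven coordinates are illustrative; use the ones matching your build):

    karaf@root> features:install war
    karaf@root> osgi:install -s mvn:com.hazelcast/hazelcast/1.9.3-SNAPSHOT
    karaf@root> osgi:install -s war:mvn:com.hazelcast/hazelcast-monitor/1.9.3-SNAPSHOT/war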


Once hazelcast and its monitoring are started, you can browse the hazelcast monitoring at http://localhost:8181/hazelcast-monitor, which looks like the page below.



Building a distributed collection using Blueprint
To create a distributed collection with Hazelcast, all you need is an instance and a unique String identifier that will be used to uniquely identify the collection. Since we have already created an instance and exposed it as an OSGi service, the rest is pretty easy:
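What the Blueprint XML wires up is, in plain Java terms, simply this (the topic name is illustrative):

    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.ITopic;

    // Look the topic up by its unique name; Hazelcast creates it on first use,
    // so every node asking for "echo" gets the same distributed topic.
    ITopic<String> echoTopic = instance.getTopic("echo");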



We will use this distributed topic to build a distributed echo command (a command that will print messages on the console of all nodes). Now we need two simple things:

  1. A listener on that topic that will listen for messages and display them.
  2. A shell command that will put messages to the topic.
A listener could be as simple as this:
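Roughly like this, against the 1.9.x listener API where onMessage receives the payload directly:

    import com.hazelcast.core.ITopic;
    import com.hazelcast.core.MessageListener;

    public class EchoListener implements MessageListener<String> {

        public EchoListener(ITopic<String> topic) {
            // Subscribe ourselves, so every published message reaches this node.
            topic.addMessageListener(this);
        }

        // Print each message published to the topic on this node's console.
        public void onMessage(String message) {
            System.out.println("echo: " + message);
        }
    }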

This is a simple POJO that contains a topic and acts as a MessageListener on that topic. For each message added to the topic, this listener displays it on the standard output. We can add this POJO to our Blueprint XML.


What's left to be done is to create the shell command that actually publishes the message to the topic.
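A sketch of such a command using the Karaf 2.x shell API (the scope/name and the topic wiring are illustrative):

    import com.hazelcast.core.ITopic;
    import org.apache.felix.gogo.commands.Argument;
    import org.apache.felix.gogo.commands.Command;
    import org.apache.karaf.shell.console.OsgiCommandSupport;

    @Command(scope = "hazelcast", name = "echo", description = "Echoes a message to all cluster nodes")
    public class EchoCommand extends OsgiCommandSupport {

        @Argument(index = 0, name = "message", required = true)
        private String message;

        // Injected via Blueprint, pointing at the shared "echo" topic.
        private ITopic<String> topic;

        @Override
        protected Object doExecute() {
            // Publishing reaches the listener on every node, including this one.
            topic.publish(message);
            return null;
        }

        public void setTopic(ITopic<String> topic) {
            this.topic = topic;
        }
    }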


Putting it all together
We can now start two Karaf nodes, either on the same machine or on separate machines in the same network, deploy hazelcast and its monitoring, and finally deploy the instance, the topic and the commands as described so far.

Let's try the command:
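With the sketches above, the session on each node would look something like this (illustrative output):

    karaf@root> hazelcast:echo hello
    echo: hello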




Using the full source
The code can be found on GitHub at https://github.com/iocanel/blog/tree/master/karaf-hazelcast. It consists of 3 Maven modules:

  1. instance (contains a Spring DM descriptor which creates the instance).
  2. shell (a shell module which contains a couple of hazelcast commands, including the echo command).
  3. feature (a feature descriptor for easier installation of the above modules and their dependencies).
Once you build the project, from the Karaf shell you can run:
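Roughly the following (the feature URL and feature name below are placeholders; use the ones from the project's feature module):

    karaf@root> features:addurl mvn:com.iocanel/feature/1.0.0-SNAPSHOT/xml/features
    karaf@root> features:install karaf-hazelcast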


Enjoy!

Note: I am planning to blog more on the subject if I have the time, so stick around.