Voxxed Thessaloniki 2016<h3>
Intro</h3>
I am back home after a great <a href="https://voxxeddays.com/thessaloniki/">Voxxed Days</a> event in Thessaloniki, and it's a good chance to write a blog post about it <i>(something I haven't done in ages)</i>. It was one of the best-organized conferences I've ever been to, so kudos to the organization team.<br />
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEij1ufmP6fd0raTPasDz4qMNclmtpmMKlDCCOXgn-KJ3jDBD6NoLDglz6f9081yS35uGZ_mmlcM9n0-iYXLEvl2T48dwBA4vY577d1hawThX_F4w20jEeiAHgJvcfvUcmwDoE5kn649kG0/s1600/14330985_10210450406916749_858546220_n-640x360.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="180" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEij1ufmP6fd0raTPasDz4qMNclmtpmMKlDCCOXgn-KJ3jDBD6NoLDglz6f9081yS35uGZ_mmlcM9n0-iYXLEvl2T48dwBA4vY577d1hawThX_F4w20jEeiAHgJvcfvUcmwDoE5kn649kG0/s320/14330985_10210450406916749_858546220_n-640x360.jpg" width="320" /></a></div>
<div>
<br />
<div>
<h3>
My talk</h3>
</div>
<div>
I spoke about '<i>getting started with microservices on kubernetes</i>'. You can find my reveal.js presentation on <a href="https://github.com/iocanel/voxxedthess2016">github</a> or, if you prefer, the dockerized version on <a href="https://hub.docker.com/r/iocanel/voxxed-thess/">dockerhub</a>.</div>
<div>
<br /></div>
<div>
The talk itself went great and I really enjoyed it, especially after the initial ramp-up! I'll add some photos as soon as I get some.</div>
<div>
<br /></div>
<div>
<b>Errata</b>: I'd like to correct myself before anyone else does: "<i>A pod will not get recreated if a container fails the liveness probe; instead, the container will be restarted according to the pod's restart policy.</i>".</div>
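To make the corrected behaviour concrete, here is a minimal, hypothetical pod manifest: when the probe command fails, the kubelet restarts that container in place according to restartPolicy; the pod object itself is not recreated.

```yaml
# Hypothetical pod manifest illustrating the errata:
# a failing liveness probe restarts the container, not the pod.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  restartPolicy: Always        # governs container restarts within the pod
  containers:
  - name: app
    image: busybox
    args: ["sh", "-c", "touch /tmp/healthy && sleep 3600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 10
```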
<div>
<br /></div>
<div>
<h3>
What I liked</h3>
</div>
</div>
<div>
Conferences are all about people, and this conference had it all! I met lots of great people and also had the chance to meet a lot of friends that I hadn't seen for a very long time. </div>
<div>
<br /></div>
<div>
The venue (rooms, surroundings) was also great <i>(though as a speaker I did find the cinema room with its gargantuan screen a little bit intimidating. Just joking! Well, actually not!)</i>.</div>
<div>
<br /></div>
<div>
I haven't talked about the excellent organization yet, have I?</div>
<div>
<br /></div>
<div>
<h3>
What I didn't like</h3>
</div>
<div>
The fact that I was speaking at the same time as <a href="https://twitter.com/PeterHilton">Peter Hilton</a>, so I couldn't attend his talk <i>(which, from what I've heard, was phenomenal)</i>.</div>
<div>
<br /></div>
<div>
<div>
<h3>
Voxxed Days Athens</h3>
</div>
</div>
<div>
At the end of the conference, <a href="https://voxxeddays.com/athens/">Voxxed Days Athens</a> was announced and I am really looking forward to it!</div>
A kubernetes workflow plugin<b>Intro</b><br />
<br />
The last couple of months I've been experimenting with <a href="https://jenkins-ci.org/">Jenkins</a> and how to best integrate it with <a href="https://www.docker.com/">Docker</a> and <a href="http://kubernetes.io/">Kubernetes</a>. A couple of months ago I even blogged about possible setups that involve the use of the <a href="https://github.com/jenkinsci/docker-workflow-plugin">Docker Workflow Plugin</a> inside <a href="http://kubernetes.io/">Kubernetes</a> <i>(you can find the post <a href="http://iocanel.blogspot.gr/2015/09/jenkins-setups-for-kubernetes-and.html">here</a>)</i>.<br />
<br />
<div style="text-align: justify;">
While the <a href="https://github.com/jenkinsci/docker-workflow-plugin">Docker Workflow Plugin</a> is really great, it still doesn't cover some special needs that a <a href="http://kubernetes.io/">Kubernetes</a> user might have, such as <a href="http://kubernetes.io/v1.1/docs/user-guide/secrets.html">secrets</a>. A typical workflow is more than likely to need access to remote repositories, either to check out code, push artifacts, etc., and using <a href="http://kubernetes.io/v1.1/docs/user-guide/secrets.html">secrets</a> in <a href="http://kubernetes.io/">Kubernetes</a> is the cleanest and most secure way to share credentials for those resources. </div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Not being able to use <a href="http://kubernetes.io/v1.1/docs/user-guide/secrets.html">secrets</a> was pretty much a blocker for us, and we desperately needed it for <a href="http://fabric8.io/guide/fabric8DevOps.html">Fabric8 DevOps</a>. So, we thought that we should migrate the concept of running builds inside containers to running builds inside <a href="http://kubernetes.io/v1.1/docs/user-guide/pods.html">pods</a>, which led to the implementation of the <a href="https://github.com/fabric8io/kubernetes-workflow">Kubernetes Workflow Plugin</a>.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
<b style="text-align: start;">The Kubernetes workflow plugin</b></div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Here is a small snippet that demonstrates how you can use the <a href="https://github.com/fabric8io/kubernetes-workflow">Kubernetes Workflow Plugin</a> to create a pod that performs a Maven build:</div>
<br />
<script src="https://gist.github.com/a5ade9a2f0de5c2ac548.js"></script>The beauty of it is that you can just use the standard <a href="https://hub.docker.com/_/maven/">maven</a> image and run your build inside it (as one would do with the <a href="https://github.com/jenkinsci/docker-workflow-plugin">Docker Workflow Plugin</a>). On top of that, it allows you to mount your GPG keys using a <a href="http://kubernetes.io/v1.1/docs/user-guide/secrets.html">secrets</a> <a href="http://kubernetes.io/v1.1/docs/user-guide/volumes.html">volume</a>.<br />
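For readers who cannot render the embedded gist, such a pipeline script looked roughly like the sketch below. The step and parameter names follow the plugin's DSL as I remember it, so treat them as illustrative rather than authoritative; the repository URL is made up.

```groovy
// Sketch: run a Maven build inside a pod created by the Kubernetes Workflow Plugin.
kubernetes.pod('buildpod')
    .withImage('maven')                                     // the stock maven image
    .withSecret('jenkins-gpg-key', '/home/jenkins/.gnupg')  // mount a secret as a volume
    .inside {
        git 'https://github.com/example/project.git'        // hypothetical repository
        sh 'mvn clean install'
    }
```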
<br />
A detailed list of the plugin's features:<br />
<ul>
<li><b>Running Builds inside Pods</b></li>
<ul>
<li>Environment variables</li>
<li>Privileged containers</li>
<li>Volumes</li>
<ul>
<li>Secrets</li>
<li>Host Path Mounts</li>
<li>Empty Dir Mounts</li>
</ul>
</ul>
</ul>
<ul>
<li><b>Manipulating Docker Images</b></li>
<ul>
<li>Building</li>
<li>Tagging</li>
<li>Pushing</li>
</ul>
</ul>
<div>
<br /></div>
<div style="text-align: justify;">
<b style="text-align: start;">Building, tagging and pushing docker images</b><br />
<br />
The plugin also allows you to build, tag and push images to a docker registry. Here's a snippet that demonstrates how to do it:<br />
<script src="https://gist.github.com/3422fc4546f23256eb49.js"></script>
The example clones a <a href="https://nodejs.org/en/">NodeJS</a> project, creates a simple <a href="https://docs.docker.com/engine/reference/builder/">Dockerfile</a> for it and then triggers a build. Finally, it tags the built image and pushes it to a <a href="https://docs.docker.com/registry/">Docker Registry</a>. In this example "<b>default</b>" is the project name and "<b>172.30.101.121:5000</b>" is the address of the registry. The example was written against <a href="https://www.openshift.com/">Openshift</a>, and the plugin is smart enough to handle authentication to <a href="https://www.openshift.com/">Openshift</a>. Of course, it also supports reading auth configuration from "<u><i>${user.home}/.docker/config.json</i></u>" as well as specifying it as part of the DSL.<br />
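The gist itself is not reproduced here; as a rough illustration only, the flow it describes was along these lines. Note that buildImage, tagImage and pushImage below are my own placeholder names, not necessarily the plugin's exact DSL.

```groovy
// Illustration of the build/tag/push flow; step names are placeholders.
node {
    git 'https://github.com/example/nodejs-app.git'            // hypothetical repo
    writeFile file: 'Dockerfile',
              text: 'FROM node\nCOPY . /app\nCMD ["node", "/app/index.js"]'
    buildImage name: 'default/nodejs-app', path: '.'
    tagImage   name: 'default/nodejs-app', to: '172.30.101.121:5000/default/nodejs-app'
    pushImage  name: '172.30.101.121:5000/default/nodejs-app'
}
```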
<br />
<b><u>Note</u></b>: The <b>building</b> and <b>pushing</b> of Docker images could be handled by the <a href="https://github.com/jenkinsci/docker-workflow-plugin">Docker Workflow Plugin</a> too, <u><i>if</i></u> the Docker binaries were available on the node. Why? Because <i>that plugin actually calls the golang Docker client via shell</i>. If the step runs on the master, the master needs the binaries; if it runs on a slave, the slave needs them; and if it runs inside the pod, the pod needs them <i>(which is not ideal)</i>. To gain flexibility, the <a href="https://github.com/fabric8io/kubernetes-workflow">Kubernetes Workflow Plugin</a> uses Java to talk to <a href="https://www.docker.com/">Docker</a> instead.<br />
<br />
<b>Stay tuned</b><br />
<br />
More features, posts and videos soon...<br />
<br /></div>
Jenkins setups for Kubernetes and Docker Workflow<h2>
<span style="font-weight: normal;">Intro</span></h2>
<div style="text-align: justify;">
During the summer I had the chance to play a little bit with <a href="https://github.com/jenkinsci/jenkins/">Jenkins</a> inside <a href="https://github.com/kubernetes/kubernetes">Kubernetes</a>. More specifically I wanted to see what's the best way to get the <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin">Docker Workflow Plugin</a> running.</div>
<div>
<br /></div>
<div style="text-align: justify;">
So, the idea was to have a pod running <a href="https://github.com/jenkinsci/jenkins/">Jenkins</a> and use it to run builds that are defined using the <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin">Docker Workflow Plugin</a>. After a lot of reading and a lot more experimenting I found out that there are many ways of doing this, each with different pros and cons. </div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
This post goes through all the available options. More specifically:</div>
<div style="text-align: justify;">
<ol>
<li>Builds running directly on Master</li>
<li>Using the <a href="https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin">Docker Plugin</a> to start Slaves</li>
<li>Using the <a href="https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin">Docker Plugin</a> and <a href="https://github.com/jpetazzo/dind">Docker in Docker</a></li>
<li>Using <a href="https://wiki.jenkins-ci.org/display/JENKINS/Swarm+Plugin">Swarm</a> clients</li>
<li><a href="https://wiki.jenkins-ci.org/display/JENKINS/Swarm+Plugin">Swarm</a> with <a href="https://github.com/jpetazzo/dind">Docker in Docker</a></li>
</ol>
</div>
<div>
Before I go through all the possible setups, I think that it might be helpful to describe what all these plugins are.</div>
<h2>
<a href="https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin" style="text-align: justify;">Docker Plugin</a></h2>
<div>
A <a href="https://github.com/jenkinsci/jenkins/">Jenkins</a> plugin that uses <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin">Docker</a> to create and use slaves. It communicates with <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin">Docker</a> over HTTP in order to create new containers. These containers only need to be Java-ready and run <b>SSHD</b>, so that the master can SSH into them and do its magic. There are a lot of slave container images on the internet; the most popular at the time of my research was the <a href="https://github.com/evarga/docker-images/tree/master/jenkins-slave">evarga jenkins slave</a>.</div>
<div>
<br /></div>
<div>
The plugin is usable but feels a little bit flaky: it creates the <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin">Docker</a> container, but sometimes it fails to connect to the slave and retries <i>(it usually takes 2 to 3 attempts)</i>. I tried many different slave images and many different authentication methods <i>(password, key auth, etc.)</i> with similar experiences. </div>
<h2>
<a href="https://wiki.jenkins-ci.org/display/JENKINS/Swarm+Plugin" style="text-align: justify;"><b>Swarm</b></a></h2>
<div>
Having a plugin create the slaves is one approach. The other is "bring your own slaves", and this is pretty much what Swarm is all about. The idea is that the <a href="https://github.com/jenkinsci/jenkins/">Jenkins</a> master runs the <a href="https://wiki.jenkins-ci.org/display/JENKINS/Swarm+Plugin">Swarm</a> plugin and the users are responsible for starting the swarm clients <i>(it's just a Java process)</i>. </div>
<div>
<br /></div>
<script src="https://gist.github.com/12c0ae1e103fd94bef4c.js"></script>
The client connects to the master and lets it know that it is up and running. Then the master is able to start builds on the client.<br />
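Launching a client is typically a single command along these lines. The master URL and credentials are placeholders; check your Swarm client version's help output for the exact flags.

```shell
# Join a Jenkins master as a swarm slave with two executors.
java -jar swarm-client.jar \
     -master http://jenkins.example.com:8080 \
     -username admin -password secret \
     -executors 2 \
     -labels docker
```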
<br />
<h2>
<a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin" style="text-align: justify;">Docker Workflow Plugin</a></h2>
<div>
This plugin allows you to use <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin">Docker</a> images and containers in workflow scripts; in other words, to execute workflow steps inside <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin">Docker</a> containers and to create <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin">Docker</a> images from workflow scripts.<br />
<br />
Why?<br />
<br />
To encapsulate all the requirements of your build in a <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin">Docker</a> image and not worry about how to install and configure them.<br />
Here's what an example Docker Workflow script looks like:
<br />
<br />
<script src="https://gist.github.com/f11bc1c577ae7112ea90.js"></script>
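The gist aside, the typical shape of such a script is a docker.image(...).inside { } block whose body is executed inside the named container. The image tag and repository below are illustrative:

```groovy
// Run the build steps inside a throwaway Maven container.
node {
    docker.image('maven:3.3.3-jdk-8').inside {
        git 'https://github.com/example/app.git'  // hypothetical repository
        sh 'mvn -B clean install'                 // executes inside the container
    }
}
```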
<b><u>Note</u></b>: You don't need to use the <a href="https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin">Docker Plugin</a> to use the <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin">Docker Workflow Plugin</a>.<br />
<b><u>Also</u></b>: The <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin">Docker Workflow Plugin</a> uses the <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin">Docker</a> binary. This means that you need to have the Docker client installed wherever you intend to use the <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin">Docker Workflow Plugin</a>.<br />
<u style="font-weight: bold;">Almost forgot</u>: The "<i>executor</i>" of the build and the containers that participate in the workflow need to share the project workspace. I won't go into details right now; just keep in mind that it usually requires access to specific paths on the Docker host <i>(or some sort of shared filesystem)</i>. Failure to satisfy this requirement leads to "hard to detect" issues like builds hanging forever. </div>
<div>
<br /></div>
<div>
Now we are ready to see what the possible setups are.</div>
<h2>
No slaves</h2>
<div>
This is the simplest approach. It doesn't involve <a href="https://github.com/jenkinsci/jenkins/" style="text-align: justify;">Jenkins</a> slaves, the builds run directly on the master by configuring a fixed pool of executors.<br />
<br />
Since there are no slaves, the container that runs <a href="https://github.com/jenkinsci/jenkins/" style="text-align: justify;">Jenkins</a> itself will need to have the <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin" style="text-align: justify;">Docker</a> binary installed and configured to point to the actual <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin" style="text-align: justify;">Docker</a> host.<br />
<br />
<u><i>How do you use the Docker host inside Kubernetes?</i></u><br />
<u><i><br /></i></u>
There are two approaches:<br />
<ol>
<li>Using the Kubernetes API</li>
<li>By mounting /var/run/docker.sock</li>
</ol>
<div>
You can do (1) by using a simple shell script like the one below.</div>
<br />
<script src="https://gist.github.com/41f2923a9cbddb6d509d.js"></script>
<br />
<div>
You can do (2) by specifying a hostPath volume mount on the <a href="https://github.com/jenkinsci/jenkins/" style="text-align: justify;">Jenkins</a> pod.</div>
<br />
<script src="https://gist.github.com/bed98f7cd6a830c3d152.js"></script>
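The relevant fragment of such a pod definition would look roughly like this (the container name and image are illustrative):

```json
{
  "volumes": [
    {"name": "docker-socket", "hostPath": {"path": "/var/run/docker.sock"}}
  ],
  "containers": [
    {
      "name": "jenkins",
      "image": "jenkins",
      "volumeMounts": [
        {"name": "docker-socket", "mountPath": "/var/run/docker.sock"}
      ]
    }
  ]
}
```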
<br />
<div>
An actual example of such a setup can be found <a href="https://raw.githubusercontent.com/iocanel/jenkins-poc/master/single-docker-daemon/kubernetes.json">here</a>.<br />
<br />
<i><u>Pros</u></i><br />
<br />
<ol>
<li><i>Simplest possible approach</i></li>
<li><i>Minimal number of plugins</i></li>
</ol>
</div>
<br />
<i><u>Cons</u></i><br />
<br />
<ol>
<li><i>Doesn't scale</i></li>
<li><i>Direct access to the <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin" style="font-style: normal; text-align: justify;">Docker</a> daemon</i></li>
<li><i>Requires access to specific paths on the host (see notes on <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin" style="text-align: justify;">Docker Workflow Plugin</a>)</i></li>
</ol>
<br />
<h2>
Docker Plugin managed Slaves</h2>
<div>
The previous approach doesn't scale, for obvious reasons. Since <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin">Docker</a> and <a href="https://github.com/kubernetes/kubernetes">Kubernetes</a> are already in place, it sounds like a good idea to use them as a pool of resources.<br />
<br />
<div style="text-align: justify;">
So we can add the <a href="https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin">Docker Plugin</a> and have it create a slave container for each build we want to run. This means that we need a <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin">Docker</a> container that has access to the <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin">Docker</a> binary <i>(a Docker Workflow requirement)</i> and also mounts the project workspace from the master.<br />
<br />
As mentioned above, the master will need to connect via SSH into the slave. For this to succeed, either credentials or the proper SSH keys need to be configured. In both cases, the XML configuration of the Docker Plugin needs to be updated to refer to the id of the <a href="https://github.com/jenkinsci/jenkins/">Jenkins</a> credentials configuration <i>(for example, see this <a href="https://github.com/iocanel/jenkins-poc/blob/master/single-docker-daemon/master/image/jenkins/config.xml#L46">config.xml</a>)</i>.<br />
<br />
So what exactly is this id?<br />
<br />
<a href="https://github.com/jenkinsci/jenkins/">Jenkins</a> uses the <a href="https://wiki.jenkins-ci.org/display/JENKINS/Credentials+Plugin">Credentials Plugin</a> to store and retrieve credentials. Each set of credentials has a unique id, and other plugins can use this id to refer to it. For security reasons the passwords, passphrases, etc. are not stored in plain text; instead they are encrypted, using a key derived <i>(via <a href="https://en.wikipedia.org/wiki/SHA-2">SHA-256</a>)</i> from the master key. The encryption key is itself stored encrypted so that things are more secure. You can find more details on the subject in this great post on "<a href="http://xn--thibaud-dya.fr/jenkins_credentials.html">Credentials storage in Jenkins</a>".<br />
<br />
What I want you to note is that, due to the way credentials are stored in <a href="https://github.com/jenkinsci/jenkins/">Jenkins</a>, it's not trivial to create a master and a slave image that talk to each other without human interaction. One could try to use scripts like:<br />
<br />
<script src="https://gist.github.com/45ceb0b688aa19a8ea67.js"></script>
to generate the secret and the master key. And to encrypt a password with them, you can use a script like:
<br />
<br />
<script src="https://gist.github.com/0602fba0bc51b070e0dd.js"></script>
to actually encrypt the passwords. I wouldn't recommend this to anyone; I am just showing the scripts to emphasize how complex this is.
Of course, scripts like these make use of details internal to the <a href="https://wiki.jenkins-ci.org/display/JENKINS/Credentials+Plugin">Credentials Plugin</a> and feel a little hacky. A slightly more elegant approach I found is to configure credentials by dropping the following Groovy script inside Jenkins' <span style="background-color: white; color: #333333; font-family: Consolas, 'Liberation Mono', Menlo, Courier, monospace; font-size: 12px; line-height: 16px; text-align: start; white-space: pre;">init.groovy.d</span>:
<br />
<br />
<script src="https://gist.github.com/9de5c976cc0bd5011653.js"></script>
The snippet above demonstrates how to create both username/password credentials and an SSH private key with an empty passphrase.
<span style="background-color: white; color: #333333; font-family: Consolas, 'Liberation Mono', Menlo, Courier, monospace; font-size: 12px; line-height: 16px; text-align: start; white-space: pre;"><br /></span></div>
<div style="text-align: justify;">
<br /></div>
</div>
<i><u>Pros</u></i><br />
<ol>
<li><i>Simple enough</i></li>
</ol>
</div>
<i><u>Cons</u></i><br />
<ol>
<li><i>The <a href="https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin" style="text-align: justify;">Docker Plugin</a> is not quite there yet</i></li>
<li><i>Direct access to the <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin" style="font-style: normal; text-align: justify;">Docker</a> daemon</i></li>
<li><i>Requires access to specific paths on the host (see notes on <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin" style="text-align: justify;">Docker Workflow Plugin</a>)</i></li>
</ol>
Even if we put the issues with the <a href="https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin" style="text-align: justify;">Docker Plugin</a> aside, I'd still like to go for an approach that wouldn't directly talk to the <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin" style="text-align: justify;">Docker</a> daemon that is running behind <a href="https://github.com/kubernetes/kubernetes" style="text-align: justify;">Kubernetes</a>.<br />
<h2>
Docker Plugin managed Slaves with D.I.N.D.</h2>
<div>
Why would one want to use <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin" style="text-align: justify;">Docker</a> in <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin" style="text-align: justify;">Docker</a>?<br />
<br />
In our case, in order to avoid going behind <a href="https://github.com/kubernetes/kubernetes" style="text-align: justify;">Kubernetes</a>' back.<br />
<br />
The number of possibilities here grows. One could use DIND directly on the <a href="https://github.com/kubernetes/kubernetes" style="text-align: justify;">Kubernetes</a> master, or one could combine it with the <i><a href="https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin" style="text-align: justify;">Docker Plugin</a></i> so that each slave runs its own daemon and is 100% isolated.<br />
<br />
Either way, what happens during the build is completely isolated from the rest of the world. On the other hand it does require the use of <a href="https://docs.docker.com/reference/run/#runtime-privilege-linux-capabilities-and-lxc-configuration">privileged</a> mode. This can be an issue as the mode may not be available in some environments <i>(i.e. it wasn't available on <a href="https://cloud.google.com/container-engine/">Google Container Engine</a> last time I checked).</i><br />
<i><br /></i>
<i>Note: Hosting a Docker daemon in the slave frees us from the requirement of using volume mounts on the outer Docker (remember, only the executor and the workflow steps need to share the workspace).</i>
<i><br /></i>
<br />
<div>
<i><u><br /></u></i>
<i><u>Pros</u></i><br />
<ol>
<li><i>100% Isolation</i></li>
<li><i>Doesn't require access to specific paths on outer docker!</i></li>
</ol>
</div>
<i><u>Cons</u></i><br />
<ol>
<li><i>Complexity</i></li>
<li><i>Requires Privileged Mode</i></li>
<li><i>Docker images are not "cached"</i></li>
</ol>
</div>
<h2>
Using Swarm Clients</h2>
<div>
D.I.N.D. or not, one still has to come up with a solution for scaling, and the <a href="https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin">Docker Plugin</a> so far doesn't seem like an ideal solution. Also, the equivalent of the <a href="https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin">Docker Plugin</a> for <a href="https://github.com/kubernetes/kubernetes">Kubernetes</a> (the <a href="https://wiki.jenkins-ci.org/display/JENKINS/Kubernetes+Plugin">Kubernetes Plugin</a>) seems to need a little more attention. So we are left with <a href="https://wiki.jenkins-ci.org/display/JENKINS/Swarm+Plugin">Swarm</a>.<br />
<br />
Using <a href="https://wiki.jenkins-ci.org/display/JENKINS/Swarm+Plugin">Swarm</a> does seem like a good fit: since we are using <a href="https://wiki.jenkins-ci.org/display/JENKINS/Kubernetes+Plugin">Kubernetes</a>, it's pretty trivial to start N containers running the <a href="https://wiki.jenkins-ci.org/display/JENKINS/Swarm+Plugin">Swarm</a> client. We could use a replication controller with the appropriate image.<br />
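Such a replication controller could be sketched as follows; the client image name here is hypothetical.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: jenkins-swarm
spec:
  replicas: 3                    # N swarm clients
  selector:
    app: jenkins-swarm
  template:
    metadata:
      labels:
        app: jenkins-swarm
    spec:
      containers:
      - name: swarm-client
        image: example/jenkins-swarm-client   # hypothetical client image
```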
<br />
<div>
<i><u>Pros</u></i><br />
<ol>
<li><i>Fast</i></li>
<li><i>Scalable</i></li>
<li><i>Robust</i></li>
</ol>
</div>
<i><u>Cons</u></i><br />
<ol>
<li><i>Slaves need to be managed externally.</i></li>
<li><i>Requires access to specific paths on the host (see notes on <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin" style="text-align: justify;">Docker Workflow Plugin</a>)</i></li>
</ol>
</div>
<h2>
Using Swarm Clients with D.I.N.D.</h2>
<div>
The main issue with D.I.N.D. in this use case is that the images in the inner <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Workflow+Plugin">Docker</a> are not cached. One could try to experiment with sharing the <a href="https://docs.docker.com/registry/">Docker Registry</a>, but I am not sure if this is even possible.<br />
<br />
On the other hand with most of the remaining options we need to use hostPath mounts, which may not work in some environments.<br />
<br />
A solution that solves both of the issues above is to combine <a href="https://wiki.jenkins-ci.org/display/JENKINS/Swarm+Plugin" style="text-align: justify;">Swarm</a> with D.I.N.D.<br />
<br />
With <a href="https://wiki.jenkins-ci.org/display/JENKINS/Swarm+Plugin">Swarm</a>, the clients persist <i>(rather than getting wiped after each build)</i>, which solves the image caching issue.<br />
<br />
Also, with D.I.N.D. we no longer need to use hostPath mounts via Kubernetes.<br />
<br />
So we have a win-win.<br />
<br />
<div>
<i><u>Pros</u></i><br />
<ol>
<li><i>Fast</i></li>
<li><i>Scalable</i></li>
<li><i>Robust</i></li>
<li><i>100% Isolation</i></li>
<li><i>Images are cached</i></li>
</ol>
</div>
<i><u>Cons</u></i><br />
<ol>
<li><i>Slaves need to be managed externally.</i></li>
</ol>
<div>
<i><br /></i></div>
</div>
<h2>
Closing thoughts</h2>
<div>
I tried all of the above setups as part of a PoC I was doing: "<a href="https://github.com/iocanel/jenkins-poc">Jenkins for Docker Workflow on Kubernetes</a>", and I thought that I should share. There are still things I'd like to try, like:<br />
<br />
<ul>
<li>Use <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/design/secrets.md">secrets</a> for authentication to the slaves.</li>
<li>Remove clutter</li>
<li>etc</li>
</ul>
<br />
Feel free to add experiences, suggestions and corrections in the comments.<br />
I hope you found it useful.<br />
<br />
<br /></div>
Fabric8 Kubernetes and Openshift Java DSL<b>Intro</b><br />
<br />
<div style="text-align: justify;">
The first releases of <a href="http://fabric8.io/guide/overview.html">Fabric8 v2</a> have been using a JAX-RS based <a href="https://github.com/googlecloudplatform/kubernetes/">Kubernetes</a> client built on <a href="http://cxf.apache.org/docs/jax-rs.html">Apache CXF</a>. The client was great, but we always wanted to provide something thinner, with fewer dependencies <i>(so that it's easier to adopt)</i>. We also wanted to give it a facelift and build a DSL around it, so that it becomes easier to use and read.</div>
<br />
The new client currently lives at: <a href="https://github.com/fabric8io/kubernetes-client">https://github.com/fabric8io/kubernetes-client</a> and it provides the following modules:<br />
<ol>
<li>A <a href="https://github.com/googlecloudplatform/kubernetes/">Kubernetes</a> client.</li>
<li>An <a href="https://github.com/openshift/origin/">Openshift</a> client.</li>
<li>A mocking framework for all of the above (based on <a href="https://github.com/easymock/easymock">EasyMock</a>)</li>
</ol>
<div>
<b>A first glance at the client</b><br />
<b><br /></b></div>
Let's have a quick look at how you can create, list and delete things using the client:<br />
<br />
<script src="https://gist.github.com/c4821ebe5367a2b9ee90.js"></script>
The snippet above is pretty much self-explanatory <i>(and that's the beauty of using a DSL)</i>, but I still have a blog post to fill, so I'll provide as many details as possible.<br />
<div>
<br /></div>
<div>
<b>The client domain model</b><br />
<br />
You could think of the client as a union of two things:<br />
<ol>
<li>The <a href="https://github.com/googlecloudplatform/kubernetes/">Kubernetes</a> domain model.</li>
<li>The DSL around the model.</li>
</ol>
</div>
<div>
The domain model is a set of objects that represent the data exchanged between the client and <a href="https://github.com/googlecloudplatform/kubernetes/" style="text-align: justify;">Kubernetes</a> / <a href="https://github.com/openshift/origin/">Openshift</a>. The raw format of the data is JSON. These JSON objects are quite complex and their structure is pretty strict, so hand crafting them is not a trivial task.<br />
<br />
We needed a way of manipulating these JSON objects in Java <i>(while being able to take advantage of code completion etc)</i> that also stays as close as possible to the original format. A POJO representation of the JSON objects can be used for manipulation, but it doesn't quite feel like JSON and becomes unwieldy for JSON with deep nesting. So instead, we decided to generate fluent builders on top of those POJOs that use the exact same structure as the original JSON.<br />
<br />
For example, here is the JSON object of a <a href="https://github.com/googlecloudplatform/kubernetes/" style="text-align: justify;">Kubernetes</a> Service:<br />
<br />
<script src="https://gist.github.com/79fc34c1d014be038aa6.js"></script>
The Java equivalent using Fluent Builders could be:<br />
<br />
<script src="https://gist.github.com/60a90261b8dc6f6fa08e.js"></script>
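The generated builders follow the familiar nested fluent-builder pattern. To illustrate the idea, here is a tiny hand-written sketch <i>(this is NOT the real generated code — the actual classes are generated by sundrio from the model POJOs; the names below are simplified for illustration)</i>:

```java
// Toy illustration of the fluent-builder pattern used by the generated model.
// Each with* call mirrors a field of the underlying JSON object, so the Java
// code reads almost like the JSON it produces.
public class ServiceBuilderSketch {

    static class Service {
        String id;
        int port;
        int containerPort;
    }

    static class ServiceFluent {
        private final Service service = new Service();

        ServiceFluent withId(String id) {
            service.id = id;
            return this;
        }

        ServiceFluent withPort(int port) {
            service.port = port;
            return this;
        }

        ServiceFluent withContainerPort(int containerPort) {
            service.containerPort = containerPort;
            return this;
        }

        Service build() {
            return service;
        }
    }

    static Service example() {
        return new ServiceFluent()
                .withId("mysql")
                .withPort(3306)
                .withContainerPort(3306)
                .build();
    }

    public static void main(String[] args) {
        Service s = example();
        System.out.println(s.id + " -> " + s.port);
    }
}
```

The point is that the builder chain has the same shape as the JSON document, which is what makes deeply nested objects manageable from Java.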
The domain model lives on its own project: <a href="https://github.com/fabric8io/kubernetes-model">Fabric8's Kubernetes Model</a>. The model is generated from <a href="https://github.com/googlecloudplatform/kubernetes/">Kubernetes</a> and <a href="https://github.com/openshift/origin/">Openshift</a> code after a long process:<br />
<ol>
<li>Go source is converted to a JSON schema.</li>
<li>The JSON schema is converted to POJOs.</li>
<li>Fluent builders are generated on top of the POJOs.</li>
</ol>
<div>
Fluent builders are generated by a tiny project called <a href="https://github.com/sundrio/sundrio">sundrio</a>, which I'll cover in a future post.</div>
<br />
<br />
<b>Getting an instance of the client</b><br />
<b><br /></b>
Getting an instance of the <a href="https://raw.githubusercontent.com/fabric8io/kubernetes-client/master/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/DefaultKubernetesClient.java">default client</a> is pretty trivial, since an empty constructor is provided. When the empty constructor is used, the client will use the following default settings:<br />
<br />
<ul>
<li>Kubernetes URL</li>
<ol>
<li>System property "<b>kubernetes.master</b>"</li>
<li>Environment variable "<b>KUBERNETES_MASTER</b>"</li>
<li>From the "<b>.kube/config</b>" file in the user's home directory.</li>
<li>Using DNS: "<b>https://kubernetes.default.svc</b>"</li>
</ol>
<li>Service account path "<b>/var/run/secrets/kubernetes.io/serviceaccount/</b>"</li>
</ul>
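The lookup order above can be sketched as follows <i>(a simplified illustration of the resolution order, not the client's actual code; the real implementation also consults the ~/.kube/config file, which is omitted here)</i>:

```java
// Simplified sketch of the default master-URL resolution order described above.
// Step 3 (~/.kube/config) is omitted for brevity.
public class MasterUrlResolver {

    static final String DEFAULT_MASTER = "https://kubernetes.default.svc";

    static String resolveMasterUrl(String systemProperty, String envVar) {
        if (systemProperty != null && !systemProperty.isEmpty()) {
            return systemProperty;   // 1. -Dkubernetes.master=...
        }
        if (envVar != null && !envVar.isEmpty()) {
            return envVar;           // 2. KUBERNETES_MASTER
        }
        return DEFAULT_MASTER;       // 4. cluster-internal DNS name
    }

    public static void main(String[] args) {
        String url = resolveMasterUrl(System.getProperty("kubernetes.master"),
                                      System.getenv("KUBERNETES_MASTER"));
        System.out.println("Using master: " + url);
    }
}
```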
<div>
<br /></div>
<div>
More fine-grained configuration can be provided by passing an instance of the <a href="https://raw.githubusercontent.com/fabric8io/kubernetes-client/master/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/Config.java">Config</a> object.</div>
<br />
<script src="https://gist.github.com/7023de7cfbbf89ed7e35.js"></script>
<div>
<b>Client extensions and adapters</b></div>
<div>
<br /></div>
<div>
To support <a href="https://github.com/googlecloudplatform/kubernetes/" style="text-align: justify;">Kubernetes</a> extensions (e.g. <a href="https://github.com/openshift/origin/">Openshift</a>) the client uses the notion of the <a href="https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/Extension.java">Extension</a> and the <a href="https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/ExtensionAdapter.java">Adapter</a>. The idea is pretty simple: an extension client extends the <a href="https://raw.githubusercontent.com/fabric8io/kubernetes-client/master/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/DefaultKubernetesClient.java">default client</a> and implements the <a href="https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/Extension.java">Extension</a>. Each client instance can be adapted to the <a href="https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/Extension.java">Extension</a> as long as an <a href="https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/ExtensionAdapter.java">Adapter</a> can be found via Java's <a href="http://docs.oracle.com/javase/7/docs/api/java/util/ServiceLoader.html">ServiceLoader</a> <i>(forgive me father). </i></div>
<div>
<i><br /></i></div>
<div>
Here's an example of how to adapt any instance of the client to an instance of the <a href="https://github.com/fabric8io/kubernetes-client/blob/master/openshift-client/src/main/java/io/fabric8/openshift/client/OpenShiftClient.java">OpenshiftClient</a>:</div>
<div>
</div>
<br />
<script src="https://gist.github.com/2730e14b3d30da53853a.js"></script>
The code above will work only if /oapi exists in the list of root paths returned by the Kubernetes client (i.e. the client points to an OpenShift installation). If not, it will throw an IllegalArgumentException.
<b><br /></b>
<br />
<br />
In case you are writing code that is bound to Openshift, you can always directly instantiate the <a href="https://raw.githubusercontent.com/fabric8io/kubernetes-client/master/openshift-client/src/main/java/io/fabric8/openshift/client/DefaultOpenshiftClient.java">default openshift client</a>.<br />
<br />
<script src="https://gist.github.com/df7197b96d17e87bbb5b.js"></script>
<b>Testing and Mocking</b><br />
<br />
<div style="text-align: justify;">
Mocking a client that talks to an external system is a pretty common case. When the client is flat <i>(doesn't support method chaining)</i>, mocking is trivial and there are tons of frameworks out there that can be used for the job. When using a DSL though, things get more complex and require a lot of boilerplate code to wire the pieces together. If the reason is not obvious, let's just say that with mocks you define the behaviour of the mock per method invocation. DSLs tend to have way more methods (with fewer arguments) compared to the equivalent flat objects. That alone increases the work needed to define the behaviour. Moreover, those methods are chained together by returning intermediate objects, which means that these need to be mocked too, which further increases both the workload and the complexity. </div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
To remove all the boilerplate and make mocking the client pretty trivial, we combined the DSL of the client with the DSL of a mocking framework: <a href="https://github.com/easymock/easymock">EasyMock</a>. This means that the entry point to this DSL is the Kubernetes client DSL itself, but the terminal methods have been modified so that they return "Expectation Setters". An example should make this easier to comprehend. </div>
<br />
<br /></div>
<script src="https://gist.github.com/3dbb1b4d0f3def21ef66.js"></script>
The mocking framework can be easily combined with other <a href="https://github.com/fabric8io/fabric8">Fabric8</a> components, like the <a href="https://github.com/fabric8io/fabric8/tree/master/components/fabric8-cdi">CDI extension</a>. You just have to create a @Produces method that returns the mock.<br />
<br />
Enjoy!Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com0tag:blogger.com,1999:blog-1786615818482917324.post-74848786226111306172015-06-17T06:13:00.000-07:002015-06-17T06:13:28.247-07:00Using Camel, CDI inside Kubernetes with Fabric8<b>Prologue</b><br />
<br />
I recently blogged about <a href="http://iocanel.blogspot.gr/2015/06/injecting-kubernetes-services-in-cdi.html">Injecting Kubernetes Services with CDI</a>. In this post I am going to take things one step further and bring <a href="http://camel.apache.org/">Apache Camel</a> into the picture. So, I am going to use <a href="http://camel.apache.org/cdi.html">Camel's CDI support</a> to wire my components and routes, along with <a href="http://fabric8.io/">Fabric8</a>'s CDI extension to automatically inject <a href="https://github.com/googlecloudplatform/kubernetes">Kubernetes</a> services into my components.<br />
<br />
I am going to reuse stuff from my previous post <i>(so give it a read if you haven't already)</i> to build a standalone Camel CDI application that is going to expose the contents of a database via http <i>(a simple http to jdbc and back again)</i>. Everything will run in <a href="https://github.com/docker/docker">Docker</a> and orchestration will be done by <a href="https://github.com/googlecloudplatform/kubernetes">Kubernetes</a>.<br />
<br />
So, first things first: how Camel and CDI work together...<br />
<br />
<b>The camel cdi registry</b><br />
<b><br /></b>
<a href="http://camel.apache.org/">Apache Camel</a> uses the notion of a <a href="http://camel.apache.org/registry.html">registry</a>. It uses the registry to look up objects that are needed by the routes. Those lookups may be by type or by name.<br />
<br />
The most common use of the registry is when an <a href="http://camel.apache.org/uris.html">endpoint uri</a> is processed: Camel will parse the scheme and look up the appropriate component in the registry <b>by name</b>. Other cases involve passing bean references to endpoints <b>by name</b>, and so on...<br />
<br />
In other words, <a href="http://camel.apache.org/">Apache Camel</a> may perform lookups on the bean registry at runtime.<br />
<br />
Any extension that needs to play nicely with <a href="http://camel.apache.org/">Apache Camel</a> needs to provide beans with predictable names.<br />
<br />
<b>The @Alias annotation</b><br />
<br />
<a href="http://fabric8.io/">Fabric8</a>'s CDI extension, for any given service, may register more than one bean <i>(one per service, per type, per protocol ...)</i>. So, it's <b><u>impossible</u></b> to have service beans named after the service. Also, the user shouldn't have to memorise the naming conventions that are used internally...<br />
<br />
<i><u>"So, how does Fabric8 play with frameworks that rely on 'by name' lookups?"</u></i><br />
<i><u><br /></u></i>
<a href="http://fabric8.io/">Fabric8</a> provides the @<b>Alias</b> annotation which allows the developer to explicitly specify the bean name of the injected service. Here's an example:<br />
<br />
<script src="https://gist.github.com/544e13145037aece4b1e.js"></script>
<u>"What happens here?"</u><br />
The <a href="http://fabric8.io/">Fabric8</a> CDI extension will receive an event that there is an injection point of type String, with 2 qualifiers:<br />
<br />
<ol>
<li><b>ServiceName</b> with value "<u>mysql</u>".</li>
<li><b>Alias</b> with value "<u>mysqldb</u>".</li>
</ol>
<div>
So, when it creates beans and producers for that service, it will use "mysqldb" as the name. This is what gives control over the <a href="http://fabric8.io/">Fabric8</a> managed beans and makes name lookups possible.</div>
<br />
<b>Using @Factory to create or configure Camel components or endpoints</b><br />
In my previous post, I went through some examples of how you could use <a href="http://fabric8.io/">Fabric8</a>'s @<b>Factory</b> annotation in order to create jdbc connections. Now, I am going to create a factory for a jdbc datasource, which is then going to be added to the <a href="http://camel.apache.org/">Apache Camel</a> <a href="https://github.com/apache/camel/blob/master/components/camel-cdi/src/main/java/org/apache/camel/cdi/CdiBeanRegistry.java">Cdi Bean Registry</a>.<br />
<br />
<script src="https://gist.github.com/678a440e343225be71df.js"></script>
Now if we wanted to refer to this datasource from an <a href="http://camel.apache.org/">Apache Camel</a> endpoint, we would have to specify the "<b>name</b>" of the datasource in the endpoint uri. For example "<b><u>jdbc:customersds</u></b>", where customersds is the name of the datasource.<br />
<br />
<i><u>"But, how can I name the fabric8 managed datasource?"</u></i><br />
<i><u><br /></u></i>
This is how the @Alias saves the day:<br />
<br />
<script src="https://gist.github.com/4d8cb3f91a81fdc16f4d.js"></script>
<br />
This is a typical RouteBuilder for a CDI-based Camel application. What is special about it is that we inject a DataSource named "customersds".<br />
<br />
<u>"Who provides the DataSource?"</u><br />
<u><br /></u>
<u><b>Short answer</b>:</u> <a href="http://fabric8.io/">Fabric8</a>.<br />
<br />
<u><b>Not so short answer</b>: </u>The @<b>ServiceName</b>("mysql") annotation tells <a href="http://fabric8.io/">Fabric8</a> that the DataSource refers to the "mysql" <a href="https://github.com/googlecloudplatform/kubernetes">Kubernetes</a> service. <a href="http://fabric8.io/">Fabric8</a> will obtain the url of that service for us. Since the type of the field is neither String nor URL but DataSource, <a href="http://fabric8.io/">Fabric8</a> will look up @<b>Factory</b> methods that are capable of converting a String to a DataSource. In our case it will find the <u><a href="https://gist.github.com/iocanel/678a440e343225be71df">DataSourceFactory</a></u> class, which does exactly that. As if this was not awesome enough, the <u><a href="https://gist.github.com/iocanel/678a440e343225be71df">DataSourceFactory</a></u> also accepts a @<b>Configuration</b> <a href="https://gist.github.com/iocanel/c0187b2bf15df289fd28">MysqlConfiguration</a>, so that we can specify things like the database name, credentials etc. (see my previous post).<br />
<br />
<b>Configuring the DataSource</b><br />
Before I start explaining how we can configure the DataSource, let me take one step back and recall <a href="https://gist.github.com/iocanel/c0187b2bf15df289fd28">MysqlConfiguration</a> from my previous post:
<script src="https://gist.github.com/c0187b2bf15df289fd28.js"></script>
<br />
As I mentioned in my previous post, we can use environment variables to pass configuration to our app. Remember, this app is intended to live inside a Docker container....
<br />
<br />
<a href="https://gist.github.com/iocanel/c0187b2bf15df289fd28">MysqlConfiguration</a> contains 3 fields:
<br />
<br />
<ol>
<li>Field<b> username</b> for environment variable <b>USERNAME</b></li>
<li>Field<b> password</b> for environment variable <b>PASSWORD</b></li>
<li>Field<b> databaseName</b> for environment variable <b>DATABASE_NAME</b></li>
</ol>
<div>
So we need 3 environment variables, one for each field. Then our <a href="https://gist.github.com/iocanel/678a440e343225be71df" style="text-decoration: underline;">DataSourceFactory</a> will be passed an instance of <a href="https://gist.github.com/iocanel/c0187b2bf15df289fd28">MysqlConfiguration</a> with whatever values can be retrieved from the environment, so that it can create the actual DataSource.</div>
<div>
<br /></div>
<div>
<u> "But how could I reuse<a href="https://gist.github.com/iocanel/c0187b2bf15df289fd28"> MysqlConfiguration</a> to configure multiple different services ?"</u></div>
<br />
<br />
So, the idea is that a @<b>Factory</b> and a @<b>Configuration </b>can be reusable. After all, there's no need to have factories and model classes bound to the underlying services, right?<br />
<br />
<a href="http://fabric8.io/">Fabric8</a> helps by using the service name as a prefix for the environment variables. It does that at runtime and it works like this:<br />
<br />
<br />
<ol>
<li>The Fabric8 extension discovers an Injection Point annotated with @<b>ServiceName</b></li>
<li>It will check the target type and look up a @<b>Factory </b>if needed.</li>
<li>The @<b>Factory </b>accepts the service URL and an instance of <a href="https://gist.github.com/iocanel/c0187b2bf15df289fd28">MysqlConfiguration</a></li>
<li><a href="https://gist.github.com/iocanel/c0187b2bf15df289fd28">MysqlConfiguration</a> will be instantiated using the value of @<b>ServiceName </b>as an environment variable prefix.</li>
</ol>
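The prefixing convention from step 4 can be sketched like this <i>(my own illustration of the convention described above, not Fabric8's actual implementation; the method name is made up)</i>:

```java
// Sketch of the service-name-as-prefix convention for environment variables.
// e.g. service "mysql-target" + field env name "USERNAME" -> "MYSQL_TARGET_USERNAME"
public class EnvPrefix {

    static String envVarFor(String serviceName, String fieldEnvName) {
        // Service names are lower-case and dash-separated; env vars are
        // upper-case and underscore-separated.
        return serviceName.toUpperCase().replace('-', '_') + "_" + fieldEnvName;
    }

    public static void main(String[] args) {
        System.out.println(envVarFor("mysql", "USERNAME"));        // MYSQL_USERNAME
        System.out.println(envVarFor("mysql-target", "PASSWORD")); // MYSQL_TARGET_PASSWORD
    }
}
```

This is what lets the same MysqlConfiguration class configure two different mysql services side by side: each injection point reads its own prefixed set of variables.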
So for our example to work we would need to package our application as a <a href="https://github.com/docker/docker">Docker</a> container and then use the following <a href="https://github.com/googlecloudplatform/kubernetes">Kubernetes</a> configuration:
<br />
<div>
<br /></div>
<script src="https://gist.github.com/8a49be1640fd39f8f973.js"></script>
<br />
<div>
<br /></div>
Now if we need to create an additional DataSource (say for a jdbc to jdbc bridge) inside the same container, we would just have to specify additional environment variables for the additional <a href="https://github.com/googlecloudplatform/kubernetes">Kubernetes</a> service. If the name of that service was "mysql-target", then our <a href="https://github.com/googlecloudplatform/kubernetes">Kubernetes</a> configuration would need to look like:<br />
<br />
<script src="https://gist.github.com/09e249649ed3dd61c195.js"></script>
... and we could use that by adding to our project an injection point with the qualifier @<b>ServiceName</b>("mysql-target").
<br />
<br />
You can find similar examples inside the <a href="https://github.com/fabric8io/quickstarts">Fabric8 quickstarts</a>, and more specifically the <a href="https://github.com/fabric8io/quickstarts/tree/master/quickstarts/java/camel-cdi-mq">camel-cdi-amq</a> quickstart.<br />
<br />
<b>Stay tuned</b><br />
I hope you enjoyed it. There are going to be more related topics soon (including writing integration tests for Java applications running on <a href="https://github.com/googlecloudplatform/kubernetes">Kubernetes</a>).<br />
<div>
<b><br /></b></div>
<br />
<br />
<br />
<br />Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com0tag:blogger.com,1999:blog-1786615818482917324.post-66990889616717659632015-06-15T07:54:00.000-07:002015-06-15T08:16:25.054-07:00Injecting Kubernetes Services in CDI managed beans using Fabric8<b>Prologue</b><br />
<br />
The thing I love the most in <a href="https://github.com/googlecloudplatform/kubernetes" target="_blank">Kubernetes</a> is the way services are discovered. Why?<br />
<br />
Mostly because the user code doesn't have to deal with registering, looking up services and also because there are no networking surprises <i>(if you've ever tried a registry based approach, you'll know what I am talking about)</i>.<br />
<br />
This post is going to cover how you can use <a href="http://fabric8.io/" target="_blank">Fabric8</a> in order to inject <a href="https://github.com/googlecloudplatform/kubernetes" target="_blank">Kubernetes</a> services in Java using CDI.<br />
<br />
<b>Kubernetes Services</b><br />
<br />
Covering in-depth <a href="https://github.com/googlecloudplatform/kubernetes" target="_blank">Kubernetes</a> <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md" target="_blank">Services</a> is beyond the scope of this post, but I'll try to give a very brief overview of them.<br />
<br />
In Kubernetes, applications are packaged as <a href="https://github.com/docker/docker" target="_blank">Docker</a> containers. Usually, it's a nice idea to split the application into individual pieces, so you will have multiple <a href="https://github.com/docker/docker" target="_blank">Docker</a> containers that most probably need to communicate with each other. Some containers may be collocated by placing them in the same <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md" target="_blank">Pod</a>, while others may be remote and need a way to talk to each other. This is where <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md" target="_blank">Services</a> get into the picture.<br />
<br />
A container may bind to one or more ports providing one or more "services" to other containers. For example:<br />
<ul>
<li>A database server.</li>
<li>A message broker.</li>
<li>A rest service.</li>
</ul>
<br />
The question is <u>"<i>How other containers know how to access those services?</i>"</u><br />
<br />
So, <a href="https://github.com/googlecloudplatform/kubernetes" target="_blank">Kubernetes</a> allows you to "label" each <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md" target="_blank">Pod</a> and use those labels to "select" <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md" target="_blank">Pods</a> that provide a logical service. Those labels are simple key/value pairs.<br />
<br />
Here's an example of how we can "label" a pod by specifying a label with key <b>name </b>and value <b>mysql</b>.
<br />
<script src="https://gist.github.com/390b4c4c76ccc68f9a42.js"></script>
And here's an example of how we can define a <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md" target="_blank">Service</a> that exposes the mysql port. The service selector uses the key/value pair we specified above to define which pod(s) provide the service.<br />
<br />
<script src="https://gist.github.com/7cf15dd9454593d3d0f9.js"></script>
The <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md" target="_blank">Service</a> information is passed to each container as environment variables by <a href="https://github.com/googlecloudplatform/kubernetes" target="_blank">Kubernetes</a>. For each container that gets created, <a href="https://github.com/googlecloudplatform/kubernetes" target="_blank">Kubernetes</a> will make sure that the appropriate environment variables are set for <b>ALL</b> services visible to the container.<br />
<br />
For the mysql service of the example above, the environment variables will be:<br />
<ul>
<li>MYSQL_SERVICE_HOST</li>
<li>MYSQL_SERVICE_PORT</li>
</ul>
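As an illustration, a container could assemble a service address from those variables like this <i>(a hand-rolled sketch; Fabric8's injection, covered below, does the equivalent for you — the method and class names are made up)</i>:

```java
// Builds the <SERVICE>_SERVICE_HOST / <SERVICE>_SERVICE_PORT variable names
// from a service name and reads them from the given environment map.
public class ServiceEnv {

    static String serviceUrl(java.util.Map<String, String> env, String serviceName, String protocol) {
        String prefix = serviceName.toUpperCase().replace('-', '_');
        String host = env.get(prefix + "_SERVICE_HOST");
        String port = env.get(prefix + "_SERVICE_PORT");
        return protocol + "://" + host + ":" + port;
    }

    public static void main(String[] args) {
        // In a real container you would pass System.getenv() instead.
        java.util.Map<String, String> env = new java.util.HashMap<>();
        env.put("MYSQL_SERVICE_HOST", "10.0.0.42");
        env.put("MYSQL_SERVICE_PORT", "3306");
        System.out.println(serviceUrl(env, "mysql", "tcp")); // tcp://10.0.0.42:3306
    }
}
```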
<br />
<a href="http://fabric8.io/" target="_blank">Fabric8</a> provides a CDI extension which can be used in order to simplify development of <a href="https://github.com/googlecloudplatform/kubernetes" target="_blank">Kubernetes</a> apps, by providing injection of <a href="https://github.com/googlecloudplatform/kubernetes" target="_blank">Kubernetes</a> resources.<br />
<br />
<b>Getting started with the Fabric8 CDI extension</b><br />
To use the CDI extension, the first step is to add the dependency to the project.
<script src="https://gist.github.com/80c46b34601837c1730f.js"></script>
The next step is to decide which service you want to inject into which field, and then add a @ServiceName annotation to it.<br />
<script src="https://gist.github.com/39892d0a120fd846e5cc.js"></script>
In the example above we have a class that needs a JDBC connection to a mysql database that is available via <a href="https://github.com/googlecloudplatform/kubernetes" target="_blank">Kubernetes</a> <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md" target="_blank">Services</a>.<br />
<br />
The injected serviceUrl will have the form [tcp|udp]://&lt;host&gt;:&lt;port&gt;. This is a perfectly fine url, but it's not a proper jdbc url, so we need a utility to convert it. This is the purpose of <span style="background-color: white; color: #333333; font-family: Consolas, 'Liberation Mono', Menlo, Courier, monospace; font-size: 12px; line-height: 16px; white-space: pre;">toJdbcUrl</span>.<br />
<br />
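Such a conversion utility could take a shape like the following <i>(my own sketch, not the actual helper from the post's example code)</i>:

```java
// Converts a Kubernetes-style service url such as "tcp://10.0.0.42:3306"
// into a jdbc url such as "jdbc:mysql://10.0.0.42:3306/mydb".
public class JdbcUrls {

    static String toJdbcUrl(String serviceUrl, String dialect, String databaseName) {
        // Strip the "tcp://" (or "udp://") prefix, keep host:port.
        String hostAndPort = serviceUrl.substring(serviceUrl.indexOf("://") + 3);
        return "jdbc:" + dialect + "://" + hostAndPort + "/" + databaseName;
    }

    public static void main(String[] args) {
        System.out.println(toJdbcUrl("tcp://10.0.0.42:3306", "mysql", "mydb"));
    }
}
```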
Even though it's possible to specify the protocol when defining the service, one can only specify core transport protocols such as TCP or UDP, and not something like http, jdbc etc.<br />
<br />
<b>The @Protocol annotation</b><br />
<br />
Having to find and replace the "tcp" or "udp" values with the application protocol is smelly and it gets old really fast. To remove that boilerplate, <a href="http://fabric8.io/" target="_blank">Fabric8</a> provides the @Protocol annotation. This annotation allows you to select the application protocol that you want in your injected service url. In the previous example that is "jdbc:mysql". So the code could look like:<br />
<br />
<script src="https://gist.github.com/64f696edf56a0ec5e173.js"></script>
Undoubtedly, this is much cleaner. Still, it doesn't include information about the actual database or any parameters that are usually passed as part of the JDBC url, so there is room for improvement here.<br />
<br />
One would expect that, in the same spirit, @Path or @Parameter annotations would be available, but both of these belong to configuration data and are not a good fit for hardcoding into code. Moreover, the CDI extension of Fabric8 doesn't aspire to become a URL transformation framework. So instead, it takes things up a notch by allowing you to directly instantiate the client for accessing any given service and inject it straight into your code.<br />
<br />
<b>Creating clients for Services using the @Factory annotation</b><br />
<br />
In the previous example we saw how we could obtain the url of a service and create a JDBC connection with it. Any project that wants a JDBC connection can copy that snippet and it will work great, as long as the user remembers to set the actual database name.<br />
<br />
Wouldn't it be great if, instead of copying and pasting that snippet, one could componentise it and reuse it? Here's where the @Factory annotation kicks in. You can annotate with @Factory any method that accepts a service url as an argument and returns an object created using the URL (e.g. a client to a service). So for the previous example we could have a MysqlConnectionFactory:<br />
<script src="https://gist.github.com/f8da976b456baaecac55.js"></script>
Then instead of injecting the URL one could directly inject the connection, as shown below.
<script src="https://gist.github.com/e7da3406df2d6f9dc8a0.js"></script>
<br />
What happens here?
<br />
<br />
When the CDI application starts, the Fabric8 extension will receive events about all annotated methods. It will track all available factories, so for any non-String injection point annotated with @ServiceName, it will create a Producer that under the hood uses the matching @Factory.
<br />
<br />
In the example above, first the MysqlConnectionFactory will get registered, and when a Connection injection point with the @ServiceName qualifier gets detected, a Producer that delegates to the MysqlConnectionFactory will be created <i>(all qualifiers will be respected)</i>.<br />
<br />
This is awesome, but it is also <u>simplistic</u>. Why?<br />
Because such a factory rarely requires only a url to the service. In most cases other configuration parameters are required, like:<br />
<br />
<ul>
<li>Authentication information</li>
<li>Connection timeouts</li>
<li>more ....</li>
</ul>
<br />
<b>Using @Factory with @Configuration</b><br />
<br />
In the next section we are going to see factories that use configuration data. I am going to use the mysql jdbc example and add support for specifying configurable credentials. But before that, I am going to ask a rhetorical question:<br />
<br />
<u>"How can you configure a containerised application?"</u><br />
<br />
The shortest possible answer is "Using Environment Variables".<br />
<br />
So in this example I'll assume that the credentials are passed to the container that needs to access mysql using the following environment variables:<br />
<ul>
<li>MYSQL_USERNAME</li>
<li>MYSQL_PASSWORD</li>
</ul>
Now we need to see how our @Factory can use those.<br />
<br />
If you've wanted to use environment variables inside CDI in the past, chances are that you've used <a href="https://deltaspike.apache.org/" target="_blank">Apache DeltaSpike</a>. Among other things, this project provides the @ConfigProperty annotation, which allows you to inject an environment variable into a CDI bean <i>(it actually does more than that)</i>.<br />
<br />
<br />
<script src="https://gist.github.com/c0187b2bf15df289fd28.js"></script>
This bean could be combined with the @Factory method, so that we can pass configuration to the factory itself.<br />
<br />
But what if we had <u>multiple</u> database servers, configured with a different set of credentials, or multiple databases? In this case we could use the service name as a prefix, and let <a href="http://fabric8.io/" target="_blank">Fabric8</a> figure out which environment variables it should look up for each @Configuration instance.<br />
<br />
<script src="https://gist.github.com/7a453fc44c7d7ffbfffe.js"></script>
<br />
Now, we have a reusable component that can be used with any mysql database running inside kubernetes and is fully configurable.<br />
<br />
There are additional features in the <a href="http://fabric8.io/" target="_blank">Fabric8</a> CDI extension, but since this post is already too long, they will be covered in future posts.<br />
<br />
Stay tuned.<br />
<br />
<br />
<br />
<br />Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com6tag:blogger.com,1999:blog-1786615818482917324.post-27662158250990128582014-10-27T09:01:00.000-07:002014-10-27T09:02:58.846-07:00ZooKeeper on KubernetesThe last couple of weeks I've been playing around with <a href="https://github.com/docker/docker" target="_blank">docker</a> and <a href="https://github.com/GoogleCloudPlatform/kubernetes" target="_blank">kubernetes</a>. If you are not familiar with kubernetes, let's just say for now that it's an open source container cluster management implementation, which I find really, really awesome.<br />
<br />
One of the first things I wanted to try out was running an <a href="http://zookeeper.apache.org/" target="_blank">Apache ZooKeeper</a> ensemble inside kubernetes and I thought that it would be nice to share the experience.<br />
<br />
For my experiments I used Docker v. 1.3.0 and <a href="https://github.com/openshift/origin" target="_blank">Openshift V3</a>, which I built from source and which includes Kubernetes.<br />
<h2>
ZooKeeper on Docker</h2>
<div>
Managing a ZooKeeper ensemble is definitely not a trivial task. You usually need to configure an odd number of servers and all of the servers need to be aware of each other. This is a PITA on its own, but it gets even more painful when you are working with something as static as docker images. The main difficulty could be expressed as:</div>
<div>
<br /></div>
<div>
"<b><i>How can you create multiple containers out of the same image and have them point to each other?</i></b>"</div>
<div>
<br /></div>
<div>
One approach would be to use docker <a href="https://docs.docker.com/userguide/dockervolumes/" target="_blank">volumes</a> and provide the configuration externally. This would mean that you create the configuration for each container, store it somewhere on the docker host and then pass the configuration to each container as a volume at creation time.</div>
<div>
<br /></div>
<div>
I've never tried that myself, so I can't tell if it's a good or a bad practice. I can see some benefits, but I can also see that this is something I am not really excited about. It could look like this:<br />
<br />
<script src="https://gist.github.com/0322acb0a16aaf63fdab.js">
</script>
</div>
<div>
<br /></div>
<div>
Another approach would be to pass all the required information as environment variables to the container at creation time, and then create a wrapper script which reads the environment variables, modifies the configuration files accordingly and launches ZooKeeper.</div>
<div>
<br /></div>
<div>
This is definitely easier to use, but it's not flexible enough to allow other types of tuning without rebuilding the image itself.</div>
<div>
<br /></div>
<div>
Last but not least, one could combine the two approaches and do something like:</div>
<div>
<ul>
<li>Make it possible to provide the base configuration externally using volumes.</li>
<li>Use env and scripting to just configure the ensemble.</li>
</ul>
<div>
There are plenty of images out there that take one approach or the other. I am more fond of the environment variables approach, and since I needed something that would follow some of the kubernetes naming conventions, I decided to hack together an image of my own based on environment variables.<br />
<br />
<h2>
Creating a custom image for ZooKeeper</h2>
</div>
</div>
<div>
I will just focus on the configuration that is required for the ensemble. To configure a ZooKeeper ensemble, each server must be assigned a numeric id, and each server's configuration must contain one entry per ensemble member with that member's ip, peer port and election port.</div>
<div>
<br /></div>
<div>
The server id is added in a file called myid under the dataDir. The rest of the configuration looks like:</div>
<div>
<br />
<script src="https://gist.github.com/417999972f7eb4427e7a.js">
</script>
Note that if the server id is X, the server.X entry needs to contain the bind ip and ports, not the connection ip and ports.<br />
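To make the note above concrete, here is what the ensemble entries might look like from the point of view of the server with id 1 (the hostnames are hypothetical, and 0.0.0.0 is a common choice for the bind address):

```
# Ensemble entries as seen by the server whose myid is 1.
# Its own entry uses the bind address; the others use connection addresses.
server.1=0.0.0.0:2888:3888
server.2=zookeeper-2.example.com:2888:3888
server.3=zookeeper-3.example.com:2888:3888
```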
<br />
So what we actually need to pass to the container as environment variables are the following:<br />
<br />
<br />
<ol>
<li>The server id.</li>
<li>For each server in the ensemble:</li>
<ol>
<li>The hostname or ip</li>
<li>The peer port</li>
<li>The election port</li>
</ol>
</ol>
<div>
<br /></div>
<div>
If these are set, then the script that updates the configuration could look like:</div>
<div>
<script src="https://gist.github.com/05ab0549b50f612058ed.js">
</script>
</div>
For simplicity, the functions that read the keys and values from the environment are excluded.
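Just to give an idea, such a helper could be sketched roughly like this (the function name and the exact variable handling are hypothetical; the real image's helpers differ):

```shell
# Hypothetical helper: iterate over ZK_PEER_<N>_SERVICE_HOST style
# variables and emit one server.<N> line per ensemble member.
emit_server_entries() {
  index=1
  while true; do
    # Stop at the first index for which no host variable is set.
    host=$(printenv "ZK_PEER_${index}_SERVICE_HOST") || break
    peer_port=$(printenv "ZK_PEER_${index}_SERVICE_PORT")
    election_port=$(printenv "ZK_ELECTION_${index}_SERVICE_PORT")
    echo "server.${index}=${host}:${peer_port}:${election_port}"
    index=$((index + 1))
  done
}

# Usage: emit_server_entries >> "$ZK_HOME/conf/zoo.cfg"
```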
<br />
<div>
<br />
The complete image and helper scripts for launching ZooKeeper ensembles of variable size can be found in the <a href="https://github.com/fabric8io/fabric8-zookeeper-docker" target="_blank">fabric8io</a> repository.</div>
<br />
<br />
<h2>
ZooKeeper on Kubernetes</h2>
<div>
The docker image above can be used directly with docker, provided that you take care of the environment variables. Now I am going to describe how this image can be used with kubernetes. But first a little rambling...</div>
<div>
<br /></div>
<div>
What I really like about using kubernetes with ZooKeeper is that kubernetes will recreate the container if it dies or its health check fails. For ZooKeeper this means that if a container that hosts an ensemble server dies, it will get replaced by a new one. This guarantees that there will always be a quorum of ZooKeeper servers.</div>
<div>
<br /></div>
<div>
I also like that you don't need to worry about the connection string that the clients will use when containers come and go. You can use kubernetes services to load balance across all the available servers, and you can even expose that outside of kubernetes.</div>
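For example, a client container could build its connection string from the environment variables that kubernetes injects for such a service. This is only an illustration: the service name (zk-client) and addresses are hypothetical, and the values are hard-coded here to stand in for what kubernetes would inject.

```shell
# Hypothetical: kubernetes would inject these for a service named
# "zk-client"; hard-coded here for illustration.
ZK_CLIENT_SERVICE_HOST="10.0.0.42"
ZK_CLIENT_SERVICE_PORT="2181"

# A client can use the service address as its ZooKeeper connection string;
# kubernetes load balances across all ensemble members behind the service.
ZK_CONNECT="${ZK_CLIENT_SERVICE_HOST}:${ZK_CLIENT_SERVICE_PORT}"
echo "$ZK_CONNECT"   # 10.0.0.42:2181
```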
<div>
<br /></div>
<div>
<h2>
Creating a Kubernetes config for ZooKeeper</h2>
</div>
<div>
I'll try to explain how you can create a 3-server ZooKeeper ensemble in Kubernetes. </div>
<div>
<br /></div>
<div>
What we need are three docker containers, all running ZooKeeper with the right environment variables:</div>
<div>
<script src="https://gist.github.com/6234b8043bc0d76d82a5.js">
</script>
The env needs to specify all the parameters discussed previously.<br />
<br />
So, along with ZK_SERVER_ID, we need to add the following:<br />
<br />
<ul>
<li>ZK_PEER_1_SERVICE_HOST</li>
<li>ZK_PEER_1_SERVICE_PORT</li>
<li>ZK_ELECTION_1_SERVICE_PORT</li>
<li>ZK_PEER_2_SERVICE_HOST</li>
<li>ZK_PEER_2_SERVICE_PORT</li>
<li>ZK_ELECTION_2_SERVICE_PORT</li>
<li>ZK_PEER_3_SERVICE_HOST</li>
<li>ZK_PEER_3_SERVICE_PORT</li>
<li>ZK_ELECTION_3_SERVICE_PORT</li>
</ul>
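The variable names above are not arbitrary: kubernetes derives them from the service name by upper-casing it, replacing dashes with underscores and appending _SERVICE_HOST / _SERVICE_PORT. A quick sketch of that mangling (the service names used here are hypothetical):

```shell
# Sketch of the kubernetes service-name-to-env-var mangling:
# a service named "zk-peer-1" yields ZK_PEER_1_SERVICE_HOST etc.
service_env_name() {
  echo "$1" | tr '[:lower:]' '[:upper:]' | tr '-' '_'
}

service_env_name "zk-peer-1"       # prints ZK_PEER_1
service_env_name "zk-election-2"   # prints ZK_ELECTION_2
```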
</div>
<div>
An alternative approach, instead of adding all this configuration manually, is to expose the peer and election ports as kubernetes services. I tend to favor the latter approach, as it can make things simpler when working with multiple hosts. It's also a nice exercise for learning kubernetes.<br />
<br />
So how do we configure those services?<br />
<br />
To configure them we need to know:<br />
<br />
<ul>
<li>the name of the port</li>
<li>the kubernetes <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md" target="_blank">pod</a> that provides the service</li>
</ul>
<div>
The name of the port is already defined in the previous snippet, so we just need to find out how to select the pod. For this use case it makes sense to have a different pod for each zookeeper server container. So we just need to have a label for each pod that designates that it is a zookeeper server pod, and also a label that designates the zookeeper server id.</div>
<script src="https://gist.github.com/bf0caf96914119e46c0d.js">
</script>
Something like the above could work. Now we are ready to define the service. I will just show how we can expose the peer port of the server with id 1 as a service; the rest can be done in a similar fashion:<br />
<br />
<script src="https://gist.github.com/60170ee21fa8554a78bd.js">
</script>
The basic idea is that in the service definition, you create a selector which can be used to query/filter pods. Then you define the name of the port to expose and this is pretty much it. Just to clarify, we need a service definition just like the one above per zookeeper server container. And of course we need to do the same for the election port.<br />
<br />
Finally, we can define another kind of service for the client connection port. This time we are not going to specify the server id in the selector, which means that all 3 servers will be selected. In this case kubernetes will load balance across all ZooKeeper servers. Since ZooKeeper provides a single system image (it doesn't matter which server you are connected to), this is pretty handy.<br />
<br />
<br />
<script src="https://gist.github.com/c3e8b27b0e4fc956b19d.js">
</script>
I hope you found it useful. There is definitely room for improvement so feel free to leave comments.
</div>
</div>
<div>
<br /></div>Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com9tag:blogger.com,1999:blog-1786615818482917324.post-55261795343331254382014-04-25T04:25:00.000-07:002014-04-25T04:26:10.893-07:00Learning Apache Karaf<b>Prologue</b><br />
<br />
During my easter vacation I had some spare time and I decided to take a closer look at <a href="http://www.packtpub.com/learning-apache-karaf/book" target="_blank">Learning Apache Karaf</a> by Jamie Goodyear, Johan Edstrom and Heath Kesler. The team of authors is well known to the <a href="http://karaf.apache.org/" target="_blank">Apache Karaf</a> community, so I had no doubt that it would be worth my while.<br />
<br />
<b>A first glance at "Learning Apache Karaf" </b><br />
<br />
I was expecting a smaller book, maybe something like a "starter", but it is about 100 pages long and covers everything that a user needs in order to get up to speed with Karaf. On top of that, it gets into details and can serve as a reference even for more advanced users.<br />
<br />
<b>Diving deeper into "Learning Apache Karaf"</b><br />
<br />
I found the book really pleasant to read. Even though I am an <a href="http://karaf.apache.org/" target="_blank">Apache Karaf</a> committer myself and pretty familiar with its internals, the book kept my interest and made for a really smooth read. The examples, the diagrams etc. were "<i>straight to the point</i>", and the structure is such that it makes it really easy to find the topic that you are after.<br />
<br />
<b>Overall</b><br />
A really great book, for a really great project. Thumbs up!Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com0tag:blogger.com,1999:blog-1786615818482917324.post-65492573069401945342014-03-24T13:04:00.001-07:002014-03-24T13:05:36.123-07:00DevNation 2014<b>CamelOne + Judcon + Red Hat Connect Developer Exchange = DevNation</b><br />
<b><br /></b>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8vCahuBphjQS0PFvWrVrnckr5f8vAZnxDgWnyZmXD8_n71kOCR0vg-i0C-IpyDM4AkylQG7-ZEvoofWl5FUSfzGsnHndwNXDIIugwC09ojsHjtUOPQS7TeJAqgiM_DzhAad3zAa_7UgE/s1600/devnation_1180x70_banner_together.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8vCahuBphjQS0PFvWrVrnckr5f8vAZnxDgWnyZmXD8_n71kOCR0vg-i0C-IpyDM4AkylQG7-ZEvoofWl5FUSfzGsnHndwNXDIIugwC09ojsHjtUOPQS7TeJAqgiM_DzhAad3zAa_7UgE/s1600/devnation_1180x70_banner_together.png" height="91" width="640" /></a></div>
<b><br /></b>
<b><br /></b>
Starting from 2014 CamelOne will be part of a wider open source conference: <a href="http://www.devnation.org/" target="_blank">DevNation</a>.<br />
<br />
<a href="http://www.devnation.org/" target="_blank">DevNation</a> 2014 will take place in San Fransisco on April 13 - 17.<br />
<br />
<b>Fabric8 @ DevNation</b><br />
<br />
This year I'll be giving two talks at DevNation both related to <a href="http://fabric8.io/#/welcome" target="_blank">Fabric8</a>.<br />
<br />
The first talk is going to be an introduction to Fabric8 and it will aim at people that want to know what Fabric8 is all about, the problems that it tries to solve, its architecture and its awesome features.<br />
<br />
In the second talk we are going to dive deeper into Fabric8 and discuss how you can code, provision and manage your applications with Fabric8 across all the different environments (private and public clouds, PaaS etc.).<br />
<br />
With the first community release of Fabric8 as an individual project right around the corner, it's going to be a fantastic opportunity to learn about all the new stuff included in the release <i>(and of course to have a beer or two)</i>.<br />
<br />
<br />
See you there !!!<br />
<br />
<br />
<br />
<br />Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com0tag:blogger.com,1999:blog-1786615818482917324.post-57434470847561959712013-11-26T02:15:00.001-08:002013-11-26T02:15:46.918-08:00Thoughts on Blueprint and Declarative Services: Dependency injection or Dependency managementI've been using the <a href="http://wiki.osgi.org/wiki/Blueprint">OSGi Blueprint</a> for a couple of years now and I have been happy with it. Blueprint is the obvious choice inside <a href="http://karaf.apache.org/">Apache Karaf</a>, and since it is a solution that has generally worked well, I never needed to look for alternatives.<br />
<br />
Earlier this year I had the chance to watch a presentation by <a href="http://sully6768.blogspot.com/">Scott England-Sullivan</a>, which included a demo of a small project using <a href="http://wiki.osgi.org/wiki/Declarative_Services">OSGi Declarative Services</a> and made me think that I should take a closer look at <a href="http://wiki.osgi.org/wiki/Declarative_Services">OSGi Declarative Services</a>.<br />
<br />
So here are some thoughts about the two different approaches for dependency injection/management in OSGi.<br />
<br />
<b>Blueprint</b><br />
Blueprint is a dependency injection solution for OSGi. It is almost identical to the <a href="http://spring.io/">Spring Framework</a> with extra support for OSGi Services <i>(in fact it was inspired by the Spring Framework)</i>. Its resemblance to Spring makes it dead simple to use, especially if you are already familiar with Spring.<br />
<br />
Blueprint handles the injection of OSGi services for you, taking the availability of the services into consideration. When using services that are considered optional, a proxy for that service will be created and injected. Calls to that service will block until the service becomes available or a timeout occurs.<br />
<br />
<b>Declarative Services</b><br />
Declarative Services is a component model that simplifies the creation of components that publish or consume OSGi services. I don't consider Declarative Services a dependency injection solution; it is more of a component model with dependency management capabilities.<br />
<br />
In Declarative Services you define a component and its dependencies in a declarative way, and the framework will manage the lifecycle of the component based on whether its dependencies are satisfied or not. This means that a component will only get activated when all of its dependencies are satisfied, and will be deactivated whenever a dependency goes away. So it is <b>100%</b> free of proxies, but still guarantees that as long as a component is active, its dependencies will be reachable.<br />
<br />
<b>Proxy-ing vs Cascading</b><br />
One of the main differences between the two approaches in question is that Blueprint uses proxies and Declarative Services uses a cascading approach <i>(activating / deactivating components based on dependency availability)</i>. I tend to prefer cascading over proxying <i>(not only because proxies are out of fashion but ...)</i>, mostly because when using proxies you have no clue about the state/availability of the underlying object. Cascading, on the other hand, seems a better fit for OSGi, as it handles the dynamic nature of the framework better.<br />
<br />
A typical case where proxies cause headaches is when you need to use a reference listener for an optional service. If you need to act on the service object when it goes away (unbinds), it will just fail.<br />
<br />
Another issue with proxies is that they will allow you to publish services whose dependencies are not yet satisfied. Calls to the service may end up hanging, because the proxy is waiting for the unsatisfied dependency to become available. This prevents you from failing fast, and not being able to fail fast can even compromise your entire system<i> (if it's not obvious why, you may want to have a read of the excellent book <a href="http://www.goodreads.com/book/show/1069827.Release_It_">Release It!</a>)</i>.<br />
<br />
<b>An example</b><br />
Let's examine closer the approaches above using a close to real world example. Let's assume that we are building a simple CRUD web application for Items.<br />
<b><br /></b>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQdcMC05IX2zvSIlCl9eewqFoiasK_qChkrdNKZ6AqS3oHjfWV-Gm-Gi4pFyhi2UogF2mzTCM1ki4YwXj3HEzytqubjd7vzZ7AspbkzBN_WRUvOuBzs5eDWxIfbkgVa7Z_P80jeLM_iIk/s1600/diagram.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="96" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQdcMC05IX2zvSIlCl9eewqFoiasK_qChkrdNKZ6AqS3oHjfWV-Gm-Gi4pFyhi2UogF2mzTCM1ki4YwXj3HEzytqubjd7vzZ7AspbkzBN_WRUvOuBzs5eDWxIfbkgVa7Z_P80jeLM_iIk/s640/diagram.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">A simple CRUD application for "Items"</td></tr>
</tbody></table>
<b><br /></b>
The application can be composed of the following parts:<br />
<br />
<ul>
<li>The presentation layer</li>
<li>The item service</li>
<li>The data store</li>
<li>The database</li>
</ul>
<div>
In OSGi the presentation layer could be a servlet registered in the <a href="http://www.osgi.org/javadoc/r4v42/org/osgi/service/http/HttpService.html">Http Service</a>, The Item Service could be an OSGi service that encapsulates the logic and the DataStore could also be an OSGi service that can be used for interacting with the database.</div>
<div>
<br /></div>
<div>
As shown in the diagram above, the web application depends on the item service and the item service depends on the datastore. </div>
<div>
<br /></div>
<div>
<b><u>What happens if the datastore is not configured or available? </u></b><br />
<b><u><br /></u></b>
<b>Proxying:</b><br />
<b><br /></b>
With the proxying approach the Item Service will be injected with a proxy to the Data Store that blocks if the Data Store is not available. The Item Service will still be registered in the Service Registry, and the Web Application will try to use it even if there is no Data Store. Requests to the Web Application may end up blocked waiting for the Data Store <i>(which is not ideal)</i>.<br />
<br />
<b>Cascading:</b><br />
<br />
With the cascading approach the Item Service will only be registered when a Data Store is present. In the same manner the Web Application will only be available if the Item Service is available. So we have a guarantee, that the Web Application will be available if all dependencies in the hierarchy are satisfied. Requests made while there are unsatisfied dependencies will result in an HTTP 404 error which gives a "fail fast" behaviour. Note that whenever the Data Store becomes available both the Item Service and Web Application will automatically detect the change and also become available themselves.<br />
<br />
<br />
<b>Final Thoughts</b><br />
<br />
Declarative Services is a great tool for managing the dependencies of your components. The cascading approach can be a valuable tool for building robust dynamic, modular applications.<br />
<br />
Blueprint is really easy to use and will feel natural to users familiar with Spring. The proxying behaviour can also be useful in some cases.<br />
<br />
So when to use one or the other?<br />
<br />
I tend to think that Blueprint shines in cases where you are constructing components that either have no service dependencies or are not themselves exposed as services. Another case is when "waiting for the service" is more natural. Such examples can be:<br />
<br />
<ol>
<li>shell commands <i>(usually they are not used as services by other components)</i></li>
<li>camel routes <i>(waiting for dependencies is desired)</i></li>
</ol>
<div>
In cases where there are long dependency chains, components are highly dynamic etc., I think that Declarative Services is a better choice.</div>
<div>
<br /></div>
<div>
Regardless of your choice, you need to know the strengths and weaknesses of your tools!</div>
<div>
<br /></div>
<div>
I hope that this was helpful !!!</div>
</div>
<br />
<br />
<br />Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com6tag:blogger.com,1999:blog-1786615818482917324.post-91889012003156528142013-07-05T08:06:00.000-07:002013-07-07T14:42:47.385-07:00Hawtio & Apache Jclouds<b>Introduction</b><br />
<b><br /></b>I've spent some time lately working on an <a href="http://jclouds.incubator.apache.org/">Apache Jclouds</a> plugin for <a href="http://hawt.io/">Hawtio</a>. While there is still a lot of pending work, I couldn't hold my excitement and wanted to share...<br />
<br />
<b>What is this Hawtio anyway?</b><br />
<br />
Whenever a cool open source project is brought to my attention, I usually subscribe to the mailing lists so that I can get a better feel for the project's progress, direction etc. Sooner or later there is always an email with the topic "<b>[Discuss] - Webconsole for our cool project</b>". <br />
<br />
Such emails quite often end up in a long discussion about what's the best web framework to use, what should be the target platform and how the console could be integrated with upstream/downstream projects.<br />
<br />
A very good example is <a href="http://servicemix.apache.org/">Apache ServiceMix</a>. ServiceMix runs on <a href="http://karaf.apache.org/">Apache Karaf</a>, which runs on <a href="http://felix.apache.org/">Apache Felix</a> and also embeds <a href="http://activemq.apache.org/">Apache ActiveMQ</a> and every single of those projects has its own webconsole.<br />
<br />
The number of consoles grows so big that users have to hire a personal assistant just to keep track of the URLs of each web console. Well, maybe that's an overstatement, but you get the idea. And if we also take into consideration that some projects are bound to a specific runtime while others are not, then we have a perfect webconsole storm.<br />
<br />
<a href="http://hawt.io/">Hawtio</a> solves this problem, by providing a lightweight HTML5 modular web console with tons of plugins. <a href="http://hawt.io/">Hawtio</a> can run everywhere as its not bound to a specific runtime, and its modular, which means that its pretty easy to write and hook your own plugins.<br />
<br />
<b>Writing plugins for Hawtio</b><br />
<br />
<a href="http://hawt.io/">Hawtio</a> is a full client framework. Whenever it requires communication with the backend it can use rest. To make things easier it also use <a href="http://www.jolokia.org/">Jolokia</a> that exposes JMX with JSON over HTTP. This makes it pretty easy to hook frameworks even if they don't already provide a rest interface, but expose things over JMX.<br />
<br />
Once the communication with the backend is sorted, it's pretty easy to create a plugin. <a href="http://hawt.io/">Hawtio</a> uses <a href="http://angularjs.org/">AngularJS</a>, which makes development of webapps a real pleasure.<br />
<br />
<b>The Jclouds plugin</b><br />
<br />
<a href="http://jclouds.incubator.apache.org/">Apache Jclouds</a> doesn't have yet a rest interface, nor it has JMX support. Well actually it has pluggable JMX support as of 1.6.1-incubating release. All you need to do is to create a <a href="http://jclouds.incubator.apache.org/">Apache Jclouds</a> Context using the ManagementLifecycle module:<br />
<script src="https://gist.github.com/5933765.js"></script>
<br />
<br />
<br />
<b>Note: </b>Users that are using the jclouds-karaf project will get that for free (no need to do anything at all).<br />
<br />
When the ManagementLifecycle module is used, it will create and register <a href="http://jclouds.incubator.apache.org/">Apache Jclouds</a> MBeans in JMX. If those MBeans are discovered by <a href="http://hawt.io/">Hawtio</a>, a new tab will be added to the <a href="http://hawt.io/">Hawtio</a> user interface:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOynN8xr4Lh3c3F7adoLEdFd6F8TbGkOnO98Qzo-ChaUPIsN23T4xFqjAaAQzLO2vaGQzOEYqOvXMGxIot6CyImeaYLTvH1SMQmp9xLJMQ6cJGiiHRKxYmkzqn3LzHfzz6tnBVY8AYA6g/s1301/Screen+Shot+2013-07-05+at+1.06.05+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="165" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOynN8xr4Lh3c3F7adoLEdFd6F8TbGkOnO98Qzo-ChaUPIsN23T4xFqjAaAQzLO2vaGQzOEYqOvXMGxIot6CyImeaYLTvH1SMQmp9xLJMQ6cJGiiHRKxYmkzqn3LzHfzz6tnBVY8AYA6g/s640/Screen+Shot+2013-07-05+at+1.06.05+PM.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The main Jclouds plugin Page</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7PmdSR1mCbfIVLmRl4HNUPSGbUcnK_IYY5WTG7EyD9yDDt89UrU-TkFhyphenhyphensK2RpxjyZzRfjHrU0jTBnAjqZy4bsELRGId6udMjisjTU_1Wmvm0M2tCXtk-HO5A19N8B0iEmIGSGmtOqVE/s1309/Screen+Shot+2013-07-05+at+1.06.58+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="386" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7PmdSR1mCbfIVLmRl4HNUPSGbUcnK_IYY5WTG7EyD9yDDt89UrU-TkFhyphenhyphensK2RpxjyZzRfjHrU0jTBnAjqZy4bsELRGId6udMjisjTU_1Wmvm0M2tCXtk-HO5A19N8B0iEmIGSGmtOqVE/s640/Screen+Shot+2013-07-05+at+1.06.58+PM.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><br /></td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<span style="font-size: x-small;">The EC2 Api details page</span></div>
<br />
From there the user is able to browse all installed <a href="http://jclouds.incubator.apache.org/">Apache Jclouds</a> providers, apis and services. If, for example, you have created a compute service context with the ManagementLifecycle module, you'll be able to see it under the "<b>Compute Service</b>" tab:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1XEPKxTTrQoC0RrOXz2QATugGOsopLlFSciY0tz51255a3CeWNg99pSipfjTSoyo4tbUHS-N5HSJEN2q7rLAf2yDJueZE_l9o6T5ZkGCv3hjoN_hHpz5aj3Ehs36z3IMZrlDadxCHMyg/s1304/Screen+Shot+2013-07-05+at+1.07.33+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="83" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1XEPKxTTrQoC0RrOXz2QATugGOsopLlFSciY0tz51255a3CeWNg99pSipfjTSoyo4tbUHS-N5HSJEN2q7rLAf2yDJueZE_l9o6T5ZkGCv3hjoN_hHpz5aj3Ehs36z3IMZrlDadxCHMyg/s640/Screen+Shot+2013-07-05+at+1.07.33+PM.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">List of Compute Services - Amazon AWS & A Stub service.</td></tr>
</tbody></table>
<br />
When you select one of the available services, a details bar appears, which helps you navigate to all the service-specific tabs. For a compute service these are:<br />
<br />
<u>Nodes</u><br />
<br />
A detailed list of all running nodes, with the ability to reboot, destroy, suspend & resume a node.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2cqhtxnvsTXKtnCQdzgwv0jkG_mUBXGyBEU099DbsyFwls0QPdvLNev3-AEbrPr3G43PWl7B4gc0wwkaa_YlXKhw_2yiLpSNXlPH0maNBxJoZjJHrv1zv-pvbCYoBGi08UAB_OS5Xn44/s1308/Screen+Shot+2013-07-05+at+1.17.01+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="128" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2cqhtxnvsTXKtnCQdzgwv0jkG_mUBXGyBEU099DbsyFwls0QPdvLNev3-AEbrPr3G43PWl7B4gc0wwkaa_YlXKhw_2yiLpSNXlPH0maNBxJoZjJHrv1zv-pvbCYoBGi08UAB_OS5Xn44/s640/Screen+Shot+2013-07-05+at+1.17.01+PM.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Nodes</td></tr>
</tbody></table>
<u>Images</u><br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; text-align: center;"><tbody>
<tr><td style="text-align: center;"><div style="text-align: left;">
A list of images, with an operating system filter.</div>
<div style="text-align: left;">
<br /></div>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEirFVA090Z5xzJ7uE6wtXXEtJ_YtACk4stNhVaj-j3au3OehyphenhyphennVx3PWC93k1q4JOBbzzpF6bTqBEKElVF1NTS5f0S07jX5vWF4rZk57nvcm7l9XqyPokL9NRMbPwr01xTGfrJVuoWCu1-k/s1307/Screen+Shot+2013-07-05+at+1.11.54+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEirFVA090Z5xzJ7uE6wtXXEtJ_YtACk4stNhVaj-j3au3OehyphenhyphennVx3PWC93k1q4JOBbzzpF6bTqBEKElVF1NTS5f0S07jX5vWF4rZk57nvcm7l9XqyPokL9NRMbPwr01xTGfrJVuoWCu1-k/s640/Screen+Shot+2013-07-05+at+1.11.54+PM.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Images</td></tr>
</tbody></table>
<br />
<br />
<div>
<br /></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<u>Locations</u></div>
<div class="separator" style="clear: both; text-align: left;">
A list of all assignable locations</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWkBaUna0inDy0uH856v3us-qZJCdQpYbnFyQc8T6P-aJhEBeBZvLxPLkE0WTM9t14-K_73k83ImgsJj6domAjqTSck5amVpli_OnMuluWyR34AOEX3NsHLJAaRcUuCxbxuWcE29AnX0o/s1314/Screen+Shot+2013-07-05+at+1.09.45+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="280" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWkBaUna0inDy0uH856v3us-qZJCdQpYbnFyQc8T6P-aJhEBeBZvLxPLkE0WTM9t14-K_73k83ImgsJj6domAjqTSck5amVpli_OnMuluWyR34AOEX3NsHLJAaRcUuCxbxuWcE29AnX0o/s640/Screen+Shot+2013-07-05+at+1.09.45+PM.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<br />
<div class="separator" style="clear: both; text-align: left;">
The plugin is not compute service specific. It also supports blobstores. For example, here is a view of one of my S3 buckets:</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9YVxZwO7UXGSI_l_DxV7KKM37RllZDxcjy-5MGrnlsCukmZ7g0nt_yP4tZrNAdqdczwN9oI55r-DWlGVeyXY6N2GzBgnqYaGrcpq6PYZDd_a5peCT5br0-nls_YOsRm37B8aqa09slxg/s1315/Screen+Shot+2013-07-05+at+1.18.47+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="148" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9YVxZwO7UXGSI_l_DxV7KKM37RllZDxcjy-5MGrnlsCukmZ7g0nt_yP4tZrNAdqdczwN9oI55r-DWlGVeyXY6N2GzBgnqYaGrcpq6PYZDd_a5peCT5br0-nls_YOsRm37B8aqa09slxg/s640/Screen+Shot+2013-07-05+at+1.18.47+PM.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">A blobstore browser</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: left;">
<span style="text-align: -webkit-auto;"><b>Mix & match</b></span></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
What I really love about <a href="http://hawt.io/" style="text-align: -webkit-auto;">Hawtio</a> is that it has a wide range of out-of-the-box <a href="http://hawt.io/plugins/index.html">plugins</a> that you can mix & match. Here's an example:</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
"<i>A couple of years ago I created an example of using Jclouds with <a href="http://camel.apache.org/">Apache Camel</a> to automatically send email notifications about running instances in the cloud.</i>"</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Hawtio also provides an <i><a href="http://camel.apache.org/">Apache Camel</a></i> plugin, so we can visually inspect and modify the example that sends the notifications. The great thing is that in this example we are using a <a href="http://hawt.io/plugins/logs/">Hawtio</a>-managed compute service:</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: -webkit-auto;">
The original example can be found at <a href="http://iocanel.blogspot.gr/2011/11/cloud-notifications-with-apache-camel.html">Cloud Notification with Apache Camel</a>.</div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhE01n9dItFYZgHt8f8u_sOW6SLL_K0vXHpb5QUWG7cimp20UlOcXzp0VH8eKjrW5RCKfDPByeEtt-0AeN1W739Jpdi0HZA46CITnV-obO-cpojlXf9qEBtXDsJKoPDQSDFw4hRtpMdugM/s1600/Screen+Shot+2013-07-05+at+5.12.42+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="412" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhE01n9dItFYZgHt8f8u_sOW6SLL_K0vXHpb5QUWG7cimp20UlOcXzp0VH8eKjrW5RCKfDPByeEtt-0AeN1W739Jpdi0HZA46CITnV-obO-cpojlXf9qEBtXDsJKoPDQSDFw4hRtpMdugM/s640/Screen+Shot+2013-07-05+at+5.12.42+PM.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Visual representation of a route that polls EC2 for running instances and sends email notifications</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: left;">
<b style="text-align: -webkit-auto;"><br /></b></div>
Another cool plugin that can be used along with the jclouds plugin is the <a href="http://hawt.io/plugins/logs/">"Logs plugin"</a>. The logs plugin lets you search, browse and filter your logs, and even see the source code associated with the log entries:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXBAwZl-oQO0rowqK4TQml0Wr8lLWtGt_gSwrfMVuA_Xo1AU94gfECWYM2w_74-QZD2OQR5Dyxwc3yhIkmwuBU4my35AISacVjkCi5yc0Poz_W79I_FArkBA8jlyuWmHooa01txR3kXXk/s928/Screen+Shot+2013-07-05+at+5.31.35+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="116" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXBAwZl-oQO0rowqK4TQml0Wr8lLWtGt_gSwrfMVuA_Xo1AU94gfECWYM2w_74-QZD2OQR5Dyxwc3yhIkmwuBU4my35AISacVjkCi5yc0Poz_W79I_FArkBA8jlyuWmHooa01txR3kXXk/s640/Screen+Shot+2013-07-05+at+5.31.35+PM.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Searching the logs for jclouds related errors</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSSumrfzt8eIWK2hYq8DcEXo9ZubXS4ucSHHTkVdZIb7EMWSokky0eefjU-kD2euxv4or6XW0gyRyHA8DRVeY6iprnF9X2Amn1CmGRS7MgVwKnaEHzmj4d-26B7J774K4L_Ms6skd050M/s926/Screen+Shot+2013-07-05+at+5.32.19+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="280" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSSumrfzt8eIWK2hYq8DcEXo9ZubXS4ucSHHTkVdZIb7EMWSokky0eefjU-kD2euxv4or6XW0gyRyHA8DRVeY6iprnF9X2Amn1CmGRS7MgVwKnaEHzmj4d-26B7J774K4L_Ms6skd050M/s640/Screen+Shot+2013-07-05+at+5.32.19+PM.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The code that generated the log entry</td></tr>
</tbody></table>
<b>Epilogue</b><br />
<br />
This is only a first draft of the jclouds plugin and there are more cool things to be added, like executing scripts, downloading blobs and a better way of creating new services (the last is already supported but could be much improved).<br />
<br />
If you want to see more of <a href="http://hawt.io/plugins/logs/">Hawtio</a> in action you can have a look at James Strachan's demonstration of a <a href="http://vimeo.com/68442425">Camel based iPaas</a> which is basically <a href="http://hawt.io/plugins/logs/">Hawtio</a> + <a href="http://fuse.fusesource.org/fabric/">Fuse Fabric</a> + <i style="text-align: left;"><a href="http://camel.apache.org/">Apache Camel</a></i>.<br />
<br />
<b><br /></b>Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com2tag:blogger.com,1999:blog-1786615818482917324.post-82400653346933066832013-05-11T03:23:00.001-07:002013-05-11T03:24:21.372-07:00Advanced Integration Testing using Fuse Fabric at Camel One 2013<div class="separator" style="clear: both; text-align: left;">
I've already posted <a href="http://iocanel.blogspot.gr/2012/01/advanced-integration-testing-with-pax.html">twice</a> in the blog about integration testing using <a href="https://ops4j1.jira.com/wiki/display/paxexam/Pax+Exam">Pax-Exam</a> and <a href="http://karaf.apache.org/">Karaf</a>. Surprisingly these two posts are among the most popular in this blog and I was considering writing a third part that would focus on <a href="http://fuse.fusesource.org/fabric/">Fuse Fabric</a>.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
The idea was to write about using <a href="http://fuse.fusesource.org/fabric/">Fuse Fabric</a> to create and manage distributed containers for your integration tests. </div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Finally, I decided to give a presentation about this instead.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
So if you are joining <a href="http://www.camelone.com/">CamelOne 2013</a>, you'll have the chance to attend my presentation and learn more about:</div>
<div class="separator" style="clear: both; text-align: left;">
</div>
<ul>
<li>Writing integration tests for OSGi applications.</li>
<li>Using <a href="http://fuse.fusesource.org/fabric/">Fuse Fabric</a> to manage & coordinate distributed tests.</li>
<li>Scaling your tests to a large number of remote containers.</li>
</ul>
<br />
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjy7Fjk1YFKj2zxrPMXONYRacOUIYA7aJQU3qCcRqAsIhmaslLO37kKKq3fd17h83GiHpk1hfJvetmIHGmw8EH1a7jz_PLxCoJp4eMM6nZ9bsJ36mX2zlx5BEsZz6JGon59Yc46EcdATio/s1600/ImSpeakingAtCamelOne.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="241" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjy7Fjk1YFKj2zxrPMXONYRacOUIYA7aJQU3qCcRqAsIhmaslLO37kKKq3fd17h83GiHpk1hfJvetmIHGmw8EH1a7jz_PLxCoJp4eMM6nZ9bsJ36mX2zlx5BEsZz6JGon59Yc46EcdATio/s400/ImSpeakingAtCamelOne.jpg" width="400" /></a></div>
<br />Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com0tag:blogger.com,1999:blog-1786615818482917324.post-32094639788337030202012-09-18T22:24:00.000-07:002012-09-18T22:27:02.668-07:00A command line interface for jclouds<b>Prologue</b><br />
<b><br /></b>
I've been using and contributing to jclouds for over a year now. So far I've used it extensively in many areas and especially in the <a href="https://github.com/fusesource/fuse">Fuse Ecosystem</a>. In all its awesomeness it was lacking one thing: a tool which you can use to manage any cloud provider that jclouds provides access to. Something like the EC2 command-line tool, but with the jclouds coolness. A common tool through which you would be able to manage EC2, Rackspace, OpenStack, CloudStack ... you name it.<br />
<br />
I am really glad that now there is such a tool and its first release is around the corner.<br />
<br />
So this post is an introduction to the new jclouds cli, which comes in two flavors:<br />
<br />
<ol>
<li><b>Interactive mode (shell)</b></li>
<li><b>Non interactive mode (cli)</b></li>
</ol>
<div>
<br /></div>
<br />
<b>A little bit of history</b><br />
<b><br /></b>
Being a <a href="http://karaf.apache.org/">Karaf</a> committer, one of the first things I did around jclouds was to work on its OSGi support. The second was to work on jclouds integration for <a href="http://karaf.apache.org/">Apache Karaf</a>. So I worked on a project that made it really easy to install jclouds on top of Karaf, added the first basic commands around blob stores, and the <a href="https://github.com/jclouds/jclouds-karaf">Jclouds Karaf</a> project started to take shape. At the same time a friend and colleague of mine, <a href="http://gnodet.blogspot.com/">Guillaume Nodet</a>, had started similar work, which he contributed to <a href="https://github.com/jclouds/jclouds-karaf">Jclouds Karaf</a>. This project now supports most of jclouds operations and provides rich completion support, which makes it really fast and easy to use.<br />
<br />
Of course, this integration project is mostly targeting people that are familiar with OSGi and <a href="http://karaf.apache.org/">Apache Karaf</a> and cannot be considered a general purpose tool, like the one I was dreaming about in the prologue.<br />
<br />
A couple of months ago, <a href="http://twitter.com/abayer">Andrew Bayer</a> started considering building a general purpose jclouds cli. Then it struck me: "<i>why don't we reuse the work that has been done on <a href="https://github.com/jclouds/jclouds-karaf">Jclouds Karaf</a> to build a general purpose cli?</i>"<br />
<br />
One of the great things about <a href="http://karaf.apache.org/">Apache Karaf</a> is that it is easily branded and, due to its modular foundation, you can pretty easily add/remove bits in order to create your own distribution. On top of that it allows you to discover and use commands outside OSGi.<br />
<br />
So it seemed like a great idea to create a tailor made Karaf distribution, with jclouds integration "<i>out of the box</i>", that would be usable by anyone without having to know anything about Karaf, both as an interactive shell and as a cli. And here it is: <a href="https://github.com/jclouds/jclouds-cli">Jclouds CLI</a>.<br />
<br />
<b>Getting started with Jclouds CLI</b><br />
<b><br /></b>
You can either build the cli from source or download a tarball. Once you extract it, you'll find a structure like this:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2ynVxUbQgYROzfQC-L0k8RIPqtLMHnSeHgIMi99yNS5qVmvo-daXI1PlNT0cYdUzZdb8Xi4fhwSjtBLxesa5VUH_WkdxeEVR7NQC5p2Och-DfuvgLgjD78YXtqZgdaW_JjpBkYsQaD0s/s1600/Screen+Shot+2012-07-14+at+7.54.36+%CE%BC.%CE%BC..png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="80" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2ynVxUbQgYROzfQC-L0k8RIPqtLMHnSeHgIMi99yNS5qVmvo-daXI1PlNT0cYdUzZdb8Xi4fhwSjtBLxesa5VUH_WkdxeEVR7NQC5p2Och-DfuvgLgjD78YXtqZgdaW_JjpBkYsQaD0s/s320/Screen+Shot+2012-07-14+at+7.54.36+%CE%BC.%CE%BC..png" width="320" /></a></div>
<br />
The bin folder contains two scripts:<br />
<br />
<ol>
<li><b>jclouds-cli</b>: Starts up the interactive shell.</li>
<li><b>jclouds: </b>Script through which you invoke jclouds operations.</li>
</ol>
<br />
The zip distribution provides the equivalent bat files for windows.<br />
<br />
Let's start with the jclouds script. The script takes a category and an action, followed by options and arguments. The general usage is:<br />
<br />
<b>./jclouds [category] [action] [options] [arguments]</b><br />
<b><br /></b>
<b>Category: </b>The type of command to use. For example: node, group, image, hardware etc.<br />
<b>Action: </b>The action to perform on the category. For example: list, create, destroy, run-script, info etc.<br />
<br />
All operations, whether they are compute service or blobstore operations, require a provider or api and valid credentials for that provider/api. All of these can be specified as options to the command. For example, to list all running nodes on Amazon EC2:<br />
<br />
<script src="https://gist.github.com/3744793.js"></script>
For apis you also need to specify the endpoint; for example, the same operation for <a href="http://www.cloudstack.org/">CloudStack</a> can be:<br />
<br />
<script src="https://gist.github.com/3744818.js"></script>
Of course, you might not want to specify the same options again and again. In this case you can just specify them as environment variables. The variable names are always in capital letters and prefixed with <b>JCLOUDS_COMPUTE_</b> or <b>JCLOUDS_BLOBSTORE_ </b>for compute service and blobstore operations respectively. So the --provider option would match <b>JCLOUDS_COMPUTE_PROVIDER</b> for compute service or <b>JCLOUDS_BLOBSTORE_PROVIDER</b> for blob stores.<br />
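As a quick sketch of this convention — the identity and credential values below are invented placeholders, not real credentials:

```shell
# Export the settings once, so each jclouds invocation can omit the options.
export JCLOUDS_COMPUTE_PROVIDER=aws-ec2
export JCLOUDS_COMPUTE_IDENTITY=my-access-key-id     # placeholder
export JCLOUDS_COMPUTE_CREDENTIAL=my-secret-key      # placeholder

# The naming rule: drop the leading dashes, upper-case the option name,
# and prefix it with JCLOUDS_COMPUTE_ (or JCLOUDS_BLOBSTORE_ for blob stores).
option=--provider
name="JCLOUDS_COMPUTE_$(echo "${option#--}" | tr '[:lower:]' '[:upper:]')"
echo "$name"   # JCLOUDS_COMPUTE_PROVIDER
```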
<br />
The picture below shows a sample usage of the cli inside an environment set up for accessing EC2. The commands create 3 nodes on EC2 and then destroy all of them.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9WGM9QDdIaq7BxDuGDV7jpEHa9LEe7EJqSAxVpmi0SWIBKVLJb7Ofltbwwj-xoh9Af-AuUwKaKAoMHbhQz4bOIRFkj7tlAhMH5_qIaCbZmkAl4Ua-SHmbUGZHfar1e1bBUhhVnJSLlRs/s1600/Screen+Shot+2012-09-06+at+10.59.06+%CF%80.%CE%BC..png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="201" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9WGM9QDdIaq7BxDuGDV7jpEHa9LEe7EJqSAxVpmi0SWIBKVLJb7Ofltbwwj-xoh9Af-AuUwKaKAoMHbhQz4bOIRFkj7tlAhMH5_qIaCbZmkAl4Ua-SHmbUGZHfar1e1bBUhhVnJSLlRs/s640/Screen+Shot+2012-09-06+at+10.59.06+%CF%80.%CE%BC..png" width="640" /></a></div>
<br />
<br />
The environment variables configured are:<br />
<b><br /></b>
<b>JCLOUDS_COMPUTE_PROVIDER </b>aws-ec2<br />
<b>JCLOUDS_COMPUTE_IDENTITY </b>????<br />
<b>JCLOUDS_COMPUTE_CREDENTIAL </b>???<br />
<br />
<br />
When using the jclouds script all providers supported by jclouds will be available by default. You can add custom providers and apis by placing the custom jars under the system folder <i>(preferably using a Maven-like directory structure).</i><br />
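For instance, a hypothetical custom provider jar with Maven coordinates com.example:custom-provider:1.0.0 (names invented for illustration) would be laid out like this:

```shell
# Group id segments become directories, followed by artifact id and version,
# mirroring a local Maven repository layout under the system folder.
mkdir -p system/com/example/custom-provider/1.0.0
# The jar itself would then be copied to:
#   system/com/example/custom-provider/1.0.0/custom-provider-1.0.0.jar
ls system/com/example/custom-provider
```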
<i><br /></i>
<b>Using the interactive shell</b><br />
<b><br /></b>
The second flavor of the jclouds cli is the interactive shell. The interactive shell works in a similar manner, but it also provides these additional features:<br />
<br />
<br />
<ul>
<li><b>Service Reusability</b></li>
<ul>
<li>Services are created once</li>
<li>Commands can reuse services resulting in faster execution times.</li>
</ul>
<li><b>Code completion</b></li>
<ul>
<li>Completion of commands</li>
<li>Completion of argument values and options</li>
</ul>
<li><b>Modularity</b></li>
<ul>
<li>Allows you to install just the things you need.</li>
</ul>
<li><b>Extensible</b></li>
<ul>
<li>You can add commands of your own.</li>
<li>You can add commands from other projects.</li>
<ul>
<li>Example: As of Whirr 0.8.0 you can install it into any Karaf-based environment, so you can add it to the cli too.</li>
</ul>
</ul>
</ul>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBKZGS49xrDEDz6m2ID6iwSddIGNMPE0iGE41Yf2pyLz9P_GfYPDRVwPmkZX9hv08PgyC9Tr01tR4F7NE0l5Ke7eWa9uMbe4s_dAbxfsPwXQ5SplVmlX1SiLSCmNX7YotV-pvAPfGdPC0/s1600/Screen+Shot+2012-09-06+at+11.07.03+%CF%80.%CE%BC..png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="348" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBKZGS49xrDEDz6m2ID6iwSddIGNMPE0iGE41Yf2pyLz9P_GfYPDRVwPmkZX9hv08PgyC9Tr01tR4F7NE0l5Ke7eWa9uMbe4s_dAbxfsPwXQ5SplVmlX1SiLSCmNX7YotV-pvAPfGdPC0/s640/Screen+Shot+2012-09-06+at+11.07.03+%CF%80.%CE%BC..png" width="640" /></a></div>
<div>
<br /></div>
<div>
In the example above we created a reusable service for EC2 and then performed a node list, which displayed the nodes that we created and destroyed in the previous example.</div>
<br />
<br />
<b>Using the interactive shell with multiple providers or apis</b><br />
<b><br /></b>
The interactive shell allows you to register compute services for multiple providers and apis, or even multiple services for the same provider or api using different configuration parameters, accounts etc.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5hWCzzBR3oJ6Mjk763sI04j9g_Gl60rbN7iyo0U3kvpBO_DxHDhVHTdjojo2Pfyx7GTbT7gtPg2JTouyCHsww2DsKy72trHE9PySSe-11znClqXex14KAkluYHxCIaKIMtlG6YGPnMjI/s1600/Screen+Shot+2012-09-18+at+7.34.14+%CE%BC.%CE%BC..png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="470" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5hWCzzBR3oJ6Mjk763sI04j9g_Gl60rbN7iyo0U3kvpBO_DxHDhVHTdjojo2Pfyx7GTbT7gtPg2JTouyCHsww2DsKy72trHE9PySSe-11znClqXex14KAkluYHxCIaKIMtlG6YGPnMjI/s640/Screen+Shot+2012-09-18+at+7.34.14+%CE%BC.%CE%BC..png" width="640" /></a></div>
<br />
<br />
<br />
The image above displays how you can create multiple services for the same provider with different configuration parameters. It also shows how to specify which service to use in each case. Note again that in this example the identity and provider were not passed as options but were provided as environment variables.<br />
<br />
<b>Modular nature of the interactive mode</b><br />
<b><br /></b>
As mentioned above, the interactive shell is also modular, allowing you to add / remove modules at runtime. A module can be support for a provider or api, or any other type of extension you may need.
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
To see the list of available providers and apis that can be used in interactive mode, you can use the <b>features:list</b> and <b>features:install</b> commands. In the example below we list the features and grep for "<b>openstack</b>", then install the jclouds openstack-nova api. Then we create a service for it and list the nodes in our OpenStack deployment.
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhL3keeR-CpXb0s7eQnxlpJ3T4dRQ7XfHXChmvdUNiqzD0mGzFNtIqYwumR4sxmP9lq-RuV_UEWH29UW06WPbNYlzlcTXbSswkFp7sr8MX6Rh7Xk7fYlqvgTWSEa0u3iTcX9yUJVdwprSk/s1600/Screen+Shot+2012-09-18+at+8.47.07+%CE%BC.%CE%BC..png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="316" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhL3keeR-CpXb0s7eQnxlpJ3T4dRQ7XfHXChmvdUNiqzD0mGzFNtIqYwumR4sxmP9lq-RuV_UEWH29UW06WPbNYlzlcTXbSswkFp7sr8MX6Rh7Xk7fYlqvgTWSEa0u3iTcX9yUJVdwprSk/s640/Screen+Shot+2012-09-18+at+8.47.07+%CE%BC.%CE%BC..png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
<b>Configuring the command output</b><br />
<b><br /></b>
Initially the command output was designed and formatted using the most common cloud providers as a guide. However, the output was not optimal for all providers <i>(different widths etc). </i>Moreover, different users needed different things to be displayed.<br />
<br />
To solve that problem the cli uses a table-like output for the commands, with auto-adjustable column sizes to best fit the output of the command. Also the output of the commands is fully configurable.<br />
<br />
Each table instance is fed the display data as a collection which represents the table rows. The column headers are read from a configuration file. The actual value for each cell is calculated using JSR-223 script expressions (by default it uses Groovy), which are applied for each row and column. Finally, the table supports sorting by column.<br />
<br />
A sample configuration for the hardware list command can be something like:
<br />
<script src="https://gist.github.com/3744770.js">
</script>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<br />
With this configuration the hardware list command will produce the following output:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_WTGZ0_J822BEVbVh04Bd66VXkNNa3bo5ryiFVFPzLQJJzWHLecx9ltbdyOnx9-B_BL_MKIdFmWDPxJ10YIvFfI1HF1sKXe8lJ208hKoSK61RooGaipUthgVyoN1lFU5SOOtTB50cJUA/s1600/Screen+Shot+2012-09-18+at+11.03.56+%CE%BC.%CE%BC..png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_WTGZ0_J822BEVbVh04Bd66VXkNNa3bo5ryiFVFPzLQJJzWHLecx9ltbdyOnx9-B_BL_MKIdFmWDPxJ10YIvFfI1HF1sKXe8lJ208hKoSK61RooGaipUthgVyoN1lFU5SOOtTB50cJUA/s320/Screen+Shot+2012-09-18+at+11.03.56+%CE%BC.%CE%BC..png" width="292" /></a></div>
<br />
<br />
We can modify the configuration above and add an additional column that will display the volumes assigned to the current hardware profile. In order to do so we need to have a brief idea of what the jclouds hardware object looks like:<br />
<br />
<script src="https://gist.github.com/3745438.js">
</script>
<br />
<br />
So in order to get the size and type of each volume, we could use the following expression on the hardware object: <b>hardware.volumes.collect{it.size + "GB " + it.type}</b>.<br />
<br />
<br />
<br />
The updated configuration would then look like:<br />
<script src="https://gist.github.com/3745466.js">
</script>
<br />
The new configuration would produce the following output on EC2:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZSivVKI73iObyaRiLyyyFuj1kYuS8zqZ_YbpL2D0oNt8rLTY2ZU3FMgYIv6rWcrVCI8kVniKFh_nOxQ51fSCl8zbBRjRDNmd9N7-ltbgTjDQfqH_yylFXH8w2SWzIo86gAxCt4zsIFiE/s1600/Screen+Shot+2012-09-18+at+9.39.57+%CE%BC.%CE%BC..png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="248" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZSivVKI73iObyaRiLyyyFuj1kYuS8zqZ_YbpL2D0oNt8rLTY2ZU3FMgYIv6rWcrVCI8kVniKFh_nOxQ51fSCl8zbBRjRDNmd9N7-ltbgTjDQfqH_yylFXH8w2SWzIo86gAxCt4zsIFiE/s640/Screen+Shot+2012-09-18+at+9.39.57+%CE%BC.%CE%BC..png" width="640" /></a></div>
<br />
You can find the project at github: <a href="https://github.com/jclouds/jclouds-cli">https://github.com/jclouds/jclouds-cli</a>. Or you can directly download the tarballs at: <a href="http://repo1.maven.org/maven2/org/jclouds/cli/jclouds-cli/1.5.0/">http://repo1.maven.org/maven2/org/jclouds/cli/jclouds-cli/1.5.0/</a>Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com0tag:blogger.com,1999:blog-1786615818482917324.post-20393333724981294162012-07-08T13:35:00.000-07:002012-09-13T23:44:39.043-07:00Apache Karaf meets Apache HBase<h2>
Introduction</h2>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">Apache HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable. If you are a regular reader, you most probably already know what Apache Karaf is, but for those who are not: Apache Karaf is an OSGi runtime that runs on top of any OSGi framework and provides you with a set of services, a powerful provisioning concept, an extensible shell & more.</span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">Since Apache HBase is not OSGi ready <i>(yet)</i>, people that are developing OSGi applications often have a hard time understanding how to use HBase inside OSGi.</span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">This post explains how you can build an OSGi application that uses HBase. Please note, that this post is not about running parts of HBase inside OSGi, but focuses on how to use the client api inside OSGi.</span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">As always I'll be focusing on <a href="http://karaf.apache.org/">Karaf</a> based containers, like <a href="http://servicemix.apache.org/">Apache ServiceMix</a>, Fuse ESB etc, but most of the things inside this post are generally applicable to all OSGi runtimes.</span></span></div>
<h2>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">HBase and OSGi</span></span></h2>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">Let's have a closer look at HBase and explain some things about its relation with OSGi.</span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><b>Bad news</b></span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><b><br /></b></span></span></div>
<div>
<ul>
<li><span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">HBase provides no OSGi metadata, which means that you either need to wrap HBase yourself or find a 3rd party bundle for HBase.</span></span></li>
<li><span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">HBase comes in as a single jar.</span></span></li>
<li><span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">Uses Hadoop configuration.</span></span></li>
</ul>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">The first point is pretty straightforward.</span></span></div>
</div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">The second point might not seem like bad news at first glance, but if you give it some thought you will realize that when everything is inside a single jar, things are not quite modular. For example the client api is inside the same jar with the avro & thrift interfaces, and even if you don't need them, they will still be there. So that jar contains stuff that may be totally useless for your use case.</span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">Please note that the single-jar statement does not refer to dependencies like Hadoop or ZooKeeper.</span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">The fact that HBase depends on the Hadoop configuration loading mechanism is also bad news, because some versions of Hadoop are a bit itchy when running inside OSGi.</span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<b style="font-family: verdana, arial, helvetica, sans-serif; font-size: 14px; line-height: 20px;">Good news</b></div>
<div>
<b style="font-family: verdana, arial, helvetica, sans-serif; font-size: 14px; line-height: 20px;"><br /></b></div>
<div>
<ul>
<li><span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">There are no class loading monsters inside HBase, so you won't be really bitten when you are trying to use the client api inside OSGi.</span></span></li>
</ul>
<div>
<b style="font-family: verdana, arial, helvetica, sans-serif; font-size: 14px; line-height: 20px;">The challenges</b></div>
</div>
<div>
<b style="font-family: verdana, arial, helvetica, sans-serif; font-size: 14px; line-height: 20px;"><br /></b></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">So there are two types of challenges: the first is to find or create a bundle for HBase with requirements that make sense for your use case. The second is to load the hbase client configuration inside OSGi.</span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<h2>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">Finding a bundle for HBase</span></span></h2>
</div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">As far as I know, there are bundles for HBase provided by the <a href="http://repo1.maven.org/maven2/org/apache/servicemix/bundles/org.apache.servicemix.bundles.hbase/">Apache ServiceMix Bundles</a>. However, the bundles that are currently provided have more requirements in terms of required packages than are actually needed <i>(see bad news, second point). </i>Providing a bundle with more sensible requirements is currently a work in progress, and hopefully will be released pretty soon.</span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">In this post I am going to make use of the </span><a href="http://team.ops4j.org/wiki/display/paxurl/Wrap+Protocol" style="font-size: 14px; line-height: 20px;">Pax Url Wrap Protocol</a><span style="font-size: 14px; line-height: 20px;">. The wrap protocol will create OSGi metadata on the fly for any jar. Moreover, all package imports will be marked as optional, so you won't have to deal with unnecessary requirements. This is something that can get you started, but it's not recommended for use in a production environment. So you can use it in a P.O.C., but when it's time to move to production, it might be a better idea to use a proper bundle.</span></span><br />
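For example, installing the plain HBase jar from the Karaf console with the wrap protocol could look like the line below <i>(the version shown is just a placeholder — substitute the HBase release you actually target)</i>:

```
karaf@root> osgi:install wrap:mvn:org.apache.hbase/hbase/0.90.4
```

The wrap: handler downloads the jar via the mvn URL and generates the missing OSGi manifest headers on the fly, with all package imports marked optional.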
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span>
<br />
<h2>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">Creating a Karaf feature descriptor for HBase</span></span></h2>
</div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">After experimenting a bit, I found that I could use HBase inside Karaf by installing the bundles listed in the feature descriptor below:</span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<br />
<script src="https://gist.github.com/3071871.js">
</script>
In fact this feature descriptor is almost identical to the one provided by the latest release of <a href="http://camel.apache.org/">Apache Camel</a>. One difference is the version of <a href="http://hadoop.apache.org/">Apache Hadoop</a> used. In this example I preferred to use a slightly older version of Apache Hadoop, which seems to behave a bit better inside OSGi.<br />
<br />
<br />
<h2>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">Creating HBase client configuration inside OSGi</span></span></h2>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">The things described in this section may vary depending on the version of the Hadoop jar you are using. I'll try to provide a general solution that covers all cases.</span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">Usually, when configuring the HBase client, you just need to keep an hbase-site.xml inside your classpath. Inside OSGi this is not always enough. Some versions of Hadoop will manage to pick up this file; others will not. In many cases HBase will complain that there is a version mismatch between the current version and the one found inside hbase-default.xml.</span></span><br />
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">A workaround is to set the </span></span><span style="background-color: white; font-size: 14px; line-height: 20px;"><span style="font-family: verdana, arial, helvetica, sans-serif;">hbase.defaults.for.version property to match your HBase version:</span></span><span style="background-color: white; font-family: verdana, arial, helvetica, sans-serif; font-size: 14px; line-height: 20px;"> </span></div>
<script src="https://gist.github.com/3078003.js">
</script>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">An approach that will save you in most cases is to set the HBase bundle class loader as the thread context class loader before creating the configuration object.</span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<span style="font-family: 'Bitstream Vera Sans Mono', Courier, monospace; font-size: 12px; line-height: 16px; text-align: left; white-space: pre;"><b>Thread.currentThread().setContextClassLoader(HBaseConfiguration.class.getClassLoader());</b></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">The reason I am proposing this is that HBase makes use of the thread context class loader in order to load resources <i>(hbase-default.xml and hbase-site.xml)</i>. Setting the TCCL will allow you to load the defaults and override them later.</span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">The snippet below shows how you can set the TCCL in order to load the defaults directly from the hbase bundle.</span></span></div>
<br />
<script src="https://gist.github.com/3072625.js">
</script>
<h2>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; font-weight: normal; line-height: 20px;">Note that when following this approach, you will not need to include hbase-site.xml inside your bundle; you will need to set the configuration programmatically. </span></span></h2>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">Also note that in some cases HBase internal classes will recreate the configuration, which might cause issues if HBase can't find the right class loader.</span></span></div>
<h2>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">Thoughts</span></span></h2>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">HBase is no different from almost any library that doesn't provide out-of-the-box support for OSGi. If you understand the basics of class loading, you can get it to work. Of course, understanding class loaders is something that will sooner or later be of use, whether you are using OSGi or not.</span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">In the next couple of weeks, I intend to take HBase for a ride on the back of the <a href="http://camel.apache.org/">camel</a>, using the brand new <a href="http://camel.apache.org/hbase.html">camel-hbase</a> component inside OSGi, so stay tuned.</span></span><br />
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span>
<br />
<h2>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;">Edit: <span style="font-weight: normal;">The original post has been edited, as it contained a snippet that I found is best avoided (sharing the HBase configuration as an OSGi service). </span></span></span></h2>
</div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<span style="font-family: verdana, arial, helvetica, sans-serif;"><span style="font-size: 14px; line-height: 20px;"><br /></span></span></div>
<div>
<b style="font-family: verdana, arial, helvetica, sans-serif; font-size: 14px; line-height: 20px;"><br /></b></div>
Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com0tag:blogger.com,1999:blog-1786615818482917324.post-70860659396653274782012-06-28T02:54:00.002-07:002012-06-28T03:05:48.477-07:00Red Hat acquires FuseSource: Shaping the FutureOn 27 June 2012, <a href="http://www.redhat.com/">Red Hat</a> announced the acquisition of <a href="http://fusesource.com/">FuseSource</a>. This move will definitely shape the future of OSS integration.<br />
<br />
Of course, this is not "<i>yet another blog post</i>" about spreading the news, it's more like a "<i>personal thoughts about the acquisition</i>".<br />
<h2>
<b>The past</b></h2>
When I started studying computer science back in the 90s, I needed a brand new PC to start working on. Before I even got that new PC, I made sure I had my hands on the operating system of my choice:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEir9ey0W2-6vSukIhKHd2R2r37lqsVGZezai_6Kk502wgTdzMsKcHuJ1JsrCcq_SBbd8EaX6EdfPfyGR1dWhQLnZ5gTg57Lf5EpoSfxhx8CvEXhqfDCJQPsaHlQ1Cq7qoq0eR591bgcXjo/s1600/Redhat_5.2_box.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEir9ey0W2-6vSukIhKHd2R2r37lqsVGZezai_6Kk502wgTdzMsKcHuJ1JsrCcq_SBbd8EaX6EdfPfyGR1dWhQLnZ5gTg57Lf5EpoSfxhx8CvEXhqfDCJQPsaHlQ1Cq7qoq0eR591bgcXjo/s320/Redhat_5.2_box.jpg" width="267" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">My first Linux </td></tr>
</tbody></table>
<br />
The last thing I could imagine back then was that the company with the cool-looking hat would acquire the company I am working at. And how could I? I would at least need a time machine to do so.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNqXk5vZ-gBzQUQRsq5otexPG_neJYAvg6MWCbku9YRlPXBEjE9wNJEgqVS1srvz8inpDid1lbG_28YxKPv0lNSEJVG4r1BodgJOYaSIiORsCso-5Fmu8uUVijkGex-J6jc367XKF4JwY/s1600/DMC-12.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><span style="color: black;"><img border="0" height="186" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNqXk5vZ-gBzQUQRsq5otexPG_neJYAvg6MWCbku9YRlPXBEjE9wNJEgqVS1srvz8inpDid1lbG_28YxKPv0lNSEJVG4r1BodgJOYaSIiORsCso-5Fmu8uUVijkGex-J6jc367XKF4JwY/s320/DMC-12.jpg" width="320" /></span></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="background-color: white;"><span style="font-size: small;">A time machine from Back to the Future</span></span></td></tr>
</tbody></table>
<br />
<h2>
<b>The present</b></h2>
Ironically enough, the time machine in the picture above <i>(from the famous Back to the Future trilogy)</i> was set to travel to 27 June 2012 <i>(the day the acquisition was announced)</i>. Which is just awesome, because it allows me to use it in this post and add a funny tone.<br />
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9bYXzTIW1Mi2nbek8n2nxgRcaMS-eiqpdY2qdESCTP97-deprZG5qj-1Ws6oYKVIJnRZURwKyS9azoHLz34PbVSK-tAqiKCbwStjxOAGULGvp_-whUiWxr21B3I3lEmwlDQS0ao034so/s1600/Awag5koCEAEwdsw.jpg-large" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9bYXzTIW1Mi2nbek8n2nxgRcaMS-eiqpdY2qdESCTP97-deprZG5qj-1Ws6oYKVIJnRZURwKyS9azoHLz34PbVSK-tAqiKCbwStjxOAGULGvp_-whUiWxr21B3I3lEmwlDQS0ao034so/s320/Awag5koCEAEwdsw.jpg-large" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">"Back to the Future" control panel</td></tr>
</tbody></table>
So here we are today. The acquisition is announced, and <a href="http://fusesource.com/">FuseSource</a> is joining <a href="http://www.redhat.com/">Red Hat</a>'s Middleware division.<br />
<h2>
<b>The future</b></h2>
<div>
So what happens next? Well, when you bring together teams of indisputable talent, you open up so many possibilities that the future becomes really hard to predict.</div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWA2ldEP8AqOYdviRL1tzA7vqLekfJR5uo91PlMWRxMXwnR3bCyX1wEeErNh3N84XNr0V-B77mMxgUoz6WaRALEO4hxfLD9WNLoC0HKZDdzR5qAzSuKy54GX4mK-nImzf1RpN-LQ9s6N0/s1600/EmpireYoda1.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="133" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWA2ldEP8AqOYdviRL1tzA7vqLekfJR5uo91PlMWRxMXwnR3bCyX1wEeErNh3N84XNr0V-B77mMxgUoz6WaRALEO4hxfLD9WNLoC0HKZDdzR5qAzSuKy54GX4mK-nImzf1RpN-LQ9s6N0/s320/EmpireYoda1.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Yoda: "Impossible to see, the future is"</td></tr>
</tbody></table>
<br />
<div>
One thing is for sure: the future will definitely be exciting, and this move will shape the future of OSS integration.</div>
<div>
<br /></div>
<div>
<h2>
<b>Thoughts</b></h2>
</div>
<div>
I am really happy that <a href="http://fusesource.com/">FuseSource</a> has found a new home. I am even more excited that <a href="http://fusesource.com/">FuseSource</a>'s new home is an open source company. In fact, the <a href="http://fusesource.com/">FuseSource</a> development and business model is the one introduced by <a href="http://www.redhat.com/">Red Hat</a>, so this is a sign that both companies have a common understanding of how to do open source.</div>
<div>
<br /></div>
<div>
<a href="http://fusesource.com/">FuseSource</a> provided me with the best job I've ever had, mostly due to the fact that I had the luck to work with highly skilled people. The possibility of working in a wider environment built on top of the same principles <i>(high skill & extra coolness) </i>can only be thrilling.</div>
<div>
<b><br /></b></div>
<div>
<b><br /></b></div>
<div>
<br />
<br /></div>Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com8tag:blogger.com,1999:blog-1786615818482917324.post-37369077273866709422012-06-13T18:03:00.003-07:002012-06-13T18:03:48.724-07:00OSGification: The good, the bad & the purist<b>Prologue</b><br />
If the title doesn't make it obvious, this post is all about <b><i>OSGification</i></b>. It's an attempt to present practices and sort them into the following categories:<br />
<ul>
<li>Good practices</li>
<li>Bad practices</li>
<li>Pure practices</li>
</ul>
<div>
Some consider pure solutions to be the best. I fully agree that they are, when building a project from scratch. When migrating existing projects to OSGi, you sometimes need to make sacrifices and follow a not-so-pure path, especially when we are talking about libraries or frameworks that can also live outside OSGi.</div>
<div>
<br /></div>
<div>
<b>Purpose</b></div>
<div>
The main reason I am writing this is that I often encounter people who consider "<i>pure practices</i>" a panacea and will not adopt a less pure solution, even if this means not adopting a solution at all. So this is an attempt to present the pros and cons of each approach, so that you can draw your own conclusions.<br />
<br />
This post assumes that you have a basic understanding of OSGi.<br />
<br />
<b>How do I make a project OSGi compliant?</b><br />
In this section I will give a really brief overview of the OSGification process. Keep in mind that it's "really brief".<br />
<br />
To make your project OSGi compliant, you need to take the following steps:<br />
<ol>
<li><b><i>Provide a proper MANIFEST with proper package imports/exports etc.</i></b></li>
<li><b><i>Resolve class loading issues.</i></b></li>
<li><b><i>Make sure that all runtime dependencies are OSGi compliant. </i></b></li>
</ol>
</div>
<div>
In order to provide a correct MANIFEST for your bundle, you need to identify your bundle's runtime requirements. Gather all the packages that your bundle references at runtime and add them to the Import-Package section of your MANIFEST. For each package, one or more versions might satisfy the needs of your bundle, so you can use version ranges. For example:</div>
<div>
<br /></div>
<div>
<b>Import-Package</b>: org.slf4j;<i>version</i>="[<span style="color: #660000;">1.4</span>,<span style="color: #660000;">2</span>)"</div>
<div>
<br /></div>
<div>
Tools that can aid you in this process are <a href="http://bnd/">bnd</a>, the <a href="http://felix.apache.org/site/apache-felix-maven-bundle-plugin-bnd.html">maven-bundle-plugin</a> etc. Those tools will do a great job of identifying those packages for you. But is this enough? Well, not always. Sometimes a package can be used without being referenced directly in the source <i>(by using class loaders etc.)</i>. This is something that you have to deal with on your own <i>(we will get to that later)</i>. </div>
<div>
<br /></div>
<div>
Once you are through creating the imports, you also need to specify the exports of your bundle. This is easy: all you need to do is specify the packages of your bundle that you want to be accessed/imported by other bundles. Specifying a version for those packages is also important, because this will allow you to make use of the version-oriented OSGi features <i>(e.g. multiple versions, version ranges etc.)</i>. </div>
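<br />
Putting the pieces together, the relevant headers for a hypothetical bundle could look like the sketch below <i>(all bundle names, package names and versions are made up for illustration; note that version ranges must be quoted, and continuation lines in a MANIFEST start with a single space)</i>:<br />
<br />

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.mylib
Bundle-Version: 1.0.0
Import-Package: org.slf4j;version="[1.4,2)",
 javax.xml.parsers
Export-Package: com.example.mylib.api;version="1.0.0"
```
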
<div>
<br /></div>
<div>
This will be enough for installing your bundle inside an OSGi container, but it will not always guarantee the runtime behavior of your bundle, as there might be class loading issues you need to resolve.</div>
<div>
<br /></div>
<div>
<u>Common Class Loading issues</u></div>
<div>
<u><br />
</u></div>
<div>
<b>Class.forName() [1] </b>This generally needs to be avoided inside OSGi. In most cases it will fail, resulting in a ClassNotFoundException. You can read more about it in Neil Bartlett's blog about <a href="http://njbartlett.name/2010/08/30/osgi-readiness-loading-classes.html">OSGi readiness</a>. You can work around this problem by specifying the class loader that can load the class <i>(replace it with classLoader.loadClass())</i>. </div>
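<br />
Here is a minimal, plain-Java sketch of that workaround; inside OSGi the loader you pick would typically be your bundle's class loader <i>(the class name used here is just a stand-in for whatever you need to load)</i>:<br />
<br />

```java
public class LoadWithExplicitLoader {
    public static void main(String[] args) throws Exception {
        // Class.forName("java.util.ArrayList") would use the caller's class loader,
        // which inside OSGi often cannot see the target class. Instead, delegate
        // to a class loader you choose explicitly (e.g. the bundle class loader):
        ClassLoader loader = LoadWithExplicitLoader.class.getClassLoader();
        Class<?> type = loader.loadClass("java.util.ArrayList");
        System.out.println(type.getName());
    }
}
```
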
<div>
<br /></div>
<div>
The problem is knowing which class loader to use. If your bundle somehow imports <i>(there are lots of ways to achieve this)</i> the package of the target class, then you can use the bundle class loader. If not, things get a bit more complicated. </div>
<div>
<b><br />
</b></div>
<div>
<b>Allowing your classes to be loaded from other bundles [2]</b></div>
<div>
The same problem that your bundle has loading classes, other bundles will have loading your classes. Unfortunately, there is no global solution for this, and each case can be treated differently. It depends on the bundle, the framework and the sacrifices you are willing to make.</div>
<div>
<b><br />
</b></div>
<div>
<b>Singletons & statics in OSGi [3] </b>A lot of people do not actually realize that the singleton pattern guarantees a single instance per ClassLoader. In OSGi this means that each bundle will get its own instance of the singleton. There is nothing wrong with the pattern; it has to do with the way Java handles static variables <i>(a single copy per ClassLoader)</i>. So when using statics, make sure they don't directly cross bundle boundaries.<br />
<br />
<b>Java Service Loader [4] </b>Java allows you to load objects that are defined under META-INF/services/yet.another.interface. This works great with flat class loaders, but not so well when your class loaders are isolated. </div>
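<br />
The "one copy of a static per ClassLoader" behavior described above can be demonstrated in plain Java by loading the same class a second time through a non-delegating class loader <i>(all class and field names below are made up for the demo)</i>:<br />
<br />

```java
import java.io.InputStream;

public class PerLoaderStatics {

    // A class with static state: one copy of "value" exists per class loader.
    public static class Holder {
        public static int value = 0;
    }

    // A loader that defines its own copy of Holder instead of delegating to its parent.
    static class IsolatingLoader extends ClassLoader {
        IsolatingLoader(ClassLoader parent) { super(parent); }

        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            if (name.equals(Holder.class.getName())) {
                String path = name.replace('.', '/') + ".class";
                try (InputStream in = getParent().getResourceAsStream(path)) {
                    byte[] bytes = in.readAllBytes();
                    return defineClass(name, bytes, 0, bytes.length);
                } catch (Exception e) {
                    throw new ClassNotFoundException(name, e);
                }
            }
            return super.loadClass(name, resolve);
        }
    }

    public static void main(String[] args) throws Exception {
        Holder.value = 42; // mutate the static in "our" class loader

        Class<?> other = new IsolatingLoader(PerLoaderStatics.class.getClassLoader())
                .loadClass(Holder.class.getName());

        // The freshly loaded copy has its own static, still at its initial value.
        System.out.println(Holder.value + " vs " + other.getField("value").getInt(null));
    }
}
```

In OSGi the two loaders would simply be two bundle class loaders, which is why a "singleton" exported to several bundles is not as singular as it looks.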
<div>
<br /></div>
<div>
<b>Bad approaches</b><br />
This section discusses approaches that are generally considered bad. I'd say don't be afraid to use one of them just because they are generally considered bad, as long as you know the side effects and are ready to live with them. Maybe the term "last resort" would be more appropriate to describe them.<br />
<b><br /></b><br />
<u>Creating Uber Jars</u><br />
<u><br /></u><br />
Sometimes, in order to avoid the overhead of bundling all their runtime dependencies and also to avoid class loading issues between their bundles and their dependencies, people bundle everything together in a single bundle. This results in having everything loaded from a single class loader, which eliminates challenges [1], [2] & [3] mentioned above.<br />
<br />
This comes at the cost that you will not be able to add, remove or update a part of your application, since everything is part of the same artifact. I would avoid this approach, but it's still better than nothing.<br />
<br />
<u>Fragments & Dynamic Imports</u><br />
<u><br /></u><br />
A fragment can attach itself to an existing bundle. Once the fragment gets attached, both bundles can share classes and resources. This sounds cool, but there are two things you might want to consider. The first is that a fragment can be attached to one and only one bundle <i>(it can have a single host)</i>, and that may be limiting for your needs. The second problem is that in order to attach a fragment to a "host" bundle, you will need to refresh the host bundle. That will cause a chain reaction, refreshing all bundles that depend on the host. The refresh will restart the activator of each bundle being refreshed, and that is not always nice.<br />
<br />
Dynamic Imports is an approach that is used for dealing with class loading issues [1]; it allows you to specify imports with wildcards. This usually serves the need of loading classes from packages that are not known in advance. However, it can have lots of side effects, such as unwanted wirings between bundles that can affect the process of adding, removing or updating bundles.<br />
<br />
I would use fragments if I had no other means of solving my problem, but I would avoid dynamic imports altogether.<br />
<br />
<b>Good approaches</b><br />
This section discusses approaches that are generally applied but are not really pure. This means that even though they work without serious side effects, they are not the best thing to do. However, in many cases they are a realistic approach that will get you to the "OSGi ready" state.<br />
<br />
<br />
<u>Thread Context ClassLoader</u><br />
<u><br /></u><br />
I tried to explain above why Class.forName will probably not work inside OSGi. But there are a lot of libraries out there that heavily rely on it. On top of that, there is a good chance that you are using it too inside your application. A potential solution is to "fall back" to the thread context class loader <i>(a.k.a. TCCL)</i>.<i> </i>This approach is based on the fact that a library may not know how to load a class by name, but the caller of the library might. So the caller may set the thread context class loader, and the library may use it to load classes.<br />
<br />
Imagine a library that deserializes data into Java objects. That library will try to load the class, most probably using Class.forName, and will fail. If you modified the library to "fall back" to the TCCL when it fails to load the class, the code that uses the library could set the TCCL just before calling the library and then restore it to its original value after the invocation. Of course, this assumes that the class loader you are going to use as the TCCL is able to load the class. In many cases that is true for a bundle that uses a library, and this is why it usually works.<br />
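<br />
A minimal sketch of the pattern follows, with a stand-in "library" method that falls back to the TCCL, and a caller that sets and restores it around the invocation <i>(all names are hypothetical; a real library's fallback would look similar but live inside that library)</i>:<br />
<br />

```java
import java.util.concurrent.Callable;

public class TcclFallbackDemo {

    // Stand-in for a library method: try Class.forName first,
    // then fall back to the thread context class loader (TCCL).
    static Class<?> loadViaLibrary(String name) throws ClassNotFoundException {
        try {
            return Class.forName(name);
        } catch (ClassNotFoundException e) {
            ClassLoader tccl = Thread.currentThread().getContextClassLoader();
            if (tccl == null) {
                throw e;
            }
            return tccl.loadClass(name);
        }
    }

    // Caller side: set the TCCL before the call, always restore it afterwards.
    static <T> T callWithTccl(ClassLoader cl, Callable<T> work) throws Exception {
        Thread t = Thread.currentThread();
        ClassLoader original = t.getContextClassLoader();
        t.setContextClassLoader(cl);
        try {
            return work.call();
        } finally {
            t.setContextClassLoader(original);
        }
    }

    public static void main(String[] args) throws Exception {
        // In OSGi the caller would pass its own bundle class loader here.
        Class<?> c = callWithTccl(TcclFallbackDemo.class.getClassLoader(),
                () -> loadViaLibrary("java.util.ArrayList"));
        System.out.println(c.getSimpleName());
    }
}
```

The try/finally restore is the important part: forgetting to reset the TCCL leaks your class loader into unrelated code running later on the same thread.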
<br />
Some real world examples:<br />
<br />
<ul>
<li><a href="http://www.hazelcast.com/">Hazelcast</a>.</li>
<li><a href="http://camel.apache.org/">Apache Camel</a>.</li>
<li><a href="http://www.jclouds.org/">Jclouds</a></li>
</ul>
<br />
This approach assumes that you will be able to set the TCCL right before the invocation and that the caller bundle will be able to load the target class, so it is not guaranteed to work in all cases. In most cases it does, yet there still might be cases where it will fail.<br />
<br />
This is also the reason why many consider this approach "<i>a bad practice</i>". In practice, I see more and more libraries and frameworks using it, and it seems to work with no side effects. I think the key here is to know when to use it and when to go with a more pure approach.<br />
<br />
<br />
<b>Pure approaches</b><br />
<b><br /></b><br />
<u>Object Factories and Resource Loaders</u><br />
<u><br /></u><br />
With this approach you avoid direct loading of the class or resource and instead delegate to a factory or loader. For an application that is intended to run both in and out of OSGi, you can have a default implementation that assumes flat class loaders, and inside OSGi an implementation that makes use of OSGi services in order to load classes, create objects or load resources.<br />
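<br />
A sketch of such an abstraction, with a default implementation for flat class paths; inside OSGi you would plug in an alternative implementation that delegates to bundles or OSGi services <i>(the interface and class names are made up for illustration)</i>:<br />
<br />

```java
import java.io.InputStream;

public class ResourceLoaderDemo {

    // The abstraction: callers never touch class loaders directly.
    interface ResourceLoader {
        InputStream load(String path);
    }

    // Default implementation: assumes a flat (non-OSGi) class path.
    // An OSGi variant would implement the same interface on top of bundles/services.
    static class ClasspathResourceLoader implements ResourceLoader {
        public InputStream load(String path) {
            return ClasspathResourceLoader.class.getClassLoader().getResourceAsStream(path);
        }
    }

    public static void main(String[] args) {
        ResourceLoader loader = new ClasspathResourceLoader();
        // .class resources are always visible to the class loader,
        // so this demonstrates a successful lookup through the abstraction.
        System.out.println(loader.load("java/lang/Object.class") != null);
    }
}
```
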
<br />
<u>Passing the Class Loader</u><br />
<u><br /></u><br />
Structure your API in such a way that whenever it comes to loading classes, the class loader can be passed in. Although this has the fewest possible side effects, it's not always feasible.<br />
<br />
<u>Use a BundleListener as a complement to the ServiceLoader</u><br />
<br />
As I already mentioned above, the Java ServiceLoader will not work that well inside OSGi. An approach that you can follow to work around this problem is to use a bundle listener that listens for bundle events and, for each bundle that gets installed, looks for META-INF/services/yet.another.interface. It can then use the bundle class loader to load and instantiate the implementation of the service. Finally, it can either register it in the OSGi service registry or in a local registry from which the bundle can look the service up.<br />
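<br />
The descriptor-scanning core of this idea can be sketched in plain Java. In OSGi the listener would perform this lookup once per installed bundle, using the bundle's entries and the bundle class loader instead of the plain class loader used below <i>(method and interface names here are illustrative, not an existing API)</i>:<br />
<br />

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;

public class ServicesScanner {

    // Reads the implementation class names listed under
    // META-INF/services/<interface> visible to the given class loader.
    // A bundle listener would do the same lookup against each installed bundle,
    // then instantiate the classes with that bundle's class loader.
    static List<String> implementationsOf(ClassLoader loader, String interfaceName) throws Exception {
        List<String> names = new ArrayList<>();
        Enumeration<URL> urls = loader.getResources("META-INF/services/" + interfaceName);
        while (urls.hasMoreElements()) {
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(urls.nextElement().openStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = r.readLine()) != null) {
                    line = line.trim();
                    if (!line.isEmpty() && !line.startsWith("#")) {
                        names.add(line);
                    }
                }
            }
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical interface with no descriptor on the class path,
        // so the scan completes and finds nothing.
        List<String> impls = implementationsOf(ServicesScanner.class.getClassLoader(),
                "com.example.spi.Codec");
        System.out.println(impls.isEmpty());
    }
}
```
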
<br />
Please note that I am not sure if this is a commonly used practice <i>(I haven't seen it documented)</i>, but it worked quite well for me in the past, without side effects, so I decided to add it to this section. Feel free to drop a comment if you think otherwise.<br />
<br />
Also note that there is <a href="http://aries.apache.org/modules/spi-fly.html">Apache Aries - SPI Fly</a>, which intends to provide a global solution to bridging the Java service loader with the OSGi service registry. Finally, I've read some things about OSGi R5 and some work related to the Java service loader, but I don't know more than that yet.<br />
<br />
<b>Final Thoughts</b><br />
<b><br /></b><br />
I'll repeat myself by saying that the initial motivation for this blog post was the fact that every now and then I encounter people who are "<i>pure or nothing</i>". I think it's better to go with a not-so-pure approach than to do nothing. Especially for existing projects, the road to OSGification can be crossed in many steps, not a single one.<br />
<br />
For the last year I've been spending time on <a href="http://www.jclouds.org/">Jclouds</a>, working on the OSGi support. The initial approach was to just get it running in OSGi. In every release of jclouds improvements are applied, and with the help and feedback of the community, some questionable approaches have been replaced with more solid ones. I feel that this attitude should be an example for projects that consider providing OSGi support. "<b>No need to go for all or nothing. Step by step is also an option!</b>"<br />
<u><br /></u><br />
<u><br /></u></div>
<div>
<b><br />
</b></div>
<div>
<b><br />
</b></div>
<div>
<b><br />
</b></div>
<div>
<b><br />
</b></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<br /></div>Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com2tag:blogger.com,1999:blog-1786615818482917324.post-88201992446325706892012-03-18T09:01:00.001-07:002012-03-18T09:04:11.081-07:00Fuse Fabric and Camel<b>Prologue</b><br />
<b><br /></b><br />
I have already blogged about <a href="http://fusesource.com/apache-camel-conference-2012/">CamelOne</a>, which is going to take place on May 15-16 in Boston this year. I am really excited about it, as a lot of new things are going to be talked about there.<br />
<br />
So I'd like to share my excitement with you and give you a small preview of "Using <a href="http://fabric.fusesource.org/">Fuse Fabric</a> with <a href="http://camel.apache.org/">Apache Camel</a>".<br />
<br />
<b>Fuse Fabric</b><br />
<br />
<a href="http://fabric.fusesource.org/">Fuse Fabric</a> is a distributed configuration, management and provisioning system for <a href="http://karaf.apache.org/">Apache Karaf</a> based containers such as <a href="http://servicemix.apache.org/">Apache ServiceMix</a> and <a href="http://fusesource.com/products/enterprise-servicemix/">Fuse ESB</a>.<b><br /></b><br />
<br />
Fabric provides a distributed configuration registry and also tools for:<br />
<br />
<ul>
<li>Installing and managing containers to your private or public cloud.</li>
<li>Deployment agent for configuring and provisioning distributed containers.</li>
<li>Discovery of Camel endpoints & message brokers.</li>
<li>A lot more ... </li>
</ul>
<div>
<br /></div>
<div>
<b>Fuse Fabric and Camel preview</b></div>
<div>
<b><br /></b></div>
<div>
What I am going to show you, is a video that demonstrates how fabric makes it easy to:</div>
<div>
<ul>
<li>Use a single host for installing containers to your local network.</li>
<li>Deploy and configure applications to distributed containers.</li>
<li>Discover & use message brokers in your camel routes.</li>
</ul>
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/VgaBVPgAaa0?feature=player_embedded' frameborder='0'></iframe></div>
<div>
<br />
I hope you enjoyed it! See you at <a href="http://fusesource.com/apache-camel-conference-2012/">CamelOne</a>!</div>
</div>Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com0tag:blogger.com,1999:blog-1786615818482917324.post-54898313467617962712012-03-09T01:35:00.000-08:002012-03-09T01:37:05.655-08:00How to deal with common problems in pax-exam-karaf<b>Prologue</b><br />
Back in January I made a post about <a href="http://iocanel.blogspot.com/2012/01/advanced-integration-testing-with-pax.html">advanced integration testing with pax-exam-karaf</a>. That post was pretty successful, as I got tons of questions from people who started using it.<br />
<br />
In this post I am going to write down these questions and provide some answers that I hope that will help you have a more smooth experience with integration testing in <a href="http://karaf.apache.org/">Karaf</a>.<br />
<br />
<b>Where should I place my integration test module?</b><br />
Quite often people want to test a single bundle. In this case having the integration tests hosted inside the bundle itself seems a reasonable choice.<br />
<br />
Unfortunately, it is not. <a href="http://team.ops4j.org/wiki/display/paxexam/Pax+Exam">Pax Exam</a> will start a new container and install your tests as a proper OSGi bundle. That means it expects to find the bundle; however, the test phase runs before the install phase in Maven. So when you run the tests, the bundle you are testing will not be installed yet. To avoid this issue, you had better host all your integration tests in a separate module.<br />
<br />
<b>The bundle context is not getting injected into my test?</b><br />
For injecting the bundle context into the test, <a href="https://github.com/openengsb/labs-paxexam-karaf">Pax Exam Karaf</a> makes use of the <i><b>javax.inject.Inject</b></i> annotation. On your classpath there will also be an <b><i>org.ops4j.pax.exam.Inject</i></b> annotation. Make sure that you use <b style="font-style: italic;">javax.inject.Inject </b>to inject the bundle context. It's very easy to get confused, so be careful.<br />
<br />
<b>Why are my system properties not visible from within the test?</b><br />
Quite often people customize the behavior of their tests using system properties. It's really common to configure the maven-surefire-plugin to expose Maven properties as system properties <i>(e.g. passing credentials to a test, allocating a free port for the test & more)</i>.<br />
<br />
That's something really useful, but you always have to remember one small detail: "The test is bootstrapped by one JVM, but runs in another". That means that specifying system properties in the surefire plugin configuration will not automagically set these properties on the <a href="http://karaf.apache.org/">Karaf</a> container that will be used as the host of your tests.<br />
<br />
I usually make sure to pass the desired system properties to the target <a href="http://karaf.apache.org/">Karaf</a> container by adding them to the etc/system.properties file of the container:<br />
<script src="https://gist.github.com/2005723.js">
</script>
The above snippet also shows how you can configure <a href="http://karaf.apache.org/">Karaf</a>'s config.properties; in this example, additional execution environments are added to the test configuration.
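The mechanism itself is plain java.util.Properties: Karaf reads etc/system.properties at startup and exposes the entries as ordinary system properties. A stdlib-only sketch of that round trip (keys, values and paths are illustrative, not real container configuration):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class SystemPropsDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for the container's etc/system.properties file.
        Path etc = Files.createTempDirectory("karaf-etc");
        Path file = etc.resolve("system.properties");

        // Entries the test setup writes before the container boots.
        Properties props = new Properties();
        props.setProperty("my.test.port", "18181"); // illustrative key
        props.setProperty("my.test.user", "admin"); // illustrative key
        try (OutputStream out = Files.newOutputStream(file)) {
            props.store(out, "properties passed to the container");
        }

        // On startup the container loads the file and exposes the
        // entries as system properties; we mimic that step here.
        Properties loaded = new Properties();
        try (InputStream in = Files.newInputStream(file)) {
            loaded.load(in);
        }
        loaded.forEach((k, v) -> System.setProperty((String) k, (String) v));

        // The test JVM inside the container now sees the property.
        System.out.println(System.getProperty("my.test.port"));
    }
}
```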
<br />
<br />
<b>How can I have my test extend a class from another module?</b><br />
You should never forget that under the hood <a href="http://team.ops4j.org/wiki/display/paxexam/Pax+Exam">Pax Exam</a> will create a bundle, called the probe, that contains all your tests. That means that if your tests extend a class that is not present in the current module, it won't be found by your probe bundle. In older versions of <a href="http://team.ops4j.org/wiki/display/paxexam/Pax+Exam">Pax Exam</a> you could pretty easily modify the contents of the probe and include additional classes from any module visible to your project. In the 2.x series it's more complicated.<br />
<br />
The easiest way to go is to make sure you install the bundle that contains the superclass in the target container. Here is an example:<br />
<script src="https://gist.github.com/2005782.js">
</script>
<br />
<b>How can I configure the jre.properties of my test?</b><br />
That's something really common when you want, for example, to test <a href="http://cxf.apache.org/">CXF</a> in plain <a href="http://karaf.apache.org/">Karaf</a> rather than ServiceMix/FuseESB etc. <a href="https://github.com/openengsb/labs-paxexam-karaf">Pax Exam Karaf</a> allows you to replace any configuration file with one of your choice.<br />
<script src="https://gist.github.com/2005818.js">
</script>
<br />
<br />
That's it! I hope you found this useful! Happy testing!Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com25tag:blogger.com,1999:blog-1786615818482917324.post-41697267691338511022012-02-28T12:25:00.000-08:002012-03-09T01:37:14.280-08:00CamelOne 2012Now that the Oscar nomination ceremony is over, the next big event is<b> <a href="http://www.camelone.com/">CamelOne</a></b>. So save the date, <b>15-18 May</b> in Boston.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0OjTZfq5uxAceA_BPbcRDqCC7RRqZHj2gPhv4l6LzljiM3oC8BdCPQ7lqir75Fv7PF1zb-_d3BWkssDuA5L7sqtFfcEdnj6204-UlVHbadsUEmBZLAYQrPJV2NptdeUK73wI40XQ_cqU/s1600/camelone_sig_v1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="90" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0OjTZfq5uxAceA_BPbcRDqCC7RRqZHj2gPhv4l6LzljiM3oC8BdCPQ7lqir75Fv7PF1zb-_d3BWkssDuA5L7sqtFfcEdnj6204-UlVHbadsUEmBZLAYQrPJV2NptdeUK73wI40XQ_cqU/s320/camelone_sig_v1.jpg" width="320" /></a></div><br />
<b>Learn about Camel</b><br />
Professionals using open source integration and messaging have a great opportunity to learn more about Camel. Besides the presentations and the opportunity to meet & talk with fellow Camel riders, there will also be training sessions for <b>ServiceMix</b> & <b>ActiveMQ</b> with <b>Camel</b>.<br />
<br />
<b>Speak about Camel</b><br />
If you are already using it and are interested in speaking about your journeys on the back of the camel, you can send your speaking proposals to <span class="s1"><a href="mailto:camelone@fusesource.com">camelone@fusesource.com</a>.</span><br />
<span class="s1"><br />
</span><br />
<div style="text-align: center;"><span class="s1"><b>The camel is waiting for you ...</b></span></div><span class="s1"><b><br />
</b></span><br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRcsMJV5N4c0YbptxN8LrT4j_WIWdN5OvSREQLkmhDhyphenhyphenUvexJ2bC7FmjJ4qz4TCrpzJfEB5Yc-fD7EJqirhpNshiQuoIUemflnEugd7MlreRoZ2s3Ncxi9qGcwuZRQ1UYf3aDuijIp6zs/s1600/animated_camel_v1.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRcsMJV5N4c0YbptxN8LrT4j_WIWdN5OvSREQLkmhDhyphenhyphenUvexJ2bC7FmjJ4qz4TCrpzJfEB5Yc-fD7EJqirhpNshiQuoIUemflnEugd7MlreRoZ2s3Ncxi9qGcwuZRQ1UYf3aDuijIp6zs/s320/animated_camel_v1.gif" width="251" /></a></div><span class="s1"><b><br />
</b></span><br />
<div style="text-align: center;">Save the date: <b>15-18 May</b> at Boston. See you there !!!</div>Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com0tag:blogger.com,1999:blog-1786615818482917324.post-21350536878016291742012-01-24T11:56:00.000-08:002012-03-05T12:01:06.488-08:00Advanced integration testing with Pax Exam Karaf<b>Prologue</b><br />
From my first steps in programming, I realized the need of integration testing.<br />
<br />
I remember setting up a macro recorder in order to automate the tests of my first Swing applications. The result was a disaster, but I think the idea was in the right direction even though the means was wrong.<br />
<br />
Since then I have experimented with a lot of unit testing tools, which I think are awesome, but I always felt that they were not enough for testing my "actual product".<br />
<br />
For the last couple of years I have been working on integration projects and also with OSGi, where the challenge of testing the "actual product" is even bigger. The scenarios that need to be tested sometimes involve more than one JVM, and of course testing OSGi is always a challenge.<br />
<br />
In this post I am going to blog about <a href="https://github.com/openengsb/labs-paxexam-karaf">Pax Exam Karaf</a>, which is an integration testing framework for <a href="http://karaf.apache.org/">Apache Karaf </a>and also the answer to all my prayers. Kudos to <a href="http://www.ohloh.net/accounts/anpieber">Andreas Pieber</a> for creating it.<br />
<br />
<b>Tools for integration tests in OSGi</b><br />
I have been using <a href="http://team.ops4j.org/wiki/display/paxexam/Pax+Exam">Pax Exam</a> for a while now for integration tests in a lot of projects I have been working on <i>(e.g. <a href="http://camel.apache.org/">Apache Camel)</a>. </i>Even though it's a great tool, it does not provide the "feel" that the tests run inside <a href="http://karaf.apache.org/">Apache Karaf</a>, but rather inside the underlying OSGi container.<br />
<br />
A tool which is not a testing framework itself, but is frequently used for OSGi testing, is <a href="http://code.google.com/p/pojosr/">PojoSR</a>. It allows you to use the OSGi service layer without using the module layer <i>(if you prefer, it's an OSGi service registry that runs without an OSGi container)</i>, so it is sometimes used for testing OSGi services etc. A great example is the work of <a href="http://gnodet.blogspot.com/">Guillaume Nodet</a> <i>(a colleague of mine at </i><b style="font-style: italic;"><a href="http://fusesource.com/">FuseSource</a> </b><i>and yes, I know he needs no introduction) </i>for testing <a href="http://camel.apache.org/">Apache Camel</a> routes that use the OSGi blueprint, based on <a href="http://code.google.com/p/pojosr/">PojoSR</a>. You can read more about it in <a href="http://gnodet.blogspot.com/2012/01/unit-testing-camel-blueprint-routes.html">Guillaume's post</a>. Very powerful, yet it is not inside Karaf <i>(it tests the routes in blueprint, but it doesn't test OSGi stuff)</i>.<br />
<br />
<a href="http://www.jboss.org/arquillian">Arquillian</a> is another effort that, among other things, allows you to run OSGi integration tests, but I haven't used it myself.<br />
<b><br />
</b><br />
<b>Pax Exam Karaf</b><br />
Each of the tools mentioned in the previous section is great, and they all have their uses. However, none of them gives me the "feel" that the tests really run inside <a href="http://karaf.apache.org/">Apache Karaf</a>.<br />
<br />
<a href="https://github.com/openengsb/labs-paxexam-karaf">Pax Exam Karaf</a> is a <a href="http://team.ops4j.org/wiki/display/paxexam/Pax+Exam">Pax Exam</a> based framework which allows you to run your tests inside <a href="http://karaf.apache.org/">Apache Karaf</a>. To be more accurate, it allows you to run your tests inside <b>any</b> Karaf-based container, so it can also be <a href="http://servicemix.apache.org/">Apache ServiceMix</a>, <a href="http://fusesource.com/products/enterprise-servicemix/">FuseESB</a>, or even a custom runtime that you have created yourself on top of <a href="http://karaf.apache.org/">Apache Karaf</a>.<br />
<br />
As of Karaf 3.0.0 (it's not yet released) it will be shipped as part of the Karaf tooling. Till then you can enjoy it at the <a href="https://github.com/openengsb/labs-paxexam-karaf/">paxexam-karaf github repository</a>.<br />
<b><br />
</b><br />
The benefit it offers over traditional approaches is that it allows you to use all the Karaf goodness inside your tests:<br />
<ul>
<li>Features Concept</li>
<li>Deployers</li>
<li>Admin Service</li>
<li>Dynamic configuration</li>
<li>and more</li>
</ul>
<div>
<b>Setting it up</b><br />
Detailed installation instructions can be found at the <a href="https://github.com/openengsb/labs-paxexam-karaf/wiki">paxexam-karaf wiki</a>.<br />
<br />
The basic idea is that your unit test is added inside a bundle called the probe, and this bundle is deployed inside the container. The container can be customized with custom configuration, custom feature installations etc., so that it fits the "actual" environment.<br />
<br />
<b>Starting a custom container</b><br />
To set up the container of your choice, you can just modify the configuration method. Here is an example that uses <a href="http://servicemix.apache.org/">Apache ServiceMix</a> as the target container:<br />
<br />
<script src="https://gist.github.com/1979041.js">
</script>
<br />
<b>Using the Karaf shell</b><br />
Since one of Karaf's awesome features is its shell, being able to use shell commands inside an integration test is really important.<br />
<b><br />
</b><br />
The first thing that needs to be addressed in order to use the shell inside the probe is making sure that the probe imports all the required packages. By default, the probe uses a dynamic import on all packages; however, this will exclude all packages exported with provisional status. To understand what this means, you can read more about the <a href="http://felix.apache.org/site/provisional-osgi-api-policy.html">provisional osgi api policy</a>. In our case we need to customize our probe and allow it to import such packages. This can be done by adding the following method to our test:<br />
<br />
<script src="https://gist.github.com/1979047.js">
</script>
<br />
Now, in order to execute commands, you need to get the CommandProcessor from the OSGi service registry and use it to create a session. Below is a method that allows you to execute multiple commands under the same session (this is useful when using stateful commands like config). Moreover, this method allows you to set a timeout on the command execution.<br />
<br />
<br />
<script src="https://gist.github.com/1979051.js"> </script>
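The timeout part of that helper boils down to a standard pattern: submit the session call to an executor and bound the wait on the Future. A framework-free sketch of just that pattern (the Callable here is a stand-in for the real shell-session invocation):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class CommandTimeoutDemo {
    // Runs a "command" and gives up after the timeout, like the
    // helper that guards shell-command execution inside the test.
    static String executeWithTimeout(Callable<String> command, long timeoutMs)
            throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<String> result = executor.submit(command);
            try {
                return result.get(timeoutMs, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                result.cancel(true); // interrupt the stuck command
                return "COMMAND TIMED OUT";
            }
        } finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // A fast command completes normally...
        System.out.println(executeWithTimeout(() -> "ok", 1000));
        // ...while a hanging one is abandoned after the timeout.
        System.out.println(executeWithTimeout(() -> {
            Thread.sleep(5000);
            return "never";
        }, 200));
    }
}
```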
<br />
<b>Distributed integration tests in Karaf</b></div>
<div>
Working full time on open source integration, my needs often span the boundaries of a single JVM. A simple example is having one machine send http requests to another, having multiple machines exchange messages through a message broker, or even having a grid. The question is how you test these cases automatically.<br />
<br />
Karaf provides the admin service, which allows you to create and start instances of Karaf that run inside a separate JVM. Using it from inside the integration test, you are able to start multiple instances of Karaf and deploy to each instance the features/bundles that are required for your testing scenario.<br />
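For orientation, the same admin service is what you drive interactively with the admin:* shell commands in Karaf 2.x; the instance name below is purely illustrative:

```
karaf@root> admin:create broker
karaf@root> admin:start broker
karaf@root> admin:list
karaf@root> admin:connect broker
```

The integration test simply does programmatically what these commands do by hand.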
<br />
<u>Test Scenario:</u><br />
Let's assume that you want to test a scenario where you have one JVM that acts as a message broker, one JVM that acts as a Camel message producer, and one JVM that acts as a Camel message consumer. Also let's assume that you don't run vanilla Karaf, but <a href="http://fusesource.com/products/enterprise-servicemix/">FuseESB</a> (<i>an enterprise version of ServiceMix, powered by Karaf). </i>You can use the admin service from inside your integration test just like this:<br />
<br />
<script src="https://gist.github.com/1979056.js"> </script>
</div>
<br />
You might wonder now how you are supposed to assert that the consumer actually consumed the messages it was supposed to. There are a lot of things that can be done here:<br />
<br />
<ul>
<li>Connect to the consumer node and use camel:route-info</li>
<li>Connect to the consumer and check the logs <i>(if it's supposed to log something)</i></li>
<li>Get the JMX stats from the broker</li>
</ul>
<br />
You can literally do whatever you like.<br />
<br />
So here's how it would look if we go with the Camel commands:<br />
<br />
<script src="https://gist.github.com/1979062.js"> </script>
<br />
<b>Pointers to working examples</b><br />
I usually provide sources for the things I blog about. In this case I prefer to point you to existing OSS projects that I have been working on, which use paxexam-karaf for integration testing.<br />
<br />
<b><a href="https://github.com/fusesource/fuse/tree/master/fabric/fabric-itests/fabric-pax-exam">Fuse Fabric integration tests</a>: </b>Contains tests that check the installation of features, the availability of services, distributed OSGi services, and even the creation and use of agents in the cloud.<br />
<br />
<b><a href="https://github.com/jclouds/jclouds-karaf/tree/master/itests">Jclouds-Karaf integration tests</a>: </b>Contains simple tests that verify that features are installed correctly.<br />
<br />
<b><a href="https://svn.apache.org/repos/asf/karaf/cellar/branches/cellar-2.2.x/itests/">Apache Karaf Cellar integration tests</a>: </b>Contains distributed tests, with quite a few Hazelcast examples.
<script src="https://raw.github.com/moski/gist-Blogger/master/public/gistLoader.js" type="text/javascript"></script>Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com24tag:blogger.com,1999:blog-1786615818482917324.post-78380449213402125792012-01-23T11:04:00.000-08:002012-01-24T11:57:28.148-08:00Installing services in the cloud with jclouds<b>Prologue</b><br />
I spent the whole day today stuck on an issue with <a href="http://www.jclouds.org/">jclouds</a> and I thought it would be a good idea to blog about it so that others don't have to spend so many hours on it.<br />
<br />
<b>The issue</b><br />
My goal was to write an integration test that would start a node on Amazon EC2 and install a service to be used by the integration test. So I created a script that performed a curl to download the service tarball, unpacked the service and ran it. So far so good. The problem I encountered was that my invocations of the method <b><i>runScriptOnNode</i></b> <i>(a <a href="http://www.jclouds.org/">jclouds</a> method for invoking scripts on remote nodes) </i>timed out after waiting for 10 minutes, even though the script only needed 1 minute and was executed successfully.<br />
<br />
<b>Diving into jclouds run script methods</b><br />
After spending some time making sure that no network issues, like firewalls and such, were involved, I decided to examine in depth how the <b><i>runScriptOnNode</i> </b>method works.<br />
<br />
<a href="http://www.jclouds.org/">Jclouds</a> uses an initialization script, which installs the target script on the node and invokes it. The initialization script keeps track of the target script's pid and is able to tell whether the target script has completed its execution. So <b>runScriptOnNode </b>will block for as long as the initialization script reports that the target script is running.<br />
<br />
<b>Where's the catch?</b><br />
The initialization script keeps track of the target script's PID by executing <b><i>findPid</i></b>, which is ps and grep using a pattern that matches the execution path. That's not a problem by itself, but if you install your service and run it inside the same folder, then the initialization script will get confused and won't be able to tell when the target script finished its execution. As a result, the <b><i>runScriptOnNode</i> </b>method will block till it times out.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKJatavCz9vR3pj30m6HcZOGJ_509FRVtwWUDpaRQaAgpZcOlSHbcBWEr6C7bdi64Q18kRsIEFP5PKCjrMwxB1Wt_Th14SN_5gbJcbep28mkZulF0Ckg7umFpN5GT0I5AWzIysNT2wgso/s1600/runscript-service.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKJatavCz9vR3pj30m6HcZOGJ_509FRVtwWUDpaRQaAgpZcOlSHbcBWEr6C7bdi64Q18kRsIEFP5PKCjrMwxB1Wt_Th14SN_5gbJcbep28mkZulF0Ckg7umFpN5GT0I5AWzIysNT2wgso/s400/runscript-service.jpg" width="400" /></a></div>The figure above displays a setup that can have problems. In this setup the init script queries the status of the target script by performing a ps and using <i>jc1234 </i>to filter processes. However, if a new process is started under that folder <i>(by the target script)</i>, say in the folder service, then the init script will not be able to properly detect when the target script finished. That's because <b><i>findPid</i></b> will now return the pid of the service.<br />
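The ambiguity is easy to reproduce outside the cloud: a findPid-style filter matches any process whose command line contains the execution path, including a service started in that same folder. A stdlib-only sketch (paths and command lines are made up for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;

public class FindPidDemo {
    public static void main(String[] args) {
        // Fake 'ps' output: pid + command line.
        List<String> ps = List.of(
            "101 /bin/bash /tmp/jc1234/init-script",      // the jclouds target script
            "102 java -jar /tmp/jc1234/service/app.jar",  // a service started in the SAME folder
            "103 sshd: ubuntu@pts/0"                      // unrelated process
        );

        // findPid-style filter: grep for the execution path.
        List<String> matches = ps.stream()
            .filter(line -> line.contains("/tmp/jc1234"))
            .collect(Collectors.toList());

        // Both the script and the service match, so even after the
        // script (pid 101) exits, the filter still finds a "running" pid
        // and the init script keeps reporting the script as running.
        System.out.println(matches.size());
    }
}
```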
<br />
<br />
<b>Lessons learned</b><br />
Never start a service inside the folder where the target script is executed; make sure you unpack and run your service from inside another folder. Even better, use a framework <i>(e.g. <a href="http://whirr.apache.org/">Apache Whirr</a>) </i>for installing mainstream services, and only put your fingers on it if you really have to.<br />
<br />
<br />
<b><br />
</b>Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com0tag:blogger.com,1999:blog-1786615818482917324.post-66009989140857917912011-11-05T09:49:00.000-07:002012-03-05T08:59:16.752-08:00Cloud Notifications with Apache Camel<b>Prologue</b><br />
Yesterday I was having a talk with <a href="http://www.linkedin.com/profile/view?id=17169493">Adrian Cole</a> and during our talk he had an unpleasant surprise: he found out that he had forgotten a node running on his Amazon EC2 for a couple of days and that it would cost him several bucks.<br />
<br />
This morning I was thinking about his problem and about ways to help you avoid situations like this.<br />
<br />
My idea was to build a simple project that would notify you of your running nodes in the cloud via email at a given interval.<br />
<br />
This post is about building such a solution with <a href="http://camel.apache.org/">Apache Camel</a>, which helps you integrate very easily with both your cloud provider and, of course, your email. The full story and the sources of this project can be found below.<br />
<br />
<b>Working with recurring tasks</b><br />
<b><a href="http://camel.apache.org/">Apache Camel</a> </b>provides a quartz component, which allows you to schedule a task with a given interval.<br />
It is really simple to use. In our case a one-hour interval sounds great. Also, we want an unlimited number of executions <i>(repeatCount=-1)</i>, so it could look something like this:<br />
<script src="https://gist.github.com/1979232.js"> </script>
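The schedule boils down to a single quartz endpoint URI; the timer name below is illustrative, and the interval is one hour expressed in milliseconds:

```
quartz://cloudWatchTimer?trigger.repeatCount=-1&trigger.repeatInterval=3600000
```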
<br />
<b>Using Camel to integrate to your Cloud provider</b><br />
Camel 2.9.0 will provide a <a href="http://www.jclouds.org/"><b>jclouds</b></a> component, which will allow you to use jclouds to integrate with most cloud key/value engines & compute services. I am going to use this component to connect to my cloud provider <i>(I will use my EC2 account, but it would work with most cloud providers)</i>.<br />
<br />
My first task is to create a jclouds compute service and pass it to the Camel jclouds component. This will allow me to use jclouds inside my Camel routes.<br />
<br />
<script src="https://gist.github.com/1979244.js"> </script>
<br />
To avoid providing my real credentials, I've used property placeholders and kept the real credentials in a properties file.<br />
<br />
Now that the component is configured, I am ready to define my route. The route will use the Camel jclouds compute producer to send a request to my cloud provider and ask how many nodes are currently running. This query can be further enhanced with other parameters such as group <i>(get me all the running nodes of group X) </i>or even image <i>(get me all the running nodes of group X that use image Y). </i><br />
<br />
All I have to do is add the following element to my route.<br />
<br />
<br />
<script src="https://gist.github.com/1979249.js"> </script>
<br />
The out message will contain in its body a set with all the metadata of the running nodes.<br />
<br />
<b>Filtering the results</b><br />
I don't want to fire an email every time I ask my cloud provider about the running nodes, but only when there is actually a running node. The best way to do so is to use the <a href="http://camel.apache.org/message-filter.html">Message Filter</a> EIP pattern. I am going to use it to filter out all messages whose body contains an empty set.<br />
<br />
<script src="https://gist.github.com/1979256.js"> </script>
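Stripped of the Camel wiring, the filter is just a predicate on the message body: drop the exchange when the set of node metadata is empty. A plain-Java sketch of that logic (the fake poll data is made up):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class EmptySetFilterDemo {
    public static void main(String[] args) {
        // Each "message body" is the set of running-node metadata
        // returned by the jclouds endpoint on one poll.
        List<Set<String>> polls = List.of(
            Set.of(),                         // nothing running: no mail
            Set.of("node-1a2b", "node-3c4d"), // forgotten nodes: mail!
            Set.of()                          // nothing running: no mail
        );

        // The Message Filter EIP: only non-empty bodies pass through
        // to the mail endpoint.
        List<Set<String>> toNotify = polls.stream()
            .filter(body -> !body.isEmpty())
            .collect(Collectors.toList());

        System.out.println(toNotify.size());
    }
}
```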
<br />
<b>Sending the email</b><br />
This is the easiest part, since the only things I need to specify are the sender, the recipient & the subject of the email. I can do that simply by adding headers to the message. Finally, I need to specify the smtp server and the credentials required for using it.<br />
<br />
<script src="https://gist.github.com/1979265.js"> </script>
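For orientation, the mail side reduces to a single endpoint URI of Camel's mail component, with the sender, recipient and subject travelling as message headers; host and credentials below are placeholders, not real values:

```
smtp://smtp.example.com?username=someone@example.com&password=secret
```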
<br />
Now all we need to do is set the destination endpoint inside the message filter.<br />
<br />
<script src="https://gist.github.com/1979271.js"> </script><br />
<b>Running the example</b><br />
The full source of this example can be found at <a href="https://github.com/iocanel/blog">github</a>. The project is called cloud notifier.<br />
You will have to edit the property file camel.properties in order to add the credentials for your cloud provider and email account.<br />
In order to run it all you need to do is type <i>mvn camel:run</i>.<br />
<br />
If you have a couple of nodes running, the result will look like this.<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixB-ykZwTUK7Dl_JJ3Ah3HD-lwtS7GyONl-MrPtMfWODsbz-bszXkLLzSgAHd_R1HW-pCdA93aBIMF5_MH554342R4lR0uwRSw7apeszogqwtH1p7lSr5zyp2N0ycgHk1nhbqEvKY5LEE/s1600/mail.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="29" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixB-ykZwTUK7Dl_JJ3Ah3HD-lwtS7GyONl-MrPtMfWODsbz-bszXkLLzSgAHd_R1HW-pCdA93aBIMF5_MH554342R4lR0uwRSw7apeszogqwtH1p7lSr5zyp2N0ycgHk1nhbqEvKY5LEE/s320/mail.png" width="320" /></a></div><br />
<br />
<br />
<br />
<br />
Enjoy!<br />
<br />
<b>Conclusions</b><br />
The <a href="http://camel.apache.org/jclouds.html"><b>camel-jclouds</b></a> component is really new (it will be part of the 2.9.0 release), yet it already provides some really cool features. It provides the ability to create/destroy nodes or run scripts on them from Camel routes. It also leverages the jclouds blobstore API in order to integrate with cloud provider key/value engines <i>(e.g. Amazon S3)</i>.<br />
Can you imagine executing commands in the cloud using your mobile phone and an SMS message? <i>(Camel also supports protocols for exchanging SMS messages)</i>.<br />
<br />
I hope you find all these really useful.<br />
<br />
Edit: <i>While I was writing this simple app, to my surprise I found out a forgotten instance myself!</i>Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com9tag:blogger.com,1999:blog-1786615818482917324.post-61241099332770990712011-10-10T09:15:00.000-07:002011-10-22T03:46:59.488-07:00My JavaOne talk about Cellar<b>Prologue</b><br />
I am currently returning home from JavaOne 2011. I am at the airport of Munich waiting for my connecting flight to Athens. Once again my flight is delayed, and it's a great chance to blog a bit about JavaOne.<br />
<br />
<b>Apache Karaf Cellar at JavaOne 2011</b><br />
I had the chance to give a BOF about Karaf Cellar last Tuesday night. Even though the presentation was really late <i>(20:30) </i>and there were a lot of parties going on at that time <i>(actually I was at the JBoss party right before my presentation)</i>, quite a few people attended. The best part was that most of the people who attended were really eager to hear about Karaf & Cellar, and I received a lot of great <i>"straight to the point"</i> questions. So I really enjoyed the talk and had a lot of fun.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjkdpqE30S_S0SZxYqEEvE0oHVNafIHUrvxYMTeBVL_MUxE0uZj51tqPcdBeD4o1BTfIW1l5gx2qv-N4Fs0RPLBaPP7Nk5JQRpW0hZYDx6jUGb3B2Mh_fssXtrpzn8VtUFJ6ixM-llSnb4/s1600/IMG_20111004_201046.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjkdpqE30S_S0SZxYqEEvE0oHVNafIHUrvxYMTeBVL_MUxE0uZj51tqPcdBeD4o1BTfIW1l5gx2qv-N4Fs0RPLBaPP7Nk5JQRpW0hZYDx6jUGb3B2Mh_fssXtrpzn8VtUFJ6ixM-llSnb4/s320/IMG_20111004_201046.jpg" width="240" /></a></div><br />
I was worried that I would be really nervous, since I am not that used to public speaking, but I think the drinks I had at the JBoss party did the trick.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1U9vGhR70-OKi7eQBJ4FrVOxkGDEY2f5DWciZJPBhpJUva5q2LQ_oVITn8ppS6-H2jtw524NvVz0fO3hoAIAvr2fUwCPtT3Ap_sL2wr-XdvCqNi-GqRXTg9xFOuJBcEyw8ke5N1AVa38/s1600/P1060706.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="213" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1U9vGhR70-OKi7eQBJ4FrVOxkGDEY2f5DWciZJPBhpJUva5q2LQ_oVITn8ppS6-H2jtw524NvVz0fO3hoAIAvr2fUwCPtT3Ap_sL2wr-XdvCqNi-GqRXTg9xFOuJBcEyw8ke5N1AVa38/s320/P1060706.JPG" width="320" /></a></div><br />
<br />
<b>After the talk</b><br />
Right after the talk I had the chance to have a few more drinks with <a href="http://www.linkedin.com/profile/view?id=63937546">Marios Trivizas</a>, Chris Soulios, <a href="http://www.linkedin.com/profile/view?id=17169493">Adrian Cole</a>, <a href="http://www.linkedin.com/profile/view?id=76859833">Chas Emerick</a> & Toni Batchelli.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIHah2WTE4JsrmNsVkWw2vJAAjkS-U1wLOJakwG6ScY9KbmR8ghaPcMeFs2LBds1A_rq8Yl5pstEmK_WEa-X63Ac4pbQmleZs3NtMNobczgY8f4v4-N2_JEdO6QlJ74ZcdIeh1K_uPM0Q/s1600/415514034.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIHah2WTE4JsrmNsVkWw2vJAAjkS-U1wLOJakwG6ScY9KbmR8ghaPcMeFs2LBds1A_rq8Yl5pstEmK_WEa-X63Ac4pbQmleZs3NtMNobczgY8f4v4-N2_JEdO6QlJ74ZcdIeh1K_uPM0Q/s320/415514034.jpg" width="320" /></a></div><br />
<br />
<b>The FuseSource Booth</b><br />
Apart from speaking and attending other sessions at JavaOne, I also had the chance to spend a lot of time at the booth of <a href="http://fusesource.com/">FuseSource</a>. It was a great chance to meet people enjoying our services and also to talk with people interested in learning more about <a href="http://fusesource.com/products/">FuseSource Products</a> & <a href="http://fusesource.com/community/fuse-customers/">FuseSource success stories</a>.<br />
<br />
<b>"There is no place like home"</b><br />
Well, actually there is, and it's called San Francisco; but now I am back home & ready to dive into open source. I hope I'll have the chance to be there next year.Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com2tag:blogger.com,1999:blog-1786615818482917324.post-13780358108775215512011-06-05T12:14:00.000-07:002011-06-05T12:15:31.512-07:00JClouds & OSGi<b>OSGi in the clouds</b><br />
For the last couple of years, OSGi and Cloud Computing have been two buzzwords that you don't often see go hand in hand. <a href="http://code.google.com/p/jclouds/">JClouds</a> is going to change that, since the 1.0.0 release is OSGi ready and it also provides direct integration with <a href="http://karaf.apache.org/">Apache Karaf</a>.<br />
<br />
<b>JClouds in the Karaf</b><br />
For the last couple of weeks I have been working with the jclouds team in order to improve the <u><b>OSGi</b><i>fication</i></u> of jclouds and also to provide integration with <a href="http://karaf.apache.org/">Apache Karaf</a>.<br />
<br />
I will not go into much detail in this post, since there is a <a href="http://code.google.com/p/jclouds/wiki/Karaf?ts=1307270442&updated=Karaf">wiki</a>. I will, however, add a small demo that shows how easy it is.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/SIvSaGEKrkM?feature=player_embedded' frameborder='0'></iframe></div><br />
<br />
<b>A Cloud, a Karaf and a Camel</b><br />
The fact that JClouds is now OSGi ready opens up new horizons, and <a href="http://camel.apache.org/">Apache Camel</a> is one of them. I have been working on a Camel component that leverages the JClouds blobstore abstraction in order to provide blobstore consumers and producers via Apache Camel.<br />
<br />
Hopefully, abstractions for Queues and Tables will follow...<br />
<br />
You can find it and give it a try on my <a href="https://github.com/iocanel/camel-jclouds">github</a> repository.<br />
<br />
<b><br />
</b><br />
<b><br />
</b>Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com2tag:blogger.com,1999:blog-1786615818482917324.post-81031058649058705652011-05-07T01:47:00.000-07:002011-05-07T02:03:05.656-07:00Apache Karaf Cellar<b>Prologue</b><br />
In a previous blog <a href="http://iocanel.blogspot.com/2011/03/karaf-clustering-using-hazelcast.html">post</a>, I designed and implemented Cellar <i>(a small clustering engine for <a href="http://karaf.apache.org/">Apache Karaf</a> powered by <a href="http://www.hazelcast.com/">Hazelcast</a>)</i>. Since then Cellar has grown in features and was eventually accepted into Karaf as a subproject.<br />
<br />
This post will provide a brief description of Cellar as it is today.<br />
<br />
<b>Cellar Overview</b><br />
Cellar is designed to provide Karaf with the following high-level features:<br />
<br />
<ul><li><b>Discovery</b></li>
<ul><ul><li><i>Multicast </i></li>
<li><i>Unicast</i></li>
</ul></ul><li><b>Cluster Group Management</b></li>
<ul><ul><li><i>Node Grouping</i></li>
</ul></ul><li><b>Distributed Configuration Admin</b></li>
<ul><ul><li><i>per Group distributed configuration data</i></li>
<li><i>event driven distributed / local bridge</i></li>
</ul></ul><li><b>Distributed Features Service</b><ul><ul><li><i>per Group distributed features/repos info</i></li>
<li><i>event driven distributed / local bridge</i></li>
</ul></ul></li>
<li><b>Provisioning Tools</b></li>
<ul><ul><li><i>Shell commands for cluster provisioning</i></li>
</ul></ul></ul><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">The core concept behind Cellar is that each node can be part of one or more groups. Each group provides the node with distributed memory for keeping data <i>(e.g. configuration, features information, other)</i> and a topic which is used to exchange events with the rest of the group members.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4pSNb2Hc0wFSjlvQJaXpg_wvWIytfJQC-qP8S49AZM-Q-xvzbwXYGdXo8QM_6sR_HWZuWdTM7AkeVnjAdGLYjFKzhbj95-uy1dgAwb_Lm46hxR0j7lLwc2MAfSGiath_pvAN9e0LD_Ts/s1600/architecture.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="155" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4pSNb2Hc0wFSjlvQJaXpg_wvWIytfJQC-qP8S49AZM-Q-xvzbwXYGdXo8QM_6sR_HWZuWdTM7AkeVnjAdGLYjFKzhbj95-uy1dgAwb_Lm46hxR0j7lLwc2MAfSGiath_pvAN9e0LD_Ts/s320/architecture.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br />
</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">Each group comes with a configuration which defines which events are to be broadcast and which are not. Whenever a local change occurs on a node, the node reads the setup information of all the groups it belongs to and broadcasts the event to the groups that whitelist that specific event.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br />
</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">The broadcast operation happens via the distributed topic provided by the group. For groups where broadcast is supported, the distributed configuration data is also updated, so that nodes joining in the future can pick up the change.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br />
</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><b>Supported Events</b></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">There are 3 types of events:</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"></div><ul><li>Configuration change event</li>
<li>Features repository added/removed event.</li>
<li>Features installed/uninstalled event.</li>
</ul><div>For each of the event types above, a group may be configured to enable synchronization and to provide a whitelist/blacklist of specific event IDs.</div><div><br />
</div><div><u>Example</u></div><div>The default group is configured to allow synchronization of configuration. This means that whenever a change occurs via the config admin to a specific PID, the change is written to the distributed memory of the default group and is also broadcast to all other default group members using the topic.</div><div><br />
</div><div>This happens for all PIDs except org.apache.karaf.cellar.node, which is blacklisted and will never be written to or read from the distributed memory, nor broadcast via the topic.</div><div><br />
</div><div>The user can add or remove any PID to or from the whitelist/blacklist.</div><div><br />
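As a sketch of what such a setup could look like, the per-group filtering might be expressed in a Cellar configuration file along these lines. The property keys below are indicative of the idea rather than guaranteed to match the exact keys shipped with Cellar:

```
# Hypothetical sketch of per-group event filtering for the default group.
# Check the Cellar distribution for the actual property names.
default.config.whitelist.inbound  = *
default.config.whitelist.outbound = *
default.config.blacklist.inbound  = org.apache.karaf.cellar.node
default.config.blacklist.outbound = org.apache.karaf.cellar.node
```

The point of the inbound/outbound split is that a node can be allowed to publish a change to the group without necessarily accepting the same kind of change from other nodes, or vice versa.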
</div><div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><b>Syncing vs Provisioning</b></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">Syncing (changing something on one node and broadcasting the event to all other nodes of the group) is one way of managing the Cellar cluster, but it's not the only one.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br />
</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">Cellar also provides a number of provisioning capabilities. It provides tools <i>(mostly via the command line)</i> which allow the user to build a detailed profile (configuration and features) for each group.</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br />
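For instance, a provisioning session from the Karaf shell might look roughly like the following. The command names are indicative of the style of the cluster:* commands; the exact syntax has varied between Cellar releases:

```
# Illustrative Karaf console session; command names/arguments may differ
# from the exact syntax of your Cellar release.
karaf@root> cluster:group-create web
karaf@root> cluster:group-join web node1:5701
karaf@root> cluster:features-install web camel-jclouds
karaf@root> cluster:config-propset web my.pid myKey myValue
```

Because the profile lives in the group's distributed memory, a node joining the <i>web</i> group later picks up the same features and configuration without any manual steps on that node.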
</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><b>Cellar in action</b></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">To see everything described so far in action, you can have a look at the following 5-minute Cellar demo:</div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br />
</div><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/HfNrTp371LA?feature=player_embedded' frameborder='0'></iframe></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><b><br />
</b></div><br />
<u>Note:</u> The video was shot before Cellar's adoption by Karaf, so the feature URL and configuration PIDs are out of date, but the core functionality is the same.<br />
<br />
<br />
<div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">I hope you enjoy it!</div></div><div><br />
</div><div><br />
</div><div><br />
</div>Anonymoushttp://www.blogger.com/profile/12809065648292689334noreply@blogger.com10