Running JBoss 7 on Windows Azure — Part II

Continuing from where I left off in my previous post, I’m going to explain how the Announcement service works and why we chose that approach.

The way JBoss and mod_proxy work is that every time something changes in the topology, whether a proxy or a JBoss node is added or removed, the proxy list has to be updated and both the nodes and the proxies have to be aware of each other’s existence.

Mod_proxy uses multicast to announce itself to the cluster, but as multicast is not supported on Windows Azure, we created our own service that runs on both the proxy and the node. Each time a proxy or a node is added or removed, the service notifies the rest of the instances that something changed in the topology and that they should update their lists with the new record.
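
To give an idea of what that service exposes, here is a minimal sketch of a possible contract; the interface and operation names are illustrative, not the actual contract we run:

    using System.ServiceModel;

    // Illustrative contract for the Announcement service: every proxy and JBoss
    // node hosts it and calls it on its peers whenever the topology changes.
    // The interface and operation names are placeholders.
    [ServiceContract]
    public interface IAnnouncementService
    {
        // A new proxy or JBoss node announces itself to a peer.
        [OperationContract]
        void Register(string instanceId, string endpointAddress, bool isProxy);

        // An instance that is going away asks its peers to drop it from their lists.
        [OperationContract]
        void Deregister(string instanceId);

        // Liveness probe used by the periodic health checks.
        [OperationContract]
        bool Ping();
    }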

The service does not run under a dedicated WorkerRole; it’s part of the same deployment as the proxy and the JBoss node. It’s a WCF service hosted inside a Windows NT Service, listening on a dedicated port. That approach gives us greater flexibility, as we keep a clear separation of concerns between the services in the deployment and we don’t mix code and logic that belongs to the proxy with the Announcement service. Originally, the approach of using an NT Service raised some concerns about how the service would be installed on the machines and how we could keep a single code base for it, running in both scenarios.
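
For the hosting part, wrapping a WCF ServiceHost inside a Windows Service is straightforward; a minimal sketch, with placeholder type names, looks like this:

    using System.ServiceModel;
    using System.ServiceProcess;

    // Minimal Windows Service hosting the WCF Announcement service.
    // AnnouncementService is a placeholder for the actual service implementation.
    public class AnnouncementWindowsService : ServiceBase
    {
        private ServiceHost host;

        protected override void OnStart(string[] args)
        {
            // Endpoint addresses, bindings and certificates come from app.config.
            host = new ServiceHost(typeof(AnnouncementService));
            host.Open();
        }

        protected override void OnStop()
        {
            if (host != null)
            {
                host.Close();
                host = null;
            }
        }

        public static void Main()
        {
            ServiceBase.Run(new AnnouncementWindowsService());
        }
    }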

First of all, you should be aware that any port you open through your configuration is only available to the host process of the Role. That means that if the port is not explicitly opened again on the firewall, your service won’t be able to communicate, as the port is blocked. After we realized that, we fixed it by adding an extra line to our Startup Task, which was installing the service on the machines. The command looks like this (the rule name and port number below are placeholders):
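
    rem Open the Announcement service port on the Windows Firewall so traffic can
    rem reach it. Rule name and port number are placeholders for your own values.
    netsh advfirewall firewall add rule name="AnnouncementService" dir=in action=allow protocol=TCP localport=8090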

This line is part of the installer startup task.

To make our service even more robust and secure, we introduced a couple of NetworkTrafficRules that only allow communication between the proxies and the JBoss nodes.
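
Below is a trimmed-down sketch of what such a rule can look like in ServiceDefinition.csdef; the role and endpoint names (Proxy, JBossNode, AnnouncementService) are placeholders rather than our actual definition:

    <!-- Only traffic coming from the Proxy role may reach the AnnouncementService
         endpoint on the JBossNode role. Role and endpoint names are placeholders. -->
    <NetworkTrafficRules>
      <OnlyAllowTrafficTo>
        <Destinations>
          <RoleEndpoint endpointName="AnnouncementService" roleName="JBossNode" />
        </Destinations>
        <WhenSource matches="AnyRule">
          <FromRole roleName="Proxy" />
        </WhenSource>
      </OnlyAllowTrafficTo>
    </NetworkTrafficRules>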

Any kind of communication between the services is secured by certificate-based authentication and message-level encryption. The service is a vital component of our approach and we want it to be as secure as possible.
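
In WCF terms, that translates into a binding with message security and certificate credentials; a trimmed-down sketch of such a binding configuration (the binding name is a placeholder, and the actual certificates are configured in the service and client credential behaviors) could look like this:

    <!-- Message-level encryption with certificate authentication on both sides.
         The binding name is a placeholder. -->
    <bindings>
      <netTcpBinding>
        <binding name="secureAnnouncementBinding">
          <security mode="Message">
            <message clientCredentialType="Certificate" />
          </security>
        </binding>
      </netTcpBinding>
    </bindings>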

The service monitors a couple of things that also help us collect telemetry data from the JBoss nodes, but it’s also wired to a couple of RoleEnvironment events, namely Stopping and Changed. Every time a Stopping event fires, we send messages out to all of the other service instances to de-register that proxy from their lists because it’s going down. The service itself also checks at specific intervals whether the other nodes are alive; if a node doesn’t respond after 3 attempts, it is removed. The reason we do this is to handle possible crashes of the proxy as fast as possible. Lastly, every time a Changed event fires, we verify that everything is as it should be (nodes available etc.).
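
A condensed sketch of that wiring, with placeholder names for the client abstraction and the retry threshold, could look like this:

    using System.Collections.Generic;
    using Microsoft.WindowsAzure.ServiceRuntime;

    // Placeholder abstraction over the WCF proxies to the other instances.
    public interface IAnnouncementClient
    {
        IEnumerable<string> KnownPeers { get; }
        bool Ping(string peer);
        void Deregister(string peer);
        void DeregisterSelf();
        void VerifyTopology();
    }

    // Illustrative wiring only; names and thresholds are placeholders.
    public class TopologyWatcher
    {
        private const int MaxMissedPings = 3;
        private readonly Dictionary<string, int> missedPings = new Dictionary<string, int>();
        private readonly IAnnouncementClient announcer;

        public TopologyWatcher(IAnnouncementClient announcer)
        {
            this.announcer = announcer;

            // Ask the other instances to drop us from their lists before we go down.
            RoleEnvironment.Stopping += (sender, e) => announcer.DeregisterSelf();

            // Re-verify the topology whenever the environment changes.
            RoleEnvironment.Changed += (sender, e) => announcer.VerifyTopology();
        }

        // Called on a timer: ping every known peer and drop it after three missed replies.
        public void CheckPeers()
        {
            foreach (string peer in new List<string>(announcer.KnownPeers))
            {
                if (announcer.Ping(peer))
                {
                    missedPings[peer] = 0;
                    continue;
                }

                missedPings[peer] = missedPings.ContainsKey(peer) ? missedPings[peer] + 1 : 1;

                if (missedPings[peer] >= MaxMissedPings)
                {
                    announcer.Deregister(peer);
                    missedPings.Remove(peer);
                }
            }
        }
    }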

Next post in the series: the cluster setup.

PK

Running JBoss 7 on Windows Azure

I’m going to start a series of posts to explain how we made JBoss run on Windows Azure, not just in standalone mode but with full cluster support.

Let me start with one simple disclaimer: I’m NOT a Java guy, but I work with some very talented people under the same roof and within the same practice at Devoteam.

So everything started when I posted on our internal collaboration platform about the new Windows Azure Starter Kit for Java CTP being released. Since my very beginning at Devoteam I’ve been brainwashing people to try it out, so that post was the kick-off they needed. Knowing our customer base, which consists of highly mixed and hybrid environments with Java, .NET, SAP systems and so on, we wanted to create this:

“An integrated, enterprise-grade demo of a Windows Azure cloud setup, containing a .NET front-end, a JEE application in the cloud and a local mainframe instance; completely integrated.”

We used:

  • mod_cluster
  • jGroups
  • mod_proxy

The basic reason for choosing JBoss, besides the fact that our customers also use it, was that it’s open and free, and open means we can change whatever we need to make it work and fit into our environment.

One of those talented guys, Francois Hertay, modified the cluster discovery code (jGroups) that was already provided to make it work in a more robust way and, more importantly, to make it work with JBoss 7, because it currently works only with version 6. We still use the BLOB approach, but we changed it a little bit to make it more robust. In a typical enterprise scenario we have the proxies in front of the JBoss cluster, and mod_proxy is also the one providing the much-needed state consistency, as you might already know that Windows Azure uses a non-sticky load balancer. Also, given the dynamic nature of how a Windows Azure instance behaves, it was impossible to have static IPs for the proxies and the instances, so it was obvious we needed a little something for:

  • Discovering the proxies and announcing them to the JBoss cluster
  • Making sure a proxy is removed when it goes down, and that when a new node joins the cluster, it finds the proxies and registers them

We needed something different, as mod_proxy uses multicast to announce itself to the cluster and this is not supported in Windows Azure. The solution was to create our own home-brewed announcer service that takes care of this.

Our final setup was 1 WorkerRole for the Proxy and 1 WorkerRole for the JBoss node.

We chose this setup so we can independently scale either the proxies or the JBoss nodes, which is pretty typical in an enterprise environment.
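
For illustration, a trimmed-down ServiceDefinition for that layout could look like the sketch below; the role names, endpoint names, ports and VM sizes are placeholders rather than our actual definition:

    <!-- Illustrative two-role layout: one WorkerRole for the proxy, one for the
         JBoss node. Names, ports and VM sizes are placeholders. -->
    <ServiceDefinition name="JBossOnAzure"
                       xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WorkerRole name="Proxy" vmsize="Small">
        <Endpoints>
          <!-- Public HTTP entry point behind the Windows Azure load balancer. -->
          <InputEndpoint name="Http" protocol="tcp" port="80" />
          <!-- Port used by the Announcement service between instances. -->
          <InternalEndpoint name="AnnouncementService" protocol="tcp" port="8090" />
        </Endpoints>
      </WorkerRole>
      <WorkerRole name="JBossNode" vmsize="Small">
        <Endpoints>
          <InternalEndpoint name="AnnouncementService" protocol="tcp" port="8090" />
        </Endpoints>
      </WorkerRole>
    </ServiceDefinition>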

In the next post, I will explain how the Announcement service works and what can be improved in it in the future.

‘Till then,

PK.
