Part 1: Azure App Service API Apps in more detail and some FAQ

What is Azure App Service?

In case you missed it, on the 24th we announced Azure App Service. In simple words, it’s a common platform where you can develop Web, Mobile, Logic and API Apps. It’s based on proven technologies, most notably WebSites, which is now called Web Apps. The fact that there is a common infrastructure underneath gives us a lot of benefits, like being able to scale apps together or independently, share resources, seamlessly integrate the apps with each other and many more.


Building API Apps

One of the new types of Apps you can build are API Apps. You might wonder, couldn’t I just have a Web App and put an API there? The answer is yes, but API Apps have more to them than just hosting. When you choose to build an API App, some of the advantages are:

  1. We can auto-generate the code needed for that API to be consumed by your apps
  2. You can use your API Apps in Logic Apps
  3. The API App can be listed in the galleries (public or private, including the marketplace) (not currently available in Preview)
  4. API Apps can be auto-updated (not currently available in Preview)

The API App can be built with any technology a Web App can, meaning .NET, Node.js, PHP, Python, Java etc. When you build and deploy an API App, what happens behind the scenes is that we create a Web App for you and we put some special metadata around it to know that it is an API App. The API App also needs a special file in the root, called apiapp.json, which contains the API App related metadata like author, summary, id and many more that are not currently used in the preview but will light up in the future. You can find more information at this page. Towards the middle of the page there is a detailed description of all the properties you can add to the apiapp.json file.
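To make this concrete, a minimal apiapp.json could look like the following sketch. Only author, summary and id are named above; the other values here (the id itself, the version) are made-up illustrations, so check the page mentioned above for the authoritative list of properties.

```json
{
    "id": "ContosoContactsApi",
    "version": "1.0.0",
    "summary": "A sample API App manifest",
    "author": "Contoso"
}
```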


To be able to start building the apps, make sure you get the latest Azure SDK. Currently the Azure App Service SDK only works with Visual Studio 2013. Once you download and install the SDK, you have two options: migrate an existing Web API or create a new one.

How does it work?

The power of API Apps comes from how easy they are to consume and discover. Our long-term vision is to enable anything from regular REST APIs to even OData APIs to run on the platform and benefit from it. To be able to do that, we need a way to describe what those APIs are capable of and have an extensible model that expresses their capabilities. That happens by using Swagger 2.0. For the API App to light up properly you need to have a Swagger 2.0 metadata endpoint where our platform will reach out and discover the capabilities of your API, effectively finding the API Definition. The Swagger 2.0 metadata is only required for that reason. Your customers, you or whoever wants to consume the API don’t need any special tooling, so there is no coupling here. In case you use our Azure API App Client, you can then point our tooling to your API App and we will generate the code for you. The API App client generation works with any valid Swagger 2.0 metadata file, so even if the API App is not running, as long as you have the Swagger 2.0 metadata file, we can generate the code for you. Then the code is yours and you can extend it as much as you like, as it’s generated as partial classes.
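For reference, here is a minimal Swagger 2.0 document for a hypothetical API with a single GET operation. The title, route and operationId are invented for illustration; a document of this shape is what the platform reads from the metadata endpoint to discover the API definition.

```json
{
  "swagger": "2.0",
  "info": { "title": "Contoso Contacts", "version": "1.0" },
  "paths": {
    "/api/contacts": {
      "get": {
        "operationId": "Contacts_Get",
        "responses": { "200": { "description": "The list of contacts" } }
      }
    }
  }
}
```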

Azure App Service API App architecture overview

Once you create an API App you might want to deploy it to your subscription. When you are about to deploy the app, you’ll be asked a couple of things:

  1. App Service Plan: This describes what kind of capabilities the underlying infrastructure will have (e.g. pricing tier, how many VMs are going to host your app etc.)
  2. Resource Group: This is a separation unit for your resources. It helps when you want to separate the resources available to different API Apps or even isolate the discovery of API Apps altogether. Logic Apps within a resource group can only discover other API Apps within that resource group. It’s also very easy to manage the resources like that, as you can see all the dependencies and such.
  3. Access Level: Who is able to access your API? Options can be Public Anonymous, Internal and Public Authenticated.
  4. Region: The region you want your API App to be deployed.


After the deployment finishes, you will notice that within your resource group there is another App called “Gateway”. Conceptually, all the API Apps sit behind the gateway. The Gateway handles things like API management, authentication and more in the future. When you access an API App, there is communication with the gateway, which, in the case of an authenticated call, also flows the token to the API App being accessed. The Gateway also contains metadata about the API Apps like their definition, name, version of the package and more.

Implementing a microservice architecture

As you can imagine, you can create multiple, independent API Apps that communicate with each other, have their own persistence layer, share authentication and much more. You can actually implement a microservice architecture like that. There is no precise definition right now, but the idea is having lightweight services that are structured around business capabilities, have automated deployments, keep the intelligence in the endpoints and stay decentralized in languages and data (as described by Martin Fowler). The protocol of choice can be HTTP, but that’s not absolutely necessary. In our case though, that’s how API Apps work.


  • Am I tightly coupled to the service if I generate code for an API App? Is this another “Add Service Reference…” thing?
    • No, you’re not and no it’s not. The Swagger 2.0 metadata is used to describe the capabilities of the API. When you generate the code using the Azure API App Client, we’re not adding any reference or anything. We’re just generating the code you would write anyway to access the API. You don’t have to generate the code if you don’t want to. You can consume the API as any other API out there, by writing manual HttpClient code and JSON serialization.
  • How can I update my API App?
    • Before the galleries and the packaging is released, all you have to do is do a Publish (Deploy) as you would do for a Web App. That’s it. The experience will stay as easy once the galleries are out as well.
  • If my API is accessed rarely but I still need quick responses, how do I do that?
    • If you remember, the underlying container for the API App is a Web App. The benefit of using a common infrastructure is that you get a common set of capabilities. That means you can go to the Settings of your API App’s host and enable the Always On feature. Open the blade of the API App on the preview portal and look for the API App Host setting. Click it and that should get you to the API App Host. There, click Settings -> Application Settings, find the Always On option on the blade that opens and switch it to On. Click Save and you’re done.
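As an illustration of the “manual HttpClient code” route mentioned in the first answer, here is a rough Python sketch. The base URL and route are hypothetical; the point is that consuming an API App is just plain HTTP plus JSON deserialization, with no special reference added to your project.

```python
import json
from urllib.request import Request, urlopen

BASE_URL = "https://contoso-api.azurewebsites.net"  # hypothetical API App URL


def parse_contacts(payload: str):
    # Deserialize the JSON body - the part a generated client would do for you.
    return json.loads(payload)


def get_contacts():
    # Hand-written equivalent of the generated client code: build the
    # request, read the response body, deserialize the JSON.
    req = Request(BASE_URL + "/api/contacts",
                  headers={"Accept": "application/json"})
    with urlopen(req) as resp:
        return parse_contacts(resp.read().decode("utf-8"))
```

Generated clients save you this boilerplate, but nothing stops you from writing it yourself.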

What’s next?

This is a series of blog posts explaining API Apps and Logic Apps. My next blog post will be explaining how you can run a Node.js App as an API App.

In the meantime download the bits, start playing with the platform for free and stay tuned!

Feel free to reach out to me if you have any other questions.


My blog is not dead, I promise.

It’s not dead.

I promise.

I haven’t written a single thing for a looooong time. It was for a good reason. So what happened? Well:

  1. I joined Microsoft. Which means I had to drop my MVP status.
  2. I’m a Technical Evangelist, writing code, spreading knowledge.
  3. I moved to Vancouver, BC to be closer to my team.
  4. My beautiful wife followed me.

In between those steps, there were a lot of talks at TechEd’s, TechDays etc. I’ve been all around Europe and a bit of US.

I told you, my blog is not dead.

More is coming soon.


VMDepot is now integrated into the Windows Azure Portal

A nice change I noticed today is that VMDepot images are now visible in the Windows Azure Portal.

If you go to Virtual Machines


You’ll see an option that says “Browse VMDepot”


If you click it, you get the list of the images already in VM Depot:



You can select one and create a virtual machine based on that image, just like that! :)

The coolest of all is that you can create your own images, publish them to VM Depot and if they get accepted, they get visible in the portal as well.

Small addition but a lot of value comes out of it!



Windows Azure and VM Depot from MS Open Tech

VM Depot allows users to build, deploy and share their favorite Linux configuration, create custom open source stacks, work with others and build new architectures for the cloud that leverage the openness and flexibility of the Azure platform.

How it works

All images shared via this catalog are open for interaction where other users of VM Depot can leave comments, ratings, and even remix images to developers’ liking and share the results with other members of the community. Currently, on catalogs similar to VM Depot (such as Amazon Web Services), users must pay to publish all versions of their images. With VM Depot, publishing an image is free to encourage users to take full advantage of shared insights and experience, as well as encourage collaboration towards the best possible version of an image. The shared community environment also means users can access this catalog to use basic image frameworks created by others so they can quickly continue building without starting from scratch.

VM Depot was made possible with the support of a number of partners who have contributed images and packages for this preview launch including Alt Linux, Basho, Bitnami and Hupstream.


Q: Why has Microsoft Open Technologies Developed VM Depot?

A: VM Depot has been developed to help developers and IT pros take advantage of the open capabilities of the Windows Azure platform. VM Depot is a community-driven catalog of open source virtual machine images for Windows Azure. Using VM Depot, the community can build, deploy and share their favorite Linux configuration, create custom open source stacks, and work with others to build new architectures for the cloud that leverage the openness and flexibility of the Windows Azure platform.

Some links:

Port 25 blog

Interoperability@Microsoft blog

More info:

VM Depot delivers more options for users bringing their custom Linux images to Windows Azure Virtual Machines

VM Depot from MS Open Tech


Speaking at Windows AzureConf

What is Windows AzureConf

On November 14, 2012, Microsoft will be hosting Windows AzureConf, a free event for the Windows Azure community. This event will feature a keynote presentation by Scott Guthrie, along with numerous sessions executed by Windows Azure community members. Streamed live for an online audience on Channel 9, the event will allow you to see how developers just like you are using Windows Azure to develop applications on the best cloud platform in the industry. Community members from all over the world will join Scott in the Channel 9 studios to present their own inventions and experiences. Whether you’re just learning Windows Azure or you’ve already achieved success on the platform, you won’t want to miss this special event. (Source:

Why register?

First of all, it’s free! But that’s not the real value of it. The real value is the opportunity to watch and learn from real-world examples of how Windows Azure was used to build real-world applications. The presentations/sessions are going to be delivered by community members, the same people who worked on those applications.

My session

I will be at the Channel9 studios in Redmond as my session (Windows Azure + Twilio == A Happy Tale to Tell) was selected to be one of them and I would love to see you online and answer your questions.

Register now at

Windows Azure New Portal Improvements

I accessed the portal today and a nice surprise was waiting for me. Among other improvements that I either didn’t spot yet or that are not visible to us (backend changes), there were two very welcome changes:

1) Service bus can now be managed from the new portal!

Under the “App Services” category, you can now find the new options to create a new “Queue”, “Topic” or “Relay” service point.

In case you select “Custom create”, you can directly create the namespace from the wizard instead of first having to create the namespace and then add entities (services) to it.

You can also track information regarding your namespace’s Queues, Topics or Relays directly from the portal. That information could be e.g. size, transactions etc.

Last but not least, you can retrieve the connection string from the portal. No more mistakes or looking for it when needed.
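As an aside, the connection string you copy from the portal is just a set of key=value pairs separated by semicolons, so it’s easy to pick apart in code. The key names in the example below follow the ACS-era Service Bus format (Endpoint / SharedSecretIssuer / SharedSecretValue), so treat them as an assumption about your namespace:

```python
def parse_connection_string(cs: str) -> dict:
    """Split 'Key1=Value1;Key2=Value2' into a dict, keeping any '='
    characters that appear inside the values themselves."""
    pairs = (part for part in cs.split(";") if part)
    return dict(part.split("=", 1) for part in pairs)


# Hypothetical example:
# parse_connection_string(
#     "Endpoint=sb://myns.servicebus.windows.net/;"
#     "SharedSecretIssuer=owner;SharedSecretValue=abc123=")
```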

2) You can now manage the users (co-admin) of your subscriptions directly from the “Settings” menu and the new portal. It used to be that you had to switch to the old Silverlight portal to execute this task.


Very few services are now missing from the new portal, but I love how the team is making this better every single day. And they do listen to feedback, so make sure you send yours!



How to run SugarCRM on Windows Azure Web Sites using WebMatrix 2

Windows Azure Web Sites (WAWS) is a powerful hosting platform provided by Microsoft on Windows Azure. One of the coolest features is that you can run your Web Sites/Web Apps pretty easily, and it’s not limited to ASP.NET; it also supports PHP. Also, on the database level, it’s not only SQL databases (MSSQL) that are supported but MySQL as well.

Creating the Web Site

On my previous post on how to migrate your WordPress blog to WAWS I’ve demonstrated some options you have when you create a new Web Site

In our case we just want one that has a database which we’re going to use to migrate our SugarCRM installation. So select “Create with Database”.

In the second step we choose the name and we agree to the terms. Now, in the preview there is a limitation of 1 MySQL database per account, so if you already have a DB like I did, you’ll get a notification and you won’t be able to go forward. Just select an existing MySQL database on the previous step, instead of “Create a new MySQL Database”.

Once this is finished, WAWS will provision your web site and in the next 5-10 seconds you’ll have your web site ready to publish your files to it.

Using WebMatrix 2 to create the SugarCRM installation

If you’re familiar with WebMatrix 2 then you’ll know that creating/installing and running a number of Web Apps is a matter of following a wizard and choosing the correct options. WebMatrix takes care of the rest, like creating the hosting locally, downloading dependencies, installing and configuring them and even publishing/migrating your data and files when you upload them to your hoster (with WebDeploy). In my case, I’ve created a new SugarCRM installation from the Application Gallery:

Finish the wizard and you’ll end up with a brand new and working installation of SugarCRM. If you don’t have MySQL installed, it will install and configure it for you. Also, make sure you download MySQL Workbench because we’ll need it later to connect and inspect the database, but also to execute the migration of the local MySQL database to the one created in the cloud.

Uploading the website using WebDeploy

Once you finish the local installation of your SugarCRM and you get the login screen, then you’re ready to publish your web site to WAWS. There are two ways to do that, either you can use an FTP client like FileZilla and use the credentials from the portal:

We use WebMatrix, so there is an even easier way to publish them, by using WebDeploy. You can download your publish settings and import them into your WebMatrix installation. To download your settings, go to the Portal, click on your website and then choose “Download Publish Profile”:

Once you do this, you can import your profile to WebMatrix and use it to Publish your website:

Now, you can go ahead and publish your website to WAWS. If everything goes ok, WebMatrix should update your config.php with the correct connection settings imported from your publish profile. In case this doesn’t happen, you have to update your config.php file from SugarCRM with the correct connection settings to MySQL.
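For reference, the relevant part of SugarCRM’s config.php is the dbconfig array. The values below are placeholders only (the real MySQL host name, user and database name for your WAWS site come from your publish profile or the portal), and the exact key set may vary between SugarCRM versions:

```php
<?php
$sugar_config['dbconfig'] = array(
    'db_host_name' => '', // placeholder host
    'db_user_name' => 'mysqluser',                         // placeholder user
    'db_password'  => 'mysqlpassword',                     // placeholder password
    'db_name'      => 'wawsdbname',                        // WAWS database name
    'db_type'      => 'mysql',
);
```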

Uploading/migrating your database

WebMatrix will prompt you to upload your database along with the publishing of the files, but this is not going to work, as the local database name doesn’t match the database name of the one created by WAWS and you don’t have permission to create a new one anyway. I was being lazy and, as I didn’t want to install anything more than MySQL Workbench on my machine, I exported the data from my local MySQL database, then used Notepad++ to replace the database name in all of the exported files and then used MySQL Workbench again to import them to my WAWS MySQL database. Below, step by step:


Then I renamed the schema by using Notepad++. What I did is a “Find/Replace” of a string on the dump file to replace the database name. Then the last step was to import that into the WAWS MySQL Database:

It still says “sugarcrm143” which is my local database name, but the file uses the WAWS MySQL database name, so you can safely ignore that.

Again, as I said, I was being lazy. What you could do instead is export the data using the mysql command-line tools and then import it directly into the database you want.
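If you’d rather script the Find/Replace step than do it in Notepad++, a few lines of Python do the same job. The file names in the commented usage are hypothetical; “sugarcrm143” is my local database name from above:

```python
import re


def rename_schema(dump_sql: str, old_name: str, new_name: str) -> str:
    """Replace the schema name in a mysqldump file, matching it only as a
    whole word so longer identifiers that merely contain the name are not
    touched. Same idea as the Find/Replace in Notepad++."""
    return re.sub(r"\b%s\b" % re.escape(old_name), new_name, dump_sql)


# Hypothetical usage: rewrite the local dump before importing it into WAWS.
# with open("sugarcrm143_dump.sql") as f:
#     sql = f.read()
# with open("waws_dump.sql", "w") as f:
#     f.write(rename_schema(sql, "sugarcrm143", "wawsdbname"))
```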

You’re done!

Well, that was pretty much it. You uploaded your SugarCRM installation, you uploaded your database data (I used demo data in mine) and your website is now hosted in WAWS with all the benefits that it has, like scaling up or down on the shared environment or even running it on dedicated instances. All with just a slider on the portal! You can access my installation at It might be that the first time you try to access it, it will take a bit longer (8-10 seconds) if the web site is sleeping, but then the responses are sub-second.

PS. I’ve decided to take the Web Site off as it has been running for a month now and it was only for the purpose of the blog. Here you can find a screenshot of the website running.

Comments, suggestions or corrections are more than welcome!

Thank you,





How to migrate and run your WordPress blog on Windows Azure WebSites

I was testing Windows Azure Web Sites (codename Antares, WAWS for short) for about 3 weeks before it was announced two days ago at MeetWindowsAzure. Since the beginning I wanted to migrate my blog from to Windows Azure Web Sites so I can have more control (on the blog) but enough abstraction (hosting etc.). Sounds cool, right? Well, WAWS does exactly that. Follow the process below to create your own web site to host the WordPress engine and also assign and use a custom domain on the environment. I’ll also include some description of the migration steps I did from to WAWS, like export/import of blog posts and installing some plugins to achieve similar functionality with

Creating the Web Site

Creating your Web Site is really easy and fast on Windows Azure Web Sites.

As I want to use a WordPress engine powered blog, I can choose “From Gallery” and get a list of ready-to-use Web Sites

All you have to do after that is give it a name and let Windows Azure Web Sites do the rest for you, like create a MySQL database, install WordPress and configure it for you. The whole process takes 10-12 seconds. After the provisioning of the Web Site, you can browse to it. In our case you get the setup page of WordPress asking for an admin username and a password. Fill in that information and off you go, your blog is ready.

Migrating existing content

WordPress is really helpful in this case, as it has a special plugin just for that reason. By going to Tools -> Export on your existing WordPress blog, like mine hosted on, you can export your content to an XML file that the Import engine of WordPress understands. Once you log on to your Windows Azure Web Sites admin panel, you can navigate to Tools -> Import and then select the file you exported from your previous blog. You can then proceed with the import of all your content to your new blog. Mine took about 2-3 minutes to import everything.

Missing plugins/Similar functionality to

If your blog was hosted on, there is some functionality that doesn’t exist on the installation we just did on Windows Azure Web Sites because the plugins are not included. Site stats, for example, or a couple of widgets are not there. To include all that functionality, WordPress has released a plugin called “JetPack” which brings it to your self-hosted WordPress site. All you need is an existing account to link your blog with the WordPress Cloud, as they call it.

Creating the DNS records

You need to create the necessary DNS records for your site in order to point to the WAWS site. WAWS assigns you an address of the <your_name> format. For example this blog is running at but it’s accessible from I’m hosting my DNS records with GoDaddy so they look like below.

My www is actually an alias (CNAME) to the WAWS address.

Making Windows Azure Web Sites understand your custom domain

After you create the CNAME record and your site now points to the WAWS address, you need to make WAWS aware of your custom domain, otherwise sites running e.g. on WordPress won’t be accessible on the custom domain. To achieve this, log on to the Windows Azure Portal and go to your Web Site. After you do that, click on the “Scale” tab and switch from Shared to Reserved mode.

That way you enable the option to use a custom domain header on your Web Site. The next step is to click on the “Configure” tab and insert your custom hostname.

Final Steps

One last step is left: making your WordPress site listen and respond to the custom domain. In order to do that, log on to your administration panel, click “Settings” and then “General”. There you have two options, the Site URL and the WordPress URL. Make them both listen on your custom domain, just like I did.

Now your Web site is accessible from your custom domain :)


The whole migration took me about an hour, and most of it was me setting up the things/plugins for my WordPress rather than WAWS preparing the deployment. The experience is really fast and responsive, I love the new UI on the portal and I hope you do too.

Any comments/remarks/questions are, as always, welcome!






Running JBoss 7 on Windows Azure — Part II

Continuing from where I left off in my previous post, I’m going to explain how the Announcement service works and why we chose that approach.

The way JBoss and mod_proxy work now is that every time something changes in the topology, either a new proxy or a JBoss node is added or removed, the proxy list has to be updated and both the node and the proxy have to be aware of each other’s existence.

Mod_proxy uses multicast to announce itself to the cluster, but as this is not supported on Windows Azure, we created our own service that runs on both the proxy and the node. Each time a proxy or a node is added/removed, the service notifies the rest of the instances that something changed in the topology and that they should update their lists with the new record.

The service is not running under a dedicated WorkerRole; it’s part of the same deployment as the proxy and the JBoss node. It’s a WCF service hosted inside a Windows NT Service, listening on a dedicated port. That approach gives us greater flexibility, as we keep a clear separation of concerns between the services in the deployment and we don’t mix code and logic that has to do with the proxy with the Announcement service. Originally, the approach of using an NT Service caused some concerns about how the service would be installed on the machines and how we could keep a single code base for that service, running in both scenarios.

First of all, you should be aware that any port you open through your configuration is only available to the host process of the Role. That means if the port is not explicitly opened again on the firewall, your service won’t be able to communicate, as the port is blocked. After we realized that, we fixed it by adding an extra line to our Startup Task which installs the service on the machines. The command looks like this:

which is part of the installer startup task
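The original command isn’t shown here, but a Windows Firewall rule along these lines would do the job; the rule name and port are hypothetical, so use whatever port your service actually listens on:

```
netsh advfirewall firewall add rule name="AnnouncementService" dir=in action=allow protocol=TCP localport=8081
```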

To make our service even more robust and secure, we introduced a couple of NetworkRules that only allow communication between the proxies and the JBoss nodes:

Any kind of communication between the services is secured by certificate-based authentication and message-level encryption. The service is a vital component in our approach and we want it to be as secure as possible.

The service monitors a couple of things that help us also collect telemetry data from the JBoss nodes, but it’s also wired to a couple of RoleEnvironment events like OnStopping and OnChanged. Every time there is an OnStopping, we send messages out to all of the other service instances to de-register that proxy from their lists because it’s going down. Also, the service itself checks at specific intervals whether the other nodes are alive. If they don’t respond after 3 attempts, they are removed. The reason we do this is to handle possible crashes of the proxy as fast as possible. Lastly, every time an OnChanged event is fired, we verify that everything is as we know it should be (nodes available etc.).
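The lifecycle handling described above can be sketched roughly like this. This is illustrative Python, not the actual WCF service; only the de-register-on-stop behaviour and the three-misses removal threshold come from the paragraph above:

```python
class TopologyRegistry:
    """Tracks the known proxies/nodes for one service instance. A peer is
    dropped after 3 consecutive missed heartbeats, mirroring the crash
    handling described above."""

    MAX_MISSES = 3

    def __init__(self):
        self._misses = {}  # endpoint -> consecutive failed checks

    def register(self, endpoint):
        # A new proxy/node announced itself to the topology.
        self._misses[endpoint] = 0

    def deregister(self, endpoint):
        # OnStopping: the peer told us it is going down cleanly.
        self._misses.pop(endpoint, None)

    def heartbeat(self, endpoint, alive):
        # Periodic liveness check; three strikes and the peer is removed.
        if endpoint not in self._misses:
            return
        if alive:
            self._misses[endpoint] = 0
        else:
            self._misses[endpoint] += 1
            if self._misses[endpoint] >= self.MAX_MISSES:
                del self._misses[endpoint]  # crashed peer, drop it

    @property
    def endpoints(self):
        return sorted(self._misses)
```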

Next post in the series, the cluster setup.