Azure Service Configuration Updated (or “Where did RoleManager go?”)

In May of 2009, I wrote a blog posting that covered the basics of Azure Service Configuration options. Looking through my blog statistics recently, I noticed that this posting is seeing a significant amount of traffic. This is tragic because the November release of the Windows Azure platform broke much of what I discussed in that posting. So I figured it was time to do right by the topic and make an update. 🙂

So here we are!

First the good news

So the good news is that some things haven’t changed. Our Azure hosted roles still have their own individual configuration files. These files still get deployed to the Azure cloud in the cspkg (cloud service package). We also still have the ServiceConfiguration and ServiceDefinition files and like before, they behave exactly as you would expect.

The best news, IMHO, is that we finally have published schemas for the cloud service definition and configuration files. Woot! The MSDN pages will do a better job than I could of explaining the details of these schemas, so I just want to call out a couple of important changes.

In the service definition file, we can now specify the size of our VM instance using the new vmsize attribute of the WebRole node. The size controls whether you get a Small (aka single core, 1.6 GHz CPU and 1.75 GB of RAM) instance or larger. Each size above Small simply doubles the one before it (1, 2, 4, and 8 cores).
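As a rough sketch, setting the size looks like this in the .csdef file (the service and role names here are placeholders, not anything from a real project):

```xml
<ServiceDefinition name="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <!-- vmsize picks the instance size; Small is the default -->
  <WebRole name="WebRole1" vmsize="Medium">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
  </WebRole>
</ServiceDefinition>
```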

Full details on adjusting VM size can be found at:

Next up is that we can control the version of the Azure VM's operating system we want to run. This is important because folks (like myself) had concerns about automatic upgrades to the VMs breaking code, and about not being able to test our applications before making the transition. With the new osVersion attribute of the ServiceConfiguration node, we can now control this. Imagine, if you will, an already deployed service that is running smoothly on version 1.0 of the Azure OS. Along comes 1.1 and we want to test it. So we deploy our service into staging with the osVersion set to 1.1. We can then run our tests and ensure things are still running smoothly. Once we have verified the service behavior, we can swap the staging and production instances. We're golden.
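In the .cscfg file, that pinning looks roughly like the following (the service name and the exact guest OS version string are illustrative; check the portal or MSDN for the versions actually available to you):

```xml
<ServiceConfiguration serviceName="MyService"
    osVersion="WA-GUEST-OS-1.1_201001-01"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="myValue" value="Hello from staging" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```

Omitting the osVersion attribute (or setting it to “*”) leaves you on the automatic-upgrade behavior.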

Accessing our Service Configuration Settings

In that post 10 months ago, I talked about using the GetConfigurationSetting method of the RoleManager class to retrieve custom settings from the service configuration file. It was a simple enough approach. Problem is, the entire Microsoft.ServiceHosting.ServiceRuntime namespace was deprecated in the November release, thus removing the four classes contained therein, including RoleManager. The functionality previously contained in that namespace has now been split and expanded into Microsoft.WindowsAzure.ServiceRuntime and Microsoft.WindowsAzure.Diagnostics.

So in the post-November-2009 Azure world, we get our configuration settings like this:

string tmpMySetting = RoleEnvironment.GetConfigurationSettingValue("myValue");

Still simple enough, and it just scratches the surface of what we can do with the classes of the ServiceRuntime namespace.
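One pattern I find handy is wrapping that call so the same code also runs outside the fabric (say, in a unit test). A minimal sketch — the ConfigHelper name and the app.config fallback are my own conveniences, not anything prescribed by the SDK:

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;

public static class ConfigHelper
{
    // Reads a setting from the service configuration when running under
    // the Azure fabric; otherwise falls back to the local app.config.
    public static string GetSetting(string name)
    {
        if (RoleEnvironment.IsAvailable)
        {
            return RoleEnvironment.GetConfigurationSettingValue(name);
        }
        return System.Configuration.ConfigurationManager.AppSettings[name];
    }
}

// Usage: string tmpMySetting = ConfigHelper.GetSetting("myValue");
```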

But… but… what about… ???

Admittedly, this is pretty short. There's a significant amount of functionality that I'm only brushing past with this update. But heck, the Diagnostics namespace alone contains enough functionality to write entire chapters about, and the ServiceRuntime namespace gives you a slew of new ways to find out more about an individual service. I'm already in the process of exploring both of these in more detail, so look for additional articles once I have some solid information to share. I've got a project starting just next week in which I'm porting some traditional WCF services to Azure. As part of it, I'll be spending a non-trivial amount of time making sure the Azure instances log enough information to allow monitoring of the migrated services, and even integrating them with on-premise monitoring solutions.

So until next time!


Introduction to SQL Azure

Yet another simple attempt to document my continuing adventures with the Windows Azure platform. In today's edition, our hero attempts to migrate a simple on-premise database to SQL Azure and access it from an on-premise application.

Create our SQL Azure Server/Database

I've covered getting Windows Azure platform benefits in another post, so we'll presume you have already either purchased a subscription for SQL Azure or claimed any benefits you're entitled to. My MSDN Premium subscription benefits include three 1 GB instances of SQL Azure. Today I'm going to set up my SQL Azure server and create one of those instances.

I start by pointing my favorite browser at the SQL Azure Developer Portal and signing in with the Live-ID that is associated with my subscription. Once signed in, you should be at the portal's landing page (as seen below), with a list of the “projects” (aka subscriptions) associated with that Live-ID.


You'll click on a project and, if prompted, accept the terms of service. Since this is my first time logging into my SQL Azure subscription, I need to start by creating an SA (system administrator) account and specifying a region for my database. The region selection here is important, for reasons I'll get to later in this article.


So fill in the form, select a location (aka datacenter), and click on “Create Server”. After a few seconds, our server has been provisioned and a master database has already been created within it. The only thing lacking here is that I really would have liked to have been able to provide a unique name for my server. Oh well. 🙂


At this point we need to go to the “Firewall Settings” tab and adjust the security settings for our database. If we don't, we won't be able to connect to it. These settings apply to all database instances within our SQL Azure server. For now, I'm going to allow any IP to connect. This is not something I would generally recommend; a better practice would be to add rules that only allow connections from our application, from our network's subnet, or perhaps only from Microsoft-hosted services. But I'm feeling a touch lazy today.

Once we’ve given our firewall settings a few minutes to be applied, select the “master” database instance and test connectivity. Once verified, we’re ready to get connected.

Connecting to SQL Azure

There are already many blog posts available that explain how to get SQL Server 2008 Management Studio to connect to SQL Azure. However, if you grab the SQL Server Management Studio 2008 R2 CTP, you'll find this less problematic. Using that version, it's as simple as putting your server name (available from the portal, in the form <somevalue>) into the login box along with your administrator username and password.

If you can’t use the R2 CTP, you can find alternatives to get SQL Server 2008 Management Studio working in various blog articles.

Once that connection is established, I can run a couple of exported SQL scripts I already have to create my database. I generated these scripts from a local SQL Server database. In my case, the scripts required only two minor changes before they ran successfully.
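I didn't record exactly which two edits my scripts needed, but to illustrate the kind of changes typically required: SQL Azure (v1) rejects filegroup references like ON [PRIMARY] that the script generator emits, and it requires a clustered index on every table. The table below is purely a made-up example:

```sql
-- Generated for on-premise SQL Server (SQL Azure rejects this):
--   CREATE TABLE dbo.Customers (...) ON [PRIMARY]

-- Adjusted for SQL Azure: drop the filegroup clause and make sure
-- the table has a clustered index (the primary key here provides it).
CREATE TABLE dbo.Customers
(
    CustomerId INT NOT NULL,
    Name NVARCHAR(100) NOT NULL,
    CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (CustomerId)
);
```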

Connecting from the Visual Studio 2010 RC's Server Explorer is just as simple. The same goes for having your applications connect: you just use a connection string and you're in. One thing all these methods have in common is that you need port 1433 open for outbound traffic, and the IP address you are connecting from has to be allowed by the SQL Azure firewall settings.
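For reference, an application's connection string ends up looking something like this (“myserver”, the database name, and the credentials are placeholders; note that SQL Azure expects the login in user@server form):

```xml
<connectionStrings>
  <add name="SqlAzure"
       connectionString="Server=tcp:myserver.database.windows.net;Database=MyDb;User ID=myadmin@myserver;Password=...;Encrypt=True;"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```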

I started by creating a new windows forms project and adding a data source to it. I could have set this using a connection string I generated from the portal or via any preconfigured data connections. I then added that data source to my form and launched the project. Simple as pie! Without writing a single line of code I could connect to and update my SQL Azure database from an on-premise application.

Why was that region selection important?

I said I’d get back to this and I have.

The reason the region selection is so important is that you don't get charged for bandwidth within a datacenter (region). Additionally, by locating your database within the same datacenter as your application, you reduce connection latency. Yes, we could host the DB in Asia and run the app in the US, but that's only going to slow down performance AND cost us more, so there's no reason to do it. This is also why you may want to run a local copy of the database when doing development. No sense paying bandwidth costs for accessing a hosted server when a local copy of SQL Express will do the job nicely.

But wait, what about the infamous “smoking hole” disaster recovery scenario? Does picking a region affect my disaster recovery plan? The short answer is “no”. People with greater insight into how SQL Azure is built tell us that three copies of your database are automatically created and maintained, and that at least one of them should be geographically diversified. If your database crashes, or the datacenter itself is destroyed by a giant radiation-mutated lizard, a backup copy will be activated and the logs applied.

In fact, you may not even notice if a simple database crash takes place. We’ll leave the question of “but don’t I need to know” for another day.

Some closing thoughts

In going through this exercise, a couple of things became apparent to me. First off, there are going to be some who will dismiss SQL Azure as being too “dumbed down”. I'm admittedly not a DB guy. I know enough to stay away from bad practices, create the stored procedures/functions I need to do my job, and tell the difference between a physical and a logical database schema. Hell, I've even been known to help debug a query performance issue on rare occasion. However, I am by no means someone who enjoys spending their time optimizing databases and monitoring their performance. As such, SQL Azure does a good job of meeting my basic needs. In its current state, it may not be an enterprise-level solution, but I think it makes a pretty decent operational data store that would serve a simple application pretty well.

For those who are really into RDBMS systems and love tweaking and tuning them like a hot-rod prepping for the quarter mile, SQL Azure is likely to disappoint. However, I'm confident that the SQL Azure team is committed to this product. We've already seen the 1.1 version released with changes based directly on feedback they've received from the community. I also believe that the current size limits are based more on ensuring a stable service offering than on any real limitation. It wouldn't surprise me if we see SSRS and 25+ GB instances available before the end of the year, and I don't think it's too far-fetched to predict that 1+ TB instances will happen eventually.

I'm also getting more of an idea of what MSFT may be thinking when it comes to online subscriptions. This is admittedly a fairly new area for them (compared to license-based software distribution) and as such is subject to change, but it strikes me that they are trying to encourage folks to tie a given application to a specific subscription rather than have multiple projects within a single subscriber account. It could go either way, but for enterprises interested in charging costs back, having a subscription per “project” makes sense for simplicity.

I guess only the future will really tell the tale.