Azure Success Inhibitors

I was recently asked to provide MSFT with a list of our top 5 “Azure Success Inhibitors”. After talking with my colleagues and compiling a list, I of course sent it in. It will get discussed, I’m sure, but I figured why not toss the list out for folks to see publicly and, heaven forbid, use the comments area of this blog to provide some feedback on. Just keep in mind that this is really just a “top 5” and is by no means an exhaustive list.

I’d like to publicly thank Rajesh, Samidip, Leigh, and Brian for contributing to the list below.

Startups & Commercial ISVs

Pricing – Azure is competitively priced only on regular Windows OS images. If we move to “high CPU”, “high memory”, or Linux-based images, Amazon is more competitive. The challenge is getting them to focus not just on hosting costs; they would also like to see more info on plans for non-Windows OS hosting.

Perception/Culture – Microsoft is still viewed as “the man” and as such, many start-ups still subscribe to the open source gospel of avoiding the established corporate approaches whenever possible.

Cost Predictability – more controls to help protect against cost overruns, as well as easier-to-find/calculate fixed pricing options.

Transition/Confusion – they don’t understand the PaaS model well enough to feel comfortable making a commitment. They prefer to keep doing things the way they always have. Concerns over pricing, feature needs, industry pressure, etc… In some cases, it’s about not wanting to walk away from existing infrastructure investments.


Trust – SLAs aren’t good enough. The continued outages, while minor, still create questions. This also impacts security concerns (SQL Azure encryption, please), which are greatly exaggerated the moment you start asking for forensic evidence in case you need to audit a breach. In some cases, it’s just “I don’t trust MSFT to run my applications”. This is most visible when looking at regulatory/industry compliance (HIPAA, PCI, SOX, etc…).

Feature Parity – The differences in offerings (Server vs. Azure AppFabric, SQL Azure vs. SQL Server) create confusion. They mean a loss of control as well as reeducation of IT staff. Deployment model differences create challenges for SCM, and monitoring of cloud solutions creates nightmares for established DevOps organizations.

Release Cadence – We discussed this on the DL earlier. I get many clients that want to be able to test things before they get deployed to their environments, and also to control when they get “upgraded”. This relates directly to the #1 trust issue in that they just don’t trust that things will always work when upgrades happen. As complexity in services and solutions grows, they see this only getting harder to guarantee.

Persistent VMs – SharePoint, a dedicated SQL Server box, Active Directory, etc…. Solutions they work with now that they would like to port. But since they can’t run them all in Azure currently, they’re stuck doing hybrid models, which drags down the value add of the Windows Azure platform by complicating development efforts.


Value Added Services – additional SaaS offerings for email, virus scanning, etc… Don’t force them to build these themselves or locate additional third-party providers. Some of this could be met by a more viable app/service marketplace, as long as it provides integrated billing with some assurances of provider/service quality from MSFT.


Windows Azure In-place Upgrades (Year of Azure – Week 16)

On Wednesday, Windows Azure unveiled yet another substantial service improvement, enhancements to in-place upgrades. Before I dive into these enhancements and why they’re important, I want to talk first about where we came from.

PS – I say “in-place upgrade” because the button on the Windows Azure portal is labeled “upgrade”. But the latest blog post calls this an “update”. As far as I’m concerned, these are synonymous.

Inside Windows Azure

If you haven’t already, I encourage you to set aside an hour, turn off your phone, email, and yes, even Twitter, so you can watch Mark Russinovich’s “Inside Windows Azure” presentation. Mark does an excellent job of explaining that within the Windows Azure datacenter, we have multiple clusters. When you select an affinity group, this tells the Azure Fabric Controller to try to put all resources aligned with that affinity group into the same cluster. Within a cluster, you have multiple server racks, each with multiple servers, each in turn with multiple cores.

Now these resources are divided up essentially into slots, with each slot being the space necessary for a small-size Windows Azure instance (one 1.6 GHz core and 1.75 GB of RAM). When you deploy your service, the Azure Fabric will allocate these slots (1 for a small, 2 for a medium, etc…) and provision a guest virtual machine that allocates those resources. It also sets up the VHD that will be mounted into that VM for any local storage you’ve requested, and configures the firewall and load balancers for any endpoints you’ve defined.

These parameters, the instance size, endpoints, local storage… are what I’ve taken to calling the Windows Azure service signature.
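All of those signature parameters live in the service definition file. As a rough sketch (the names here are hypothetical, but the elements and attributes follow the ServiceDefinition.csdef schema), a definition declaring a size, an endpoint, and local storage might look like:

```xml
<ServiceDefinition name="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="MyWebRole" vmsize="Small">
    <!-- Endpoints get the firewall and load balancer configured -->
    <Endpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </Endpoints>
    <!-- Local storage gets a VHD mounted into the guest VM -->
    <LocalResources>
      <LocalStorage name="scratch" sizeInMB="1024" cleanOnRoleRecycle="true" />
    </LocalResources>
  </WebRole>
</ServiceDefinition>
```

Change any of these and you’ve changed the signature.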

Now if this signature wasn’t changing, you had the option of deploying new bits to your cloud service using the “upgrade” option. This allowed you to take advantage of the upgrade domains to do a rolling update and deploy functional changes to your service. The advantage of the in-place upgrade was that you didn’t “restart the clock” on your hosting costs (the hourly billing for Azure works like cell phone minutes), and it was also faster since the provisioning of resources was a bit more streamlined. I’ve seen a single developer deploying a simple service eat through a couple hundred compute hours in a day just by deleting and redeploying. So this was an important feature to take advantage of whenever possible.

If we needed to change this service signature, we were forced to either stop/delete/redeploy our services, or deploy to another slot (staging or a separate service) and perform either a VIP or DNS swap. In the case of a change in size, this was because you might have to move the instance to a new set of “slots” to get the resources you wanted. For the firewall/load balancer changes, I’m not quite sure what the limitation was. But this was life as we’ve known it in Azure for the last (dang, has it really been this long?)… 2+ years now. With this update, many of these limitations have been removed.

What’s new?

With the new enhancements, we can basically forget about the service signature. The gloves are officially off! We will need the 1.5 SDK to take advantage of changes to size, local storage, or endpoints, but that’s a small price to pay. Especially since the management API already supports these changes.

The downside is that the Visual Studio tools do not currently take advantage of this feature. However, with Scott “the Gu” Guthrie at the helm of the Azure tools, I expect this won’t be the case for long.

I’d dive more into exactly how to use this new feature, but honestly the team blog has done a great job and I can’t see myself wanting to add anything (aside from the backstory I already have). So that’s all for this week.

Until next time!

My Presentations Posted (Year of Azure – Week 15)

Ok, barely getting this one in. It could be because it has been a week full of multiple priorities, or simply distractions (I did finally pick up Gears of War 3). Regardless, I have two separate posts I’m working on, but neither is ready yet. So instead of code, I finally made the time to update the resource page with some additional PowerPoints and other media.

I have a few more, but I’ve left out ones that were created specifically for clients. So enjoy!

Meanwhile, one quick tip to make up for the lack of a “real” update. Under isolated circumstances, RoleEnvironment.IsAvailable may not return the proper result when running in the 1.4 SDK’s development fabric. I’ve seen it happen where it won’t return the proper result if you call it anywhere but inside the RoleEntryPoint-based class. However, upgrading to the 1.5 SDK can quickly fix this.

Seattle Interactive Conference–Cloud Track

In learning about Windows Azure and the cloud, I’ve racked up my fair share of debts to folks that have helped me out, and I always try to make good on my debts. So when Wade Wegner asked me to help raise awareness of the Cloud Experience track at the upcoming Seattle Interactive Conference, I jumped at the chance.

Some of the top Windows Azure experts from Microsoft will be presenting on November 2nd for this conference. People like Wade, Nick, Nathan, and Steve, whom I’ve come to know and, with the exception of Steve, respect highly (hi Steve! *grin*). You’ll also get Rob Tiffany and Scott Densmore. With this lineup, I can guarantee you’ll walk away with your head so crammed full of knowledge you’ll have a hard time remembering where you parked your car.

Now registration for this event is usually $350, but if you use promo code “azure 200”, you’ll get in for only $150! So stop waiting and register!

Configuration in Azure (Year of Azure–Week 14)

Another late post, and one that isn’t nearly what I wanted it to be. I’m about a quarter of the way through this year of weekly updates and frankly, I’m not certain I’ll be able to complete it. Things continue to get busy, with more and more distractions lined up. Anyways…

So my “spare time” this week has been spent looking into configuration options.

How do I know where to load a configuration setting from?

So you’ve sat through some Windows Azure training, and they explained that you have the service configuration, that you should use it instead of the web.config, and they covered using RoleEnvironment.GetConfigurationSettingValue. But how do you know which location to load a setting from? This is where RoleEnvironment.IsAvailable comes into play.

Using this value, we can write code that will pull from the proper source depending on the environment our application is running in. Like the snippet below:

if (RoleEnvironment.IsAvailable)
    return RoleEnvironment.GetConfigurationSettingValue("mySetting");
return ConfigurationManager.AppSettings["mySetting"];

Take this a step further and you can put this logic into a property so that all your code can just reference the property. Simple!
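As a minimal sketch of that idea (the class, property, and setting names here are mine, not from any SDK sample), such a property might look like:

```csharp
using System.Configuration;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class Config
{
    // Pulls from the service configuration when running under the
    // Azure fabric; falls back to app/web.config appSettings otherwise.
    public static string MySetting
    {
        get
        {
            if (RoleEnvironment.IsAvailable)
                return RoleEnvironment.GetConfigurationSettingValue("mySetting");
            return ConfigurationManager.AppSettings["mySetting"];
        }
    }
}
```

Callers then just reference Config.MySetting and never care where the value came from.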

But what about CloudStorageAccount?

Ok, but CloudStorageAccount has methods that automatically load from the service configuration. If I’ve written code to take advantage of this, I’m stuck. Right? Well, not necessarily. Now you may have seen a code snippet like this before:

    CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)));

This is the snippet you need to call to avoid the “SetConfigurationSettingPublisher needs to be called before FromConfigurationSetting can be used.” error message. What is really going on here is that we are setting a handler for retrieving configuration settings. In this case, RoleEnvironment.GetConfigurationSettingValue.

But as is illustrated by a GREAT post from Windows Azure MVP Steven Nagy, you can set your own handler, and in this handler you can roll your own provider that looks something like this:

public static Action<string, Func<string, bool>> GetConfigurationSettingPublisher()
{
    if (RoleEnvironment.IsAvailable)
        return (configName, configSetter) =>
            configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
    return (configName, configSetter) =>
        configSetter(ConfigurationManager.AppSettings[configName]);
}

Flexibility is good!

Where to next?

Keep in mind that these two examples both focus on pulling from configuration files already available to us. There’s nothing stopping us from creating methods that pull from other sources, or that take a single string configuration setting that is an XML document and hydrate it. We can pull settings from another source, be it persistent storage or perhaps even another service. The options are up to us.
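As one hedged example of the “XML document in a single setting” idea (the element shape and helper name here are hypothetical, not from any SDK), you could hydrate a key/value bag with nothing but the BCL:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

public static class XmlSettings
{
    // Turns a single string setting like
    //   <settings><add key="a" value="1"/><add key="b" value="2"/></settings>
    // into a dictionary of key/value pairs.
    public static Dictionary<string, string> Hydrate(string xml)
    {
        return XDocument.Parse(xml)
            .Descendants("add")
            .ToDictionary(
                e => (string)e.Attribute("key"),
                e => (string)e.Attribute("value"));
    }
}
```

Store the whole XML blob as one service configuration setting, and you can change a batch of related values with a single in-place configuration update.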

Next week, I hope (time available of course) to put together a small demo of how to work with encrypted settings. So until then!

PS – yes, I was renewed as an Azure MVP for another year! #geekgasm