Enhanced Visual Studio Publishing (Year of Azure–Week 19)

With the latest 1.6 SDK (ok, now it's actually called Azure Authoring Tools), Scott Guthrie's promise of a better developer publishing experience has landed. Building upon the multiple cloud configuration options that were delivered back in September with the Visual Studio tools update, we now have an even richer experience.

Now the first thing you’ll notice is that the publish dialog has changed. The first time you run it, you’ll need to sign in and get things set up.


Clicking the "Sign in to download credentials" link will send you to the windows.azure.com website, where a publish-settings file will be generated for you to download. Following the instructions, you'll download the file, then import it into the publishing window shown above. Then you can choose a subscription from the populated drop-down and proceed.

A wee bit of warning on this though. If you have access to multiple subscriptions (your own or ones you co-administer), the creation of a publish-settings file will install the new certificate in each subscription. Additionally, each time you click "Sign in to download credentials", you will end up with another certificate. These aren't things to be horrified about; I just wanted to make sure I gave you a heads up.

Publish Settings

Next up is the publication settings. Here we can select a service to deploy to or create a new one (YEAH!). You can also easily set the environment (production or staging), the build configuration, and the service configuration file to be used. Setting up remote desktop is also as easy as a checkbox.


In the end, these settings get captured into a 'profile' that is saved and can then be reused. Upon completion, the cloud service will get a new folder, "Profiles". In this folder you will find an XML file with the extension .azurePubxml that contains the publication settings.

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <AzureCredentials>my subscription</AzureCredentials>
    <AzureHostedServiceName>service name</AzureHostedServiceName>
    <AzureHostedServiceLabel>service label</AzureHostedServiceLabel>
    <AzureSlot>Production</AzureSlot>
    <AzureEnableIntelliTrace>False</AzureEnableIntelliTrace>
    <AzureEnableProfiling>False</AzureEnableProfiling>
    <AzureEnableWebDeploy>False</AzureEnableWebDeploy>
    <AzureStorageAccountName>bmspublic</AzureStorageAccountName>
    <AzureStorageAccountLabel>bmspublic</AzureStorageAccountLabel>
    <AzureDeploymentLabel>newsdk</AzureDeploymentLabel>
    <AzureSolutionConfiguration>Release</AzureSolutionConfiguration>
    <AzureServiceConfiguration>Cloud</AzureServiceConfiguration>
    <AzureAppendTimestampToDeploymentLabel>True</AzureAppendTimestampToDeploymentLabel>
    <AzureAllowUpgrade>True</AzureAllowUpgrade>
    <AzureEnableRemoteDesktop>False</AzureEnableRemoteDesktop>
  </PropertyGroup>
</Project>

This file contains a reference to a storage account, and when I looked at that account I noticed a new container in there called "vsdeploy". The container was empty, but I'm betting this is where the cspkg gets sent before being deployed, then deleted afterwards. I only wish there was an option to leave the package there after deployment. I love having old packages in the cloud to easily reference.

If we go back into the publish settings again (you may have to click "previous" a few times to get back to the "settings" section) and select "advanced", we can set some of the other options in this file. Here we can set the storage account to be used as well as enable IntelliTrace and profiling.

The new experience does all this using a management certificate that was created for us at the beginning of this process. If you open up the publish-settings file we downloaded at the beginning, you'll find it's an XML document with an encoded string representing the management certificate to be used. Hopefully in a future edition, I'll be able to poke around at these new features a bit more. It appears we may have one or more new APIs at work, as well as some new options to help with service management and build automation.
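
For reference, the downloaded file looks roughly like this. I'm recreating it from memory with the values stubbed out, so treat the exact element and attribute names as approximate:

<?xml version="1.0" encoding="utf-8"?>
<PublishData>
  <PublishProfile
    PublishMethod="AzureServiceManagementAPI"
    Url="https://management.core.windows.net/"
    ManagementCertificate="[base64-encoded management certificate]">
    <Subscription Id="[subscription id]" Name="my subscription" />
  </PublishProfile>
</PublishData>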

What next?

There's additional poking around I need to do with these new features. But there's some great promise here. Out of the box, developers managing one or two accounts are going to see HUGE benefits. Devs in large, highly structured, security-restricted shops are more likely to keep to their existing mechanisms, or look at leveraging this to enhance their existing automated processes.

Meanwhile, I’ll keep poking at this a little bit as well as the other new features of this SDK and report back when I have more.

But that will have to wait until next time.

SQL Azure Error Codes (Year of Azure–Week 17)

Ok, I'm not counting last week as a 'Year of Azure' post. I could, but I feel it was too much of a softball for me to bear. Unfortunately, I was even less productive this week in getting a new post out. I started with a new client, and the first weeks, especially when travelling, are horrible for me doing anything except going back to the hotel and sleeping.

However, I have spent time the last few weeks working over a most difficult question: the challenges of SQL Azure throttling behaviors and error reporting.

Reference Materials

Now, on the surface SQL Azure is a perfectly wonderful relational database solution. However, when you begin subjecting it to a significant load, its limitations start becoming apparent. And when this happens, you'll find you get back various error codes that you have to decode.

Now, I could dive into an hours-long discussion regarding architectural approaches for creating scalable SQL Azure data stores. A discussion, mind you, which would be completely enjoyable, very thought provoking, and for which I'm less well equipped than many folks (databases just aren't my key focus, I leave those to better…. er…. more interested people *grin*). For a nice video on this, be sure to check out the TechEd 2011 video on the subject.

Anyways…

Deciphering the code

So if you read the link on error codes, you'll find there are several steps needed to decode them. Fortunately for me, while I have been fairly busy, I have access to a resource that wasn't: one fairly brilliant Andrew Espenes. Andrew was kind enough to take on a task for me and look at deciphering the code, in a show of skill that demonstrates I'm becoming far older than I would like to believe.

Anyways, he pulled together some code that I wanted to share. Some code that leverages a technique I haven't used since my college days of writing Basic Assembly Language (BAL) code. Yes, I am that old.

So let's fast forward down the extensive link I gave you earlier to the "Decoding Reason Codes" section. Our first stop will actually be adjusting the reason code into something usable. The MSDN article says to apply modulo 4 to the reason code:

ThrottlingMode = (AzureReasonDecoder.ThrottlingMode)(codeValue % 4);

Next determine the resource type (data space, CPU, Worker Threads, etc…):

int resourceCode = (codeValue / 256);

And finally, we’ll want to know the throttling type (hard vs. soft):

// keep the upper two bits (16-17), which flag hard throttling
int adjustedResourceCode = resourceCode & 0x30000;
// shift them down so they sit alongside the soft-throttling bits
adjustedResourceCode = adjustedResourceCode >> 4;
// fold them back into the original resource bits
adjustedResourceCode = adjustedResourceCode | resourceCode;
// and keep just the lower 16 bits as the final flags value
resourceCode = adjustedResourceCode & 0xFFFF;
ResourcesThrottled = (ResourceThrottled)resourceCode;
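
Pulling those three steps together, here's a rough sketch of what a decoder helper could look like. Fair warning: the enum members and values below are placeholders based on my reading of the MSDN article, not an official contract, so double-check them against the documentation (Andrew's full class will follow in a later post):

// Sketch of a reason-code decoder. Enum values are placeholders;
// verify them against the MSDN "Decoding Reason Codes" section.
public class AzureReasonDecoder
{
    public enum ThrottlingMode
    {
        NoThrottling = 0,       // placeholder values
        RejectUpdateInsert = 1,
        RejectAllWrites = 2,
        RejectAll = 3
    }

    [Flags]
    public enum ResourceThrottled
    {
        None = 0,               // placeholder values
        PhysicalDatabaseSpace = 0x1,
        PhysicalLogSpace = 0x2,
        LogWriteIoDelay = 0x4,
        DataReadIoDelay = 0x8,
        Cpu = 0x10,
        DatabaseSize = 0x20,
        WorkerThreads = 0x40
    }

    public ThrottlingMode Mode { get; private set; }
    public ResourceThrottled ResourcesThrottled { get; private set; }

    public void Decode(int codeValue)
    {
        // mode is the reason code modulo 4
        Mode = (ThrottlingMode)(codeValue % 4);

        // shift past the mode bits to get at the resource bits
        int resourceCode = codeValue / 256;

        // fold the upper (hard throttling) bits down into the lower word
        int adjusted = (resourceCode & 0x30000) >> 4;
        resourceCode = (adjusted | resourceCode) & 0xFFFF;
        ResourcesThrottled = (ResourceThrottled)resourceCode;
    }
}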

Next Time

Now I warned you that I was short on time, and while I have some items I'm working on for future updates, I do want to spend some time this weekend with family. So I need to hold some of this until next week, when I'll post a class Andrew created for using these values and some samples for leveraging them.

Until next time!

Windows Azure In-place Upgrades (Year of Azure – Week 16)

On Wednesday, Windows Azure unveiled yet another substantial service improvement, enhancements to in-place upgrades. Before I dive into these enhancements and why they’re important, I want to talk first about where we came from.

PS – I say "in-place upgrade" because the button on the Windows Azure portal is labeled "upgrade". But the latest blog post calls this an "update". As far as I'm concerned, these are synonymous.

Inside Windows Azure

If you haven't already, I encourage you to set aside an hour, turn off your phone, email, and yes even Twitter so you can watch Mark Russinovich's "Inside Windows Azure" presentation. Mark does an excellent job of explaining that within the Windows Azure datacenter, we have multiple clusters. When you select an affinity group, this tells the Azure Fabric Controller to try and put all resources aligned with that affinity group into the same cluster. Within a cluster, you have multiple server racks, each with multiple servers, each in turn with multiple cores.

Now these resources are divided up essentially into slots, with each slot being the space necessary for a small-size Windows Azure instance (one 1.6 GHz core and 1.75 GB of RAM). When you deploy your service, the Azure Fabric will allocate these slots (1 for a small, 2 for a medium, etc…) and provision a guest virtual machine that allocates those resources. It also sets up the VHD that will be mounted into that VM for any local storage you've requested, and configures the firewall and load balancers for any endpoints you've defined.

These parameters, the instance size, endpoints, local storage… are what I’ve taken to calling the Windows Azure service signature.

Now if this signature wasn't changing, you had the option of deploying new bits to your cloud service using the "upgrade" option. This allowed you to take advantage of upgrade domains to do a rolling update and deploy functional changes to your service. The advantage of the in-place upgrade was that you didn't "restart the clock" on your hosting costs (the hourly billing for Azure works like cell phone minutes), and it was also faster, since the provisioning of resources was a bit more streamlined. I've seen a single developer deploying a simple service eat through a couple hundred compute hours in a day just by deleting and redeploying. So this was an important feature to take advantage of whenever possible.

If we needed to change this service signature, we were forced to either stop/delete/redeploy our service, or deploy to another slot (staging or a separate service) and perform either a VIP or DNS swap. In the case of a change in size, this was because the instance might have to be moved to a new set of "slots" to get the resources you wanted. For the firewall/load balancer changes, I'm not quite sure what the limitation was. But this was life as we've known it in Azure for the last (dang, has it really been this long?)… 2+ years. With this update, most of these limitations have been removed.

What’s new?

With the new enhancements we can basically forget about the service signature. The gloves are officially off! We will need the 1.5 SDK to take advantage of changes to size, local storage, or endpoints, but that's a small price to pay. Especially since the management API already supports these changes.

The downside is that the Visual Studio tools do not currently take advantage of this feature. However, with Scott "the Gu" Guthrie at the helm of the Azure tools, I expect this won't be the case for long.

I'd dive more into exactly how to use this new feature, but honestly the team blog has done a great job and I can't see myself wanting to add anything (aside from the backstory I already have). So that's all for this week.

Until next time!

Configuration in Azure (Year of Azure–Week 14)

Another late post, and one that isn't nearly what I wanted it to be. I'm about a quarter of the way through this year of weekly updates and frankly, I'm not certain I'll be able to complete it. Things continue to get busy with more and more distractions lined up. Anyways…

So my “spare time” this week has been spent looking into configuration options.

How do I know where to load a configuration setting from?

So you've sat through some Windows Azure training, and they explained that you have the service configuration, that you should use it instead of the web.config, and they covered using RoleEnvironment.GetConfigurationSettingValue. But how do you know which location to pull a setting from? This is where RoleEnvironment.IsAvailable comes into play.

Using this value, we can write code that will pull from the proper source depending on the environment our application is running in, like the snippet below:

if (RoleEnvironment.IsAvailable)
    return RoleEnvironment.GetConfigurationSettingValue("mySetting");
else
    return ConfigurationManager.AppSettings["mySetting"].ToString();

Take this a step further and you can put this logic into a property so that all your code can just reference the property. Simple!
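
For example, a minimal sketch (the "MySetting" name is just a stand-in):

// Pulls from the service configuration when running under the Azure
// fabric, and from app.config/web.config otherwise.
// "MySetting" is a placeholder setting name.
public static string MySetting
{
    get
    {
        return RoleEnvironment.IsAvailable
            ? RoleEnvironment.GetConfigurationSettingValue("MySetting")
            : ConfigurationManager.AppSettings["MySetting"];
    }
}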

But what about CloudStorageAccount?

Ok, but CloudStorageAccount has methods that automatically load from the service configuration. If I've written code to take advantage of this, I'm stuck. Right? Well, not necessarily. Now you may have seen a code snippet like this before:

CloudStorageAccount.SetConfigurationSettingPublisher(
    (configName, configSetter) =>
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName))
);

This is the snippet that needs to be in place to avoid the "SetConfigurationSettingPublisher needs to be called before FromConfigurationSetting can be used." error message. But what is really going on here is that we are setting a handler for retrieving configuration settings. In this case, RoleEnvironment.GetConfigurationSettingValue.
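
Once that publisher has been set (typically early on, such as in OnStart), the familiar factory method resolves settings through it. "DataConnectionString" here is just a sample setting name:

// Resolves the setting via the publisher we registered above.
var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");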

But as is illustrated by a GREAT post from Windows Azure MVP Steven Nagy, you can set your own handler, and in that handler you can roll your own provider that looks something like this:

public static Action<string, Func<string, bool>> GetConfigurationSettingPublisher()
{
    if (RoleEnvironment.IsAvailable)
        return (configName, configSetter) =>
            configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
    return (configName, configSetter) =>
        configSetter(ConfigurationManager.AppSettings[configName]);
}
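
Wiring the custom handler up is then a one-liner, again typically in OnStart:

// Register our environment-aware handler as the setting publisher.
CloudStorageAccount.SetConfigurationSettingPublisher(GetConfigurationSettingPublisher());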

Flexibility is good!

Where to next?

Keep in mind that these two examples both focus on pulling from configuration files already available to us. There's nothing stopping us from creating handlers that pull from other sources, or that take a single string configuration setting containing an XML document and hydrate it, as sketched below. We can pull settings from persistent storage or perhaps even another service. The options are up to us.
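
As a quick illustration of that last idea, here's a minimal sketch that hydrates a class from a single XML-valued setting. The MySettings type and the "ComplexSettings" name are made up for the example:

// Hypothetical settings class hydrated from one XML configuration value.
public class MySettings
{
    public string SmtpServer { get; set; }
    public int RetryCount { get; set; }
}

public static MySettings LoadComplexSettings()
{
    // "ComplexSettings" is a made-up setting name whose value is an XML document.
    string xml = RoleEnvironment.GetConfigurationSettingValue("ComplexSettings");
    var serializer = new System.Xml.Serialization.XmlSerializer(typeof(MySettings));
    using (var reader = new System.IO.StringReader(xml))
    {
        return (MySettings)serializer.Deserialize(reader);
    }
}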

Next week, I hope (time available of course) to put together a small demo of how to work with encrypted settings. So until then!

PS – yes, I was renewed as an Azure MVP for another year! #geekgasm

Leveraging the RoleEntryPoint (Year of Azure – Week 12)

So the last two weeks have been fairly "code lite", my apologies. Work has had me slammed the last 6 weeks or so, and it was finally catching up with me. I took this week off (aka minimal conference calls/meetings), but today my phone is off and I have NOTHING on my calendar. So I finally get to turn my hand to a more technical blog post.

When not architecting cloud solutions or writing code for clients, I've been leading Azure training. Something that usually comes up, and that I make a point of diving into fairly well, is how to be aware of changes in a service's environment. In the pre-1.3 SDK days, the default role entry points always had a method in them to handle configuration changes. But since that has gone away and we have the ability to use startup scripts, not as much attention gets paid to these things. So today we'll review it and call out a few examples.

Yay for code samples!

Methods and Events

There are two groups of hooks that allow us to respond to events or changes in role state/status: methods declared in the RoleEntryPoint class, and events in the RoleEnvironment class. But before we dive into these two, we should understand the lifecycle of a role instance.

According to an excellent post by the Azure team from earlier this year, the sequence of events in role instances we can respond to is: OnStart, Changing, Changed, Stopping, and OnStop. I'll add two items to this: Run, which follows OnStart, and StatusCheck, which is used by the Azure agent to determine if the instance is "ready" to receive requests from the load balancer, or is "busy".

So let's walk through these one by one.

OnStart is where it all begins. When a role instance is started, the Azure Agent will reflect over the role's primary assembly and, upon finding a class that inherits from RoleEntryPoint, will call that class's OnStart method. Now by default, that method will usually look like this:

public override bool OnStart()
{
    // Set the maximum number of concurrent connections
    ServicePointManager.DefaultConnectionLimit = 12;

    // For information on handling configuration changes
    // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

    return base.OnStart();
}

And if we created a WorkerRole, we’ll also have a default Run method that looks like this:

public override void Run()
{
    // This is a sample worker implementation. Replace with your logic.
    Trace.WriteLine("WorkerRole1 entry point called", "Information");

    while (true)
    {
        Thread.Sleep(10000);
        Trace.WriteLine("Working", "Information");

    }
}

The Run will be called after OnStart. But here is a curveball: we can add a Run method to a web role and it will also be called by the Azure Agent.
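
A minimal sketch of what that looks like in a web role (remember: if Run ever returns, the Azure Agent will recycle the instance):

public class WebRole : RoleEntryPoint
{
    public override void Run()
    {
        // Background processing for a web role; this loop must not exit,
        // or the role instance will be recycled.
        while (true)
        {
            Thread.Sleep(10000);
            Trace.WriteLine("Background work from the web role", "Information");
        }
    }
}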

Next up, we have the OnStop method.

public override void OnStop()
{
    try
    {
        // Add code here that runs when the role instance is to be stopped
    }
    catch (Exception e)
    {
        Trace.WriteLine("Exception during OnStop: " + e.ToString());
        // Take other action as needed.
    }
}

This method is a great place to try to shut our instance down in a controlled and graceful manner. The catch is that we can't take more than 30 seconds, or the instance will be shut down hard. So anything we're going to do, we'll need to do quickly.

We do have another opportunity to start handling shutdown: the RoleEnvironment.Stopping event. This is called once the instance has been taken out of the load balancer, but isn't called when the guest VM is rebooted. Because this is an event, we have to create not just the event handler, but also wire it up:

RoleEnvironment.Stopping += RoleEnvironmentStopping;

private void RoleEnvironmentStopping(object sender, RoleEnvironmentStoppingEventArgs e)
{
    // Add code that is run when the role instance is being stopped
}

Also related to the load balancer is another event we can handle: StatusCheck. This can be used to tell the Agent whether the role instance should or shouldn't get requests from the load balancer.

RoleEnvironment.StatusCheck += RoleEnvironmentStatusCheck;

// Use the busy object to indicate that the status
// of the role instance must be Busy
private volatile bool busy = false;

private void RoleEnvironmentStatusCheck(object sender, RoleInstanceStatusCheckEventArgs e)
{
   if (this.busy)
      e.SetBusy();
}

But we’re not done yet…

Handling Environment Changes

Now there are two more events we can handle, Changing and Changed. These events are ideal for handling changes to the service configuration. We can optionally decide to restart our role instance by setting the event’s RoleEnvironmentChangingEventArgs.Cancel property to true during the Changing event.

RoleEnvironment.Changing += RoleEnvironmentChanging;

private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    // Fires before the change is applied; set e.Cancel = true here to
    // recycle the instance instead of applying the change while running
}

RoleEnvironment.Changed += RoleEnvironmentChanged;

private void RoleEnvironmentChanged(object sender, RoleEnvironmentChangedEventArgs e)
{
    // Fires after the change has been applied to the running instance
}

The real value in both of these is detecting and handling changes. If we just want to iterate through the changes, we can put in a code block like this:

// Get the list of configuration changes
var settingChanges = e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>();

foreach (var settingChange in settingChanges)
{
    var message = "Setting: " + settingChange.ConfigurationSettingName;
    Trace.WriteLine(message, "Information");
}

If you wanted to only handle Topology changes (say a role instance being added or removed), you would use a snippet like this:

// topology changes
var changes = from ch in e.Changes.OfType<RoleEnvironmentTopologyChange>()
              where ch.RoleName == RoleEnvironment.CurrentRoleInstance.Role.Name
              select ch;
if (changes.Any())
{
    // Topology change occurred in the current role
}
else
{
    // Topology change occurred in a different role
}

Lastly, there are times where you may only be updating a configuration setting. If you want to test for this, we'd use a snippet like this:

if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
{
    // set Cancel to true to recycle (restart) the instance rather than
    // apply the configuration change while it is running
    e.Cancel = true;
}

Discovery of the RoleEntryPoint

This all said, there are two common questions that come up: how does Azure find the entry point, and can I set up a common entry point to be used by multiple roles? I'll address the latter first.

The easiest way I've found to create a shared RoleEntryPoint is to set it up in its own class library, then add a reference to that library to all role projects. In each role, change the default RoleEntryPoint to inherit from the shared class. Simple enough to set up (takes less than 5 minutes) and easy for anyone used to object-oriented programming to wrap their heads around.
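
Here's a minimal sketch of that approach (the class names are mine):

// In a shared class library referenced by every role project.
public class SharedRoleEntryPoint : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Common wiring that every role should get, e.g. event handlers.
        RoleEnvironment.Changing += (sender, e) => { /* shared handling */ };
        return base.OnStart();
    }
}

// In each role project, inherit from the shared class instead of
// inheriting from RoleEntryPoint directly.
public class WebRole : SharedRoleEntryPoint
{
}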

The first question, about discovery, is a bit more complicated. If you read through that entire thread, you'll find references to a "targets" file and the cloud service project. Prior to the new 1.5 SDK, this was true. But 1.5 introduced an update to the web and worker role schemas: a new element, NetFxEntryPoint, that we can use to specify the assembly to be searched for the RoleEntryPoint. Using this, you can point directly to an assembly that contains the RoleEntryPoint.
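
In the service definition, that element looks something like the snippet below. The assembly name is a placeholder, and you should double-check the exact form against the 1.5 schema documentation:

<WorkerRole name="MyWorkerRole" vmsize="Small">
  <Runtime>
    <EntryPoint>
      <NetFxEntryPoint assemblyName="MySharedEntryPoint.dll"
                       targetFrameworkVersion="v4.0" />
    </EntryPoint>
  </Runtime>
</WorkerRole>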

Both approaches work, so use the one that best fits your needs.

And now we exit…

Everything I've put in this post is available in the MSDN samples, so I haven't really built anything new here. But what I have done is create a new Cloud Service project that contains all the methods and events, and even demonstrates the inheritance approach for a shared entry point. It's a nice reference project that you can copy/paste from when you need examples, without having to hunt through MSDN. You can download it from here.

That’s all for this week. So until next time!

The Virtual DMZ (Year of Azure Week 10)

Hey folks, super short post this week. I'm busy beyond belief and don't have time for much. What I want to call out is something that's not really new, but that I believe hasn't been mentioned enough: securing services hosted in Windows Azure so that only the parties I want can connect.

In a traditional infrastructure, we'd use mutual authentication and certificates. Both communicating parties have to have a certificate installed, and they exchange/validate them when establishing a connection. If you only share/exchange certificates with people you trust, this makes for a fairly easy way to secure a service. The challenge, however, is that if you have 10,000 customers you connect with in this manner, you now have to coordinate a change in the certificates with all 10,000.

Well, if we add in the Azure AppFabric's Access Control Service, we can help mitigate some of those challenges: set up a rule that will take multiple certificates and issue a single standardized token. I'd heard of this approach awhile back but never had time to explore it or create a working demo of it. Well, I needed one recently, so I put out some calls to my network, and fortunately a colleague down in Texas found something ON MSDN that I'd never run across: How To: Authenticate with a Client Certificate to a WCF Service Protected by ACS.

I've taken lately to referring to this approach as the creation of a "virtual DMZ". You have one or more publicly accessible services running in Windows Azure with input endpoints. You then have another set of "private" services, also with input endpoints, secured by certificates and the ACS.

A powerful option, and one that by using the ACS isn't overly complex to set up or manage. Yes, there's an initial hit with calls to the secured services, because they first need to get a token from the ACS before calling the service, but they can then cache that token until it expires to make sure subsequent calls aren't impacted as badly.

So there we have it. Short and sweet this week, and sadly sans any code (again). So until next time… send me any suggestions for code samples.

Uploading Image Blobs–Stream vs Byte Array (Year of Azure Week 4)

Ok, I promised you when I started this Year of Azure project that some of these would be short. It's been a busy week, so I'm going to give you a quick one based on a question posted on the MSDN Azure forums.

The question centered around getting the following code snippet to work (I've paraphrased it a bit).

  1. MemoryStream streams = new MemoryStream();
  2.  
  3. // create storage account
  4. var account = CloudStorageAccount.DevelopmentStorageAccount;
  5. // create blob client
  6. CloudBlobClient blobStorage = account.CreateCloudBlobClient();
  7.  
  8. CloudBlobContainer container = blobStorage.GetContainerReference("guestbookpics");
  9. container.CreateIfNotExist(); // adding this for safety
  10.  
  11. string uniqueBlobName = string.Format("image_{0}.jpg", Guid.NewGuid().ToString());
  12.  
  13. CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
  14. blob.Properties.ContentType = "image/jpeg";
  15.  
  16. System.Drawing.Image imgs = System.Drawing.Image.FromFile("waLogo.jpg");
  17.  
  18. imgs.Save(streams, ImageFormat.Jpeg);
  19.  
  20. byte[] imageBytes = streams.GetBuffer();
  21.  
  22. blob.UploadFromStream(streams);
  23.  
  24. imgs.Dispose();
  25. streams.Close();

Now the crux of the problem was that the resulting image in storage was empty (zero bytes). As Steve Marx correctly pointed out, the key thing missing is resetting the stream to position zero. So the corrected code would look like this. Note the addition of line 22. It fixes things just fine.

  1. MemoryStream streams = new MemoryStream();
  2.  
  3. // create storage account
  4. var account = CloudStorageAccount.DevelopmentStorageAccount;
  5. // create blob client
  6. CloudBlobClient blobStorage = account.CreateCloudBlobClient();
  7.  
  8. CloudBlobContainer container = blobStorage.GetContainerReference("guestbookpics");
  9. container.CreateIfNotExist(); // adding this for safety
  10.  
  11. string uniqueBlobName = string.Format("image_{0}.jpg", Guid.NewGuid().ToString());
  12.  
  13. CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
  14. blob.Properties.ContentType = "image/jpeg";
  15.  
  16. System.Drawing.Image imgs = System.Drawing.Image.FromFile("waLogo.jpg");
  17.  
  18. imgs.Save(streams, ImageFormat.Jpeg);
  19.  
  20. byte[] imageBytes = streams.GetBuffer();
  21.  
  22. streams.Seek(0, SeekOrigin.Begin);
  23. blob.UploadFromStream(streams);
  24.  
  25. imgs.Dispose();
  26. streams.Close();

But the root issue I still have here is that the original code sample is pulling a byte array but not doing anything with it. And a byte array is still a valid method of uploading the image. So I reworked the sample a bit to support this.

  1. MemoryStream streams = new MemoryStream();
  2.  
  3. // create storage account
  4. var account = CloudStorageAccount.DevelopmentStorageAccount;
  5. // create blob client
  6. CloudBlobClient blobStorage = account.CreateCloudBlobClient();
  7.  
  8. CloudBlobContainer container = blobStorage.GetContainerReference("guestbookpics");
  9. container.CreateIfNotExist(); // adding this for safety
  10.  
  11. string uniqueBlobName = string.Format("image_{0}.jpg", Guid.NewGuid().ToString());
  12.  
  13. CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
  14. blob.Properties.ContentType = "image/jpeg";
  15.  
  16. System.Drawing.Image imgs = System.Drawing.Image.FromFile("waLogo.jpg");
  17.  
  18. imgs.Save(streams, ImageFormat.Jpeg);
  19.  
  20. byte[] imageBytes = streams.GetBuffer();
  21. blob.UploadByteArray(imageBytes);
  22.  
  23. imgs.Dispose();
  24. streams.Close();

We pulled out the reset of the stream and replaced UploadFromStream with UploadByteArray.

Funny part is that while both samples work, the resulting blobs are different sizes. And since these aren't the only ways to upload files, there might be other sizes available. I'm short on time, so maybe we'll explore that a bit further another day. The mysteries of Azure never cease.
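
If you want to poke at it before I do, my hunch is the culprit is MemoryStream.GetBuffer(), which returns the stream's entire internal buffer (including unused capacity), whereas ToArray() copies only the bytes actually written. A quick check along these lines would confirm or refute that:

// Compare the raw buffer size against the actual data length.
// If GetBuffer() is larger, the UploadByteArray sample uploaded padding.
Console.WriteLine("GetBuffer length: " + streams.GetBuffer().Length);
Console.WriteLine("ToArray length:   " + streams.ToArray().Length);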

Next time!