Long Running Queue Processing Part 2 (Year of Azure–Week 20)

So back in July I published a post on doing long running queue processing. In that post we put together a nice sample app that inserted some messages into a queue, read them one at a time, and took 30 seconds to process each message. The processing happened on a background thread so that we could monitor it.

This approach was all well and good, but it hinged on us knowing the maximum amount of time it would take to process a message. Fortunately for us, in the latest 1.6 version of the Azure Tools (aka the SDK), the storage client was updated to take advantage of the new “update message” functionality introduced to queues by an earlier service update. So I figured it was time to update my sample.

UpdateMessage

Fortunately for me, given the upcoming holiday (which doesn’t leave me much time for blogging, since my family lives in “the boonies” and hasn’t yet opted for an internet connection, much less broadband), updating a message is SUPER simple.

myQueue.UpdateMessage(aMsg, new TimeSpan(0, 0, 30), MessageUpdateFields.Visibility);

All we need is the message we read (which contains the pop receipt the underlying API uses to update the invisible message), the new timespan, and finally a flag to tell the API whether we’re updating the message content/payload or its visibility. In the sample above, we are of course setting its visibility.

Ok, time for turkey and dressing! Oh wait, you want the updated project?

QueueBackgroundProcess w/ UpdateMessage

Alright, so I took exactly the same code we used before. It inserts 5 messages into a queue, then reads and processes each individually. The outer processing loop looks like this:

while (true)
{
    // read messages from queue and process one at a time...
    CloudQueueMessage aMsg = myQueue.GetMessage(new TimeSpan(0, 0, 30)); // 30 second visibility timeout

    // trap no message.
    if (aMsg != null)
    {
        Trace.WriteLine("got a message, '" + aMsg.AsString + "'", "Information");

        // start processing the message on a background thread
        Work workerObject = new Work();
        workerObject.Msg = aMsg;
        Thread workerThread = new Thread(workerObject.DoWork);
        workerThread.Start();

        while (workerThread.IsAlive)
        {
            // keep the message invisible while the worker is still running
            myQueue.UpdateMessage(aMsg, new TimeSpan(0, 0, 30), MessageUpdateFields.Visibility);
            Trace.WriteLine("Updating message expiry");
            Thread.Sleep(15000); // sleep for 15 seconds
        }

        if (workerObject.isFinished)
            myQueue.DeleteMessage(aMsg.Id, aMsg.PopReceipt); // I could just use the message, illustrating a point
        else
        {
            // here, we should check the dequeue count
            // and move the msg to a poison message queue
        }
    }
    else
        Trace.WriteLine("no message found", "Information");

    Thread.Sleep(1000);
    Trace.WriteLine("Working", "Information");
}
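For reference, here’s roughly what the Work class looks like. The real one is in the downloadable project, so treat this as a sketch that just simulates the one-minute workload:

using System;
using System.Threading;
using Microsoft.WindowsAzure.StorageClient;

// A sketch of the background worker class used above. The actual class ships
// with the sample project; this version just simulates a one-minute task.
public class Work
{
    public CloudQueueMessage Msg { get; set; }
    public volatile bool isFinished; // a field (not a property) so it can be volatile

    public void DoWork()
    {
        Thread.Sleep(TimeSpan.FromMinutes(1)); // stand-in for the real processing
        isFinished = true;
    }
}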

The while loop is the processing loop of the worker role this all runs in. I decreased the initial visibility timeout from 2 minutes to 30 seconds, changed our monitoring of the background processing thread from every 1/10th of a second to every 15 seconds, and added the updating of the message’s visibility timeout.

The inner process was also upped from 30 seconds to 1 minute. Now here’s where the example kicks in! Since the original read only specified a 30 second visibility timeout, and my background process will take one minute, it’s important that I update the visibility timeout or the message would fall back into view. So I’m extending it by another 30 seconds every 15 seconds, thus keeping it invisible.

Ta-da! Here’s the project if you want it.

So unfortunately that’s all I have time for this week. I hope all of you in the US enjoy your Thanksgiving holiday weekend (I’ll be spending it with family and not working, thankfully). And we’ll see you next week!

Cloud Computing as a Financial Institution

Ok, I’m sure the title of this post is throwing you a bit, but please bear with me.

I’ve been travelling the last few weeks. I’m driving, as it’s only about 250 miles away. And driving, unlike flying, leaves you with a lot of time to think. This train of thought dawned on me yesterday as I was cruising down I-35 south somewhere between Clear Lake and Ames in northern Iowa.

The conventional thinking

So the common theme you’ll often hear when doing “intro to cloud” presentations is comparing it to a utility. I’ve done this myself countless times. The story goes that as a business owner, what you need is, say, light. Not electricity, not a generator, not a power grid.

Your utility company manages the infrastructure to deliver power to your door, so when you turn on the switch, you get the light you wanted. You don’t have to worry about how it gets there. Best yet, you only pay for what you use. No need to spend hundreds of millions on building a power plant and the infrastructure to deliver that power to your office.

I really don’t have an issue with this comparison. It’s easy to relate to and does a good job of illustrating the model. However, what I realized as I was driving is that this is a one-way example. I’m paying a fee and getting something in return. There’s no real trust issue beyond my provider’s ability to deliver the service.

Why a financial institution?

Ok, push aside the “occupy” movement and the recent distrust of bankers. A bank is where you put assets for safekeeping. You get various services from the provider (ATM, checking account) that allow you to leverage those assets. You pay charges for using some of those services, while others are free. And you have some insurance in place (FDIC) to help protect your assets.

Lastly, and perhaps most importantly, you need to have a level of trust in the institution. You’re putting your valuables in their care. You either need to trust that they are doing what you have asked them to do, or that they have enough transparency that you can know exactly what’s being done.

What really strikes me about this example is that you have some skin in the game and need a certain level of trust in your provider. Just as you trust the airline to get you to your destination on time, you expect your financial provider to protect your assets and deliver the services they have promised.

It’s the same for a cloud provider. You put your data and intellectual property in their hands, or you keep it under your mattress. Their vault is likely more secure than your box spring, but the box spring is what you’re familiar with and trust. It’s up to you to find a cloud provider you can trust, and you need to ask the proper questions to get to that point. Ask for a tour of the vault; audit their books, so to speak. Do your homework.

Do you trust me?

So the point of all this isn’t to get a group of hippies camped out on the doorstep of the nearest datacenter. Instead, the idea is to make you think about what you’re afraid of, especially when you’re considering a public cloud provider. Cloud computing is about trusting your provider, but also taking responsibility for doing your homework. If you’re going to trust someone with your most precious possessions, be sure you know exactly how far you can trust them.

Enhanced Visual Studio Publishing (Year of Azure–Week 19)

With the latest 1.6 SDK (ok, now it’s actually called the Azure Authoring Tools), Scott Guthrie’s promise of a better developer publishing experience has landed. Building upon the multiple cloud configuration options that were delivered back in September with the Visual Studio tools update, we have an even richer experience.

Now the first thing you’ll notice is that the publish dialog has changed. The first time you run it, you’ll need to sign in and get things set up.


Clicking the “Sign in to download credentials” link will send you to the windows.azure.com website, where a publish-settings file will be generated for you to download. Following the instructions, you’ll download the file, then import it into the publishing window shown above. Then you can choose a subscription from the populated drop-down and proceed.

A wee bit of warning on this though. If you have access to multiple subscriptions (you own one or are a co-admin), the creation of a publish-settings file will install the new certificate in each subscription. Additionally, each time you click “Sign in to download credentials”, you will end up with another cert. These aren’t things to be horrified about; I just wanted to make sure I gave you a heads up.

Publish Settings

Next up are the publication settings. Here we can select a service to deploy to or create a new one (YEAH!). You can also easily set the environment (production or staging), the build configuration, and the service configuration file to be used. Setting up remote desktop is also as easy as a checkbox.


In the end, these settings get captured into a ‘profile’ that is saved and can then be reused. Upon completion, the cloud service project will get a new folder, “Profiles”. In this folder you’ll find an XML file with the extension .azurePubxml that contains the publication settings:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <AzureCredentials>my subscription</AzureCredentials>
    <AzureHostedServiceName>service name</AzureHostedServiceName>
    <AzureHostedServiceLabel>service label</AzureHostedServiceLabel>
    <AzureSlot>Production</AzureSlot>
    <AzureEnableIntelliTrace>False</AzureEnableIntelliTrace>
    <AzureEnableProfiling>False</AzureEnableProfiling>
    <AzureEnableWebDeploy>False</AzureEnableWebDeploy>
    <AzureStorageAccountName>bmspublic</AzureStorageAccountName>
    <AzureStorageAccountLabel>bmspublic</AzureStorageAccountLabel>
    <AzureDeploymentLabel>newsdk</AzureDeploymentLabel>
    <AzureSolutionConfiguration>Release</AzureSolutionConfiguration>
    <AzureServiceConfiguration>Cloud</AzureServiceConfiguration>
    <AzureAppendTimestampToDeploymentLabel>True</AzureAppendTimestampToDeploymentLabel>
    <AzureAllowUpgrade>True</AzureAllowUpgrade>
    <AzureEnableRemoteDesktop>False</AzureEnableRemoteDesktop>
  </PropertyGroup>
</Project>

This file contains a reference to a storage account, and when I looked at that account I noticed a new container in it called “vsdeploy”. The container was empty, but I’m betting this is where the cspkg gets sent before being deployed, and that it’s deleted afterwards. I only wish there were an option to leave the package there after deployment; I love having old packages in the cloud to easily reference.
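If you want to poke at that container yourself, a quick sketch using the 1.x storage client will do it (the connection string values below are placeholders; swap in your own account):

// List whatever is sitting in the "vsdeploy" container.
// Assumes Microsoft.WindowsAzure.StorageClient (SDK 1.x).
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class VsDeployPeek
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey");
        var container = account.CreateCloudBlobClient().GetContainerReference("vsdeploy");

        foreach (var item in container.ListBlobs())
            Console.WriteLine(item.Uri); // anything left behind after a deploy
    }
}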

If we go back into the publish settings (you may have to click “previous” a few times to get back to the “settings” section) and select “advanced”, we can set some of the other options in this file. Here we can set the storage account to be used, as well as enable IntelliTrace and profiling.

The new experience does all this using a management certificate that was created for us at the beginning of this process. If you open up the publish-settings file we downloaded at the beginning, you’ll find it’s an XML document with an encoded string representing the management certificate to be used. Hopefully in a future edition I’ll be able to poke around at these new features a bit more. It appears we may have one or more new APIs at work, as well as some new options to help with service management and build automation.
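To illustrate, here’s a sketch of cracking open a downloaded publish-settings file. The element and attribute names match the files the portal generated for me (a PublishData root, a PublishProfile element with a ManagementCertificate attribute, and one Subscription element per subscription); the file path is obviously a placeholder:

// Peek inside a .publishsettings file: the management certificate is just a
// base64-encoded certificate stored in an attribute.
using System;
using System.Security.Cryptography.X509Certificates;
using System.Xml.Linq;

class PublishSettingsPeek
{
    static void Main()
    {
        XDocument doc = XDocument.Load(@"C:\temp\my.publishsettings"); // placeholder path
        XElement profile = doc.Root.Element("PublishProfile");

        var cert = new X509Certificate2(
            Convert.FromBase64String((string)profile.Attribute("ManagementCertificate")));
        Console.WriteLine("Certificate thumbprint: " + cert.Thumbprint);

        foreach (XElement sub in profile.Elements("Subscription"))
            Console.WriteLine("Subscription: " + (string)sub.Attribute("Name") +
                " (" + (string)sub.Attribute("Id") + ")");
    }
}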

What next?

There’s additional poking around I need to do with these new features, but there’s some great promise here. Out of the box, developers managing one or two accounts are going to see HUGE benefits. Devs in large, highly structured, and security-restricted shops are more likely to keep to the existing mechanisms, or look at leveraging this to enhance their existing automated processes.

Meanwhile, I’ll keep poking at this, as well as the other new features of this SDK, and report back when I have more.

But that will have to wait until next time.

SQL Azure Throttling & Error Codes (Year of Azure–Week 18)

Ok, last week was weak. Hopefully my grammar is correct. So I want to make it up to you by giving you some meat this week. I’ve been involved lately in several discussions regarding SQL Azure capacity limits (performance, not storage) and throttling. Admittedly this little topic, often glossed over, has bitten the butts of many a Windows Azure project.

So let’s look at it a little more closely, shall we?

Types of Throttling

Now if you go read that link I posted last week on SQL Azure Error Messages, you’ll see that there are two types of throttling that can occur:

Soft Throttling – “kicks in when machine resources such as CPU, IO, storage, and worker threads exceed predefined thresholds”

Hard Throttling – “happens when the machine is out of a given resource”

Now SQL Azure is a multi-tenant system. This means multiple tenants (databases) will sit on the same physical server; there could be several hundred databases sharing the same hardware. SQL Azure uses soft throttling to try to make sure all these tenants get a minimum level of resources. When a tenant starts pushing those limits, soft throttling kicks in and the SQL Azure fabric will try to move tenants around to rebalance the load.

Hard throttling means no new connections. You’ve maxed something out (storage space, worker processes, CPU) and drastic steps should be taken to free those resources up.

SQL Azure Resources

Also in that article, we find the various resource types that we could get throttled on:

  • Physical Database Space
  • Physical Log Space
  • LogWriteIODelay
  • DataReadIODelay
  • CPU
  • Database Size
  • Internal
  • SQL Worker Threads
  • Internal

Now when an error message is returned, you’ll have a throttling type for one or more of these resources. The type could be “no throttling”, “soft”, or “hard”.

Throttling Modes

Now if all this wasn’t confusing enough, we also have three throttling modes. Well, technically four if you count “no throttling”.

Update/Insert – you can’t insert or update anything, or create new objects. But you can still drop tables, delete rows, truncate tables, and read rows.

All Writes – all you can do is read. You can’t even drop/truncate tables.

Reject All – you can’t do anything except contact Windows Azure Support for help.

Now unlike the throttling types, which are reported per resource, the reason code carries only one throttling mode.

But what can we do to stop the errors?

Now honestly, I’d love to explain Andrew’s code to you for deciphering the codes, but I was never good with bitwise operations. So instead I’ll just share the code along with a sample usage, and leave it to folks that are better equipped to explain exactly how it works.

The real question is can we help control throttling? Well if you spend enough time iterating through differing loads on your SQL Azure database, you’ll be able to really understand your limits and gain a certain degree of predictability. But the sad thing is that as long as SQL Azure remains a shared multi-tenant environment, there will always be situations where you get throttled. However, those instances should be a wee bit isolated and controllable via some re-try logic.
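To make that concrete, here’s a minimal retry sketch. The error numbers (40501 for throttling, 40197 and 40613 for other transient failures) come from the SQL Azure error code documentation; the retry count and back-off delay are arbitrary choices of mine, not a prescribed pattern:

// Minimal retry wrapper for transient SQL Azure errors.
using System;
using System.Data.SqlClient;
using System.Threading;

static class SqlAzureRetry
{
    public static void ExecuteWithRetry(string connectionString, string commandText)
    {
        const int maxAttempts = 3; // arbitrary
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(commandText, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                    return;
                }
            }
            catch (SqlException ex)
            {
                // 40501 = throttled; 40197/40613 = other transient failures
                bool transient = ex.Number == 40501 || ex.Number == 40197 || ex.Number == 40613;
                if (!transient || attempt == maxAttempts)
                    throw;
                // back off a bit so the fabric has a chance to rebalance
                Thread.Sleep(TimeSpan.FromSeconds(5 * attempt));
            }
        }
    }
}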

SQL Azure is a great solution, but you can’t assume the dedicated resources you’d have with an on-premises SQL Server solution. You need to account for variations and make sure your application is robust enough to handle intermittent failures. But now we’re starting down the path of trying to design to exceed SLAs, and that’s a topic for another day.

SQL Azure Error Codes (Year of Azure–Week 17)

Ok, I’m not counting last week as a ‘Year of Azure’ post. I could, but I feel it was too much of a softball even for me to bear. Unfortunately, I was even less productive this week in getting a new post out. I started with a new client, and the first weeks, especially when travelling, are horrible for me doing anything except going back to the hotel and sleeping.

However, I have spent time the last few weeks working over a most difficult question: the challenges of SQL Azure throttling behaviors and error reporting.

Reference Materials

Now, on the surface SQL Azure is a perfectly wonderful relational database solution. However, when you begin subjecting it to a significant load, its limitations start becoming apparent. And when that happens, you’ll find you get back various error codes that you have to decode.

Now, I could dive into an hours-long discussion regarding architectural approaches for creating scalable SQL Azure data stores. A discussion, mind you, which would be completely enjoyable and very thought provoking, and for which I’m less well equipped than many folks (databases just aren’t my key focus; I leave those to better…. er…. more interested people *grin*). For a nice video on this, be sure to check out the TechEd 2011 video on the subject.

Anyways…

Deciphering the code

So if you read the link on error codes, you’ll find there are several steps needed to decode them. Fortunately for me, while I have been fairly busy, I have access to a resource that wasn’t: one fairly brilliant Andrew Espenes. Andrew was kind enough to take on the task of deciphering the codes for me. And in a show of skill that demonstrates I’m becoming far older than I would like to believe, he pulled together some code that I want to share. Code that leverages a technique I haven’t used since my college days of writing basic assembly language (BAL) code. Yes, I am that old.

So let’s fast-forward down the extensive link I gave you earlier to the “Decoding Reason Codes” section. Our first stop will actually be adjusting the reason code into something usable. The MSDN article says to apply modulo 4 to the reason code to get the throttling mode:

ThrottlingMode = (AzureReasonDecoder.ThrottlingMode)(codeValue % 4);
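For context, here’s what that enum presumably looks like. The values line up with the throttling modes covered in last week’s post, but the authoritative definition is in Andrew’s class:

// Sketch of the decoder's mode enum; values follow the documented
// "reason code % 4" mapping.
public static class AzureReasonDecoder
{
    public enum ThrottlingMode
    {
        NoThrottling = 0,       // no throttling in effect
        RejectUpdateInsert = 1, // can't insert/update or create
        RejectAllWrites = 2,    // read-only
        RejectAll = 3           // nothing works; call support
    }
}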

Next, determine the resource type (data space, CPU, worker threads, etc.):

int resourceCode = (codeValue / 256);

And finally, we’ll want to know the throttling type (hard vs. soft):

int adjustedResourceCode = resourceCode & 0x30000;
adjustedResourceCode = adjustedResourceCode >> 4;
adjustedResourceCode = adjustedResourceCode | resourceCode;
resourceCode = adjustedResourceCode & 0xFFFF;
ResourcesThrottled = (ResourceThrottled)resourceCode;
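And to pull it all together, a hedged usage sketch. The ResourceThrottled flags enum below is a stand-in of my own: the member names come from the resource list in last week’s post, but the actual bit assignments belong to Andrew’s class, which I’ll share next week:

// Hypothetical stand-in for the decoder's flags enum; the bit positions
// here are assumptions, not the real mapping.
[Flags]
public enum ResourceThrottled
{
    None = 0,
    PhysicalDatabaseSpace = 1,
    PhysicalLogSpace = 2,
    LogWriteIODelay = 4,
    DataReadIODelay = 8,
    Cpu = 16,
    DatabaseSize = 32,
    WorkerThreads = 64,
    Internal = 128
}

// Example only: a made-up reason code, as if already parsed out of the
// error 40501 message text.
int codeValue = 1027;

var mode = (AzureReasonDecoder.ThrottlingMode)(codeValue % 4);

int resourceCode = codeValue / 256;
int adjustedResourceCode = resourceCode & 0x30000;
adjustedResourceCode = adjustedResourceCode >> 4;
adjustedResourceCode = adjustedResourceCode | resourceCode;
resourceCode = adjustedResourceCode & 0xFFFF;
var resourcesThrottled = (ResourceThrottled)resourceCode;

Console.WriteLine("Mode: " + mode + ", resources: " + resourcesThrottled);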

Next Time

Now I warned you that I was short on time, and while I have some items I’m working on for future updates, I do want to spend some time this weekend with family. So I’ll hold the rest until next week, when I’ll post a class Andrew created for using these values, along with some samples for leveraging them.

Until next time!
