Azure Storage – Hands on with Queues, Part 3

I started drafting this when my son’s little league practice was cancelled due to rain. I loaded him down with horrible chores and put my daughter on laundry detail all so I could get this next update out. Family first? Not for this committed techno-geek. It’s about the blog and sharing my excitement with you.

Just don’t tell my wife, OK?

In part 1 of this series we covered the basics of making a REST-based call to Azure storage. In part 2 we played with variations of our first REST call to perform several other functions. In this final part I’m going to wrap things up by showing two final variations, and then we’ll create a web role queue sample project.

The same disclaimers I’ve stated in previous articles apply. I’m no REST expert, and I’m doing my best to keep these samples as simple as possible. So please don’t expect best practices or elegant solutions.

Deleting Messages

If you did as I suggested in part 2 of this series, you played around a bit with peeking versus reading messages from queues. And if you did that, you noticed that when you read a message from a queue, you get back two nodes/columns that are not present when peeking and that you didn’t set when you put the message onto the queue: PopReceipt and MessageId.

When a message is read from the queue, that message will be hidden from any other processes reading the queue until the TimeNextVisible timestamp has passed. By default, this value will be 30 seconds from when the message was read. If you look at the Azure Queue API on MSDN you will see we could have added an optional query string parameter, visibilitytimeout, to our example that would have allowed us to adjust this period from 0 seconds to 2 hours. Feel free to go back and add this in; it all depends on how fast you can copy/paste when we get to testing our new method.

The next value, PopReceipt, is even more important. When we process a queue message, we will need to remove it from the queue permanently. To perform this operation we’ll need the PopReceipt as well as the MessageId values for the message to be removed.  This sets us up for our next new method, the Delete method.

I received some positive feedback on the bulleted lists I used last time to describe the variations of the methods, so I’m sticking with them for now. Here are the highlights for our new “delete messages” method:

  • our method will accept two string parameters: the PopReceipt and MessageId
  • the request operation will be a “DELETE”
  • the URI will use the queue name and the path “/messages/<MessageId>”
  • the PopReceipt will be specified via a query string parameter
  • we’ll look for two possible success status codes from the resulting HttpWebResponse

I want to start by discussing our URI. As you’ll notice, we’re addressing our request to the queue, its messages, and finally one message in particular. The final part of this comes from the passed-in MessageId. You might have expected the message id to be a query string value; that would have made sense to me as well. However, we need to remember that we’re addressing our request to a particular spot in storage, so setting our address to the specific message makes sense.

We then add on our PopReceipt as a query string value. The result of these two operations in our AzureQueue class looks like this:

            string tmpQueryParms = "?popreceipt=" + _Receipt.Trim();

            string tmpURI = this.URI + "/messages/" + _MessageID.Trim() + tmpQueryParms;

Of course, _Receipt and _MessageID are the parameters passed into our method.

Next up, we’ll create the request object and sign it. These are the same operations we’ve done several times before, except that this time we’re using the DELETE HTTP method.

Now all that’s left is to execute our request and check the response:

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            if (response.StatusCode == HttpStatusCode.NoContent)
                result = true; // successfully deleted
            else if (response.StatusCode == HttpStatusCode.NotFound)
                result = true; // message was already deleted
            else
                result = false; // unexpected status code
        }

Note that there are two status codes we treat as success. If the response code was “NoContent”, the message was still there and it was successfully deleted. If the result was “NotFound”, the message had been removed before we made our request. (One caveat: with HttpWebRequest, a 404 actually surfaces as a WebException thrown from GetResponse, so in practice you’d catch that exception and inspect its Response to detect this case.) If this happens, we may want code in place to handle the possibility that the message was processed more than once.

This is where things can get interesting with the interaction between multiple subscribers (processes reading from a queue). If two processes both read a message, each will get a different PopReceipt value. Each time a new PopReceipt is generated, the old one expires. So if you are unable to delete a message, it means that another process has read the message (or that maybe someone deleted all the messages from the queue). Regardless, it will be up to you to decide how best to address the situation. I can only recommend you plan carefully.
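To tie the pieces together, here’s a rough sketch of the addressing scheme in Python (the helper names and the dev-storage URI are mine, not part of the sample project): the message id goes in the path, the pop receipt in the query string, and both 204 and 404 count as handled.

```python
# Hypothetical helpers mirroring the addressing rules above; base_uri stands in
# for the queue's path-style URI, e.g. "http://127.0.0.1:10001/acct/tstQueue".

def build_delete_request(base_uri, message_id, pop_receipt):
    """Return the (method, uri) pair for deleting one queue message."""
    uri = (base_uri + "/messages/" + message_id.strip()
           + "?popreceipt=" + pop_receipt.strip())
    return ("DELETE", uri)

def delete_succeeded(status_code):
    """204 No Content = deleted just now; 404 Not Found = already gone."""
    return status_code in (204, 404)
```

Note that a real pop receipt may contain characters that need URL-encoding; the C# sample (and this sketch) glosses over that.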

Queue Depth

Anyone who has worked with queues previously knows that there is one very important indicator of system health: queue depth. Queue depth is simply the number of messages that exist within a queue at any given time. If the number climbs above a given threshold, it could indicate that there is a performance problem in the queue subscribers or that more subscribers are needed. If the queue remains nearly empty, it may be that you have more queue subscribers than are required. In either case, you’re going to want to be able to poll your queues to see what their depths are so you can act accordingly.
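The kind of decision described above can be sketched as a tiny policy function (Python; the thresholds are made up and would depend entirely on your workload):

```python
def scaling_advice(queue_depth, high_water=100, low_water=5):
    """Toy policy: suggest adding/removing subscribers based on queue depth.
    The high/low water marks are arbitrary illustration values."""
    if queue_depth > high_water:
        return "add subscribers"        # messages are piling up faster than reads
    if queue_depth < low_water:
        return "consider removing one"  # subscribers are mostly idle
    return "steady"
```

You’d call this with the value returned by a method like GetQueueDepth and act on the result in whatever way fits your deployment.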

To that end, we’re going to create a new method for our AzureQueue class, GetQueueDepth. Here are the highlights:

  • the URI will use the queue name only
  • the request operation will be a “GET”
  • we’ll use a query string parameter to tell the queue what operation we’re performing
  • we’ll read a response header to get the value

Most of these we’ve done before, so I’ll just post my snippet setting up the request:

            string tmpURI = this.URI + "?comp=metadata";

            // create base request object and modify
            HttpWebRequest request = AzureQueue.getBaseRequest(tmpURI, "GET"); 

            // create the canonized string we're going to sign
            string StringToSign = AzureQueue.getBaseCanonizedString(request);
            StringToSign += "?comp=metadata"; // required by the signature

            // sign request
            SignRequest(request, StringToSign);

So we take the queue object’s URI, which already contains our path-style URI with the account name and queue name, and append our query string parameter to it. In this case we’re telling the queue we want its metadata. When creating the canonized string for our signature, we need to make sure we include this parameter in the string. We last did this when we were retrieving a list of the queues in our account (which also used the “comp” parameter). The “comp” parameter means we’re requesting a component of the targeted resource (in this case the queue) and, according to the Azure Storage Authentication document on MSDN, we need to make sure we include it.
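As a memory aid for that rule, here’s the shape of the string-to-sign sketched in Python (dummy values; the helper name is mine):

```python
# Dummy-value sketch of the string-to-sign for the metadata request.
# The layout mirrors getBaseCanonizedString plus the appended "?comp=metadata".

def metadata_string_to_sign(verb, ms_date, account, path):
    return (verb + "\n"                      # VERB
            + "\n"                           # Content-MD5 (unused here)
            + "\n"                           # Content-Type (unused here)
            + "\n"                           # Date (we use x-ms-date instead)
            + "x-ms-date:" + ms_date + "\n"  # the one canonicalized header
            + "/" + account + path           # canonicalized resource...
            + "?comp=metadata")              # ...including the comp parameter
```

Leave off that last line and the service will answer with one of those “Forbidden” exceptions I keep mentioning.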

Next up we simply execute our request, check the response status, and get our header from it:

                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    if (response.StatusCode == HttpStatusCode.OK)
                    {
                        result = Convert.ToInt32(
                            response.Headers["x-ms-approximate-messages-count"],
                            CultureInfo.InvariantCulture);
                    }

                    response.Close();
                }

I suppose I should do a couple of checks to make sure the header is there. But I did test against an empty queue and it comes back with zero. So in a nutshell, there we have it. :)

Where to next, a sample implementation

So at this point, there are still other operations we could explore: deleting queues, setting queue metadata, and clearing queues. There are also other options within the operations we’ve already covered that we could get into. But instead, I figured I’d toss in a quick sample implementation that you can use to play with queues a bit.

Our sample application is going to be rigged to put messages into a queue at given intervals, get messages back out of the queue, and peek at the contents. We’ll use a couple of timer controls, a gridview, and a couple of dropdownlists and checkboxes. The implementation will be pretty simple as well:

  • create the queue on page load and get an initial list of messages
  • start inserting messages into the queue at 5 second intervals, update message list each time
  • allow the user to initiate when messages start being removed from the queue
  • allow the user to adjust the intervals at which messages are put into or gotten off the queue

So here are the snippets…

On Page_Load

     if (myQueue == null)
     {
         myQueue = AzureQueue.Create("tstQueue");
         getMessages();
     }

The variable myQueue is a private class instance of our AzureQueue object. We use the static Create method of AzureQueue to get an instance of the object, and finally we call a new private method I’m going to create that will get up to 32 messages (the maximum allowed by a single get operation) from the queue. We could check to make sure an instance of the object was created, but as we’ve already stated, I’m here to demonstrate queue access, not best practices. :) Laziness is only laziness if you’re doing it to avoid expending effort. I have a reason for taking a shortcut!

Here’s the getMessages method:

        protected void getMessages()
        {
            DataSet tmpMessageList = myQueue.Get(32, true);

            if ((tmpMessageList != null) && (tmpMessageList.Tables.Count > 0))
            {
                gvMessages.DataSource = tmpMessageList;
            }
            else
                gvMessages.DataSource = null;
            gvMessages.DataBind();
        }

I’m hopeful you’ll recognize this from our implementation in part 2 of this series. We’re peeking at the queue and getting up to 32 messages. I then bind them to a gridview control. Of course, we won’t have anything to display unless we simulate messages being put on the queue.

        protected void TimerPut_Tick(object sender, EventArgs e)
        {
            myQueue.Put("Msg Inserted at: " + DateTime.Now.ToLongTimeString());

            // update list of messages
            getMessages();
        }

I don’t think I need to explain that this is the tick event of our put timer. It inserts a message into our queue at a given interval. Then we just update the dataset that our gridview is bound to.

This leaves only one final operation, reading (or popping in queue terms) messages off the queue. We’ll do a non-peek get for 1 message, and get its MessageId and PopReceipt. Then we’ll turn around and issue our Delete request. Finally, we have to again update our list of messages for display:

        protected void TimerGet_Tick(object sender, EventArgs e)
        {
            DataSet tmpMessageList = myQueue.Get(1, false);
            if ((tmpMessageList != null) && (tmpMessageList.Tables.Count > 0))
            {
                myQueue.Delete(
                    tmpMessageList.Tables[0].Rows[0]["MessageId"].ToString(), 
                    tmpMessageList.Tables[0].Rows[0]["PopReceipt"].ToString());
            }

            // update list of messages
            getMessages();
        }

And that’s the last of the key items of our sample implementation. There are a few other events that need to be hooked up, mainly for enabling/disabling our timers and setting their frequency, but I’m certain those need no explanation. I will close with a quick look at my implementation.

[queue_sample – screenshot of the sample page showing the message grid]

What will happen as this runs is that the top message will “pop” off, and a new message will appear at the bottom. You can see the impact on queue depth by enabling/disabling inserts or reads, as well as by adjusting the frequency. Admittedly this isn’t a very practical example (unless you need a database-style FIFO collection), but it does a fair job of illustrating the usage of a queue.

You are the queue master, you have mastered the queues!

Well, this concludes my series on accessing queues. I want to again salute the “Red Dog” team and their StorageClient sample project. It’s a very robust implementation and some great thought went into it. I hope that what I’ve given you here is a simple, straightforward set of examples for creating your own queue access implementation. Perhaps it will be a robust solution like the StorageClient, or maybe you’ll go for something more lightweight.

In either case, thanks for taking the time to read these postings. I’ve uploaded a zip file containing my solution so you can pull it down and try running it yourself.

Till next time and thanks for stopping by! Now how is that Windows 7 RC download going… oh wait, it’s family time! :)

Azure Storage – Hands on with Queues, Part 2

How’s this for commitment: it’s 6am on a Saturday morning and I’m sitting here watching movies on my 360 and putting together part 2 in this series of articles on queues. I’m here for ya, have no doubt about that.

In part 1 we covered the basics of a queue storage REST call. We constructed our URI, built an HTTPWebRequest, and then digitally signed the request. Lastly, we executed the request and created a queue. In this installment, we’re going to extend this by creating 3 additional requests: get a list of queues, put a message into a queue, and read/peek at messages.

If you missed part 1, please go back and check it out. In addition to the information on generating our initial request, that post also includes all the disclaimers about the approach I’m taking. In short, I’m no REST expert and I’m intentionally keeping the implementation simple so I can focus on the mechanics of making the low level call.

Adjustments to our current AzureQueue class

Before we begin, I want to make some changes to the class we started last time. Last time we embedded three functions that we’ll want to reuse in all our new methods, so I’m breaking those out into their own methods with some minor modifications. What can I say, my love of reuse stems from a deep-seated laziness. Separating out this common code will also help highlight the differences between the various calls we’re making.

First up is a method that will create the base HttpWebRequest object and set the initial properties. It will accept two parameters: the URI we’re calling and the method of the request (GET, POST, PUT, etc.). We’re making this and the other methods static because they don’t need any instance values of the parent class, and so that they can easily be called from any other static class methods.

        private static HttpWebRequest getBaseRequest(string _URI, string _Operation)
        {
            HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(_URI); // create the request object
            request.Timeout = (int)TimeSpan.FromSeconds(30).TotalMilliseconds; // 30 second timeout
            request.ReadWriteTimeout = request.Timeout; // same as other value
            request.Method = _Operation; // GET, PUT, POST, DELETE, etc.
            request.ContentLength = 0;
            request.Headers.Add("x-ms-date", DateTime.UtcNow.ToString("R", CultureInfo.InvariantCulture)); // always use UTC date/time

            return request;
        }

Next up is a method for creating the basic canonized string. I’m adjusting this slightly so that the string is created from a passed in HttpWebRequest object.

        private static string getBaseCanonizedString(HttpWebRequest _Request)
        {
            string StringToSign = _Request.Method + "\n" +             // "VERB", aka the http method
                String.Empty + "\n" +                                  // "Content-MD5", we're not using MD5 on this request
                _Request.ContentType + "\n" +                          // "Content-Type"
                String.Empty + "\n" +                                  // "Date", we know this is here because we added it
                "x-ms-date:" + _Request.Headers["x-ms-date"] + "\n" +  // "CanonicalizedHeaders", we only have one, so we'll do it manually
                "/" + myAccountName + _Request.RequestUri.AbsolutePath;// "CanonicalizedResources", just storage account name and path for this request

            return StringToSign;
        }

And lastly, I want a method that generates the signature and attaches it to the request object. We’ll pass in the HttpWebRequest and string objects. Since HttpWebRequest is a class and not a simple type, it’s passed by reference, so any changes we make will be available outside the method. This chunk is taken exactly from the original with the exception of some minor variable renaming.

        private static void SignRequest(HttpWebRequest _request, string _CanonizedString)
        {
            // compute the MACSHA signature
            byte[] KeyAsByteArray = Convert.FromBase64String(myAccountKey);
            byte[] dataToMAC = System.Text.Encoding.UTF8.GetBytes(_CanonizedString);
            string computedBase64Signature = string.Empty;

            using (HMACSHA256 hmacsha256 = new HMACSHA256(KeyAsByteArray))
            {
                computedBase64Signature = System.Convert.ToBase64String(hmacsha256.ComputeHash(dataToMAC));
            }

            // add the signature to the request
            string AuthorizationHeader = string.Format(CultureInfo.InvariantCulture, "SharedKey {0}:{1}",
                myAccountName, computedBase64Signature);
            _request.Headers.Add("Authorization", AuthorizationHeader);
        }

Now we just have to make some minor adjustments to our existing Create method and we’re all ready to go. Here’s a snippet of the updated method:

        public static AzureQueue Create(string QueueName)
        {
            bool result = false;
            AzureQueue tmpQueue = new AzureQueue(QueueName); // create base object

            // create base request object and modify
            HttpWebRequest request = AzureQueue.getBaseRequest(tmpQueue.URI.ToString(), "PUT"); // create queue

            // create the canonized string we're going to sign
            string StringToSign = AzureQueue.getBaseCanonizedString(request);

            // sign request
            SignRequest(request, StringToSign);

Now we’re ready to get to work on expanding our queue functions.

Getting a list of available queues

I don’t know about you, but I always like to be able to get a list of the queues that are available, so we’ll start expanding our queue storage functionality here. There are only a few differences in this request from the one we created previously:

  • our URI does not include the queue name; we’re addressing our request to the “queue manager”
  • the request operation will be a “GET” instead of a “PUT”
  • we need a query string to tell the queue manager what action we’re performing
  • we need to read the response stream to get our list of queues

The URI of our last request was a path style URI that looked like “http://127.0.0.1:10001/<accountname>/<queuename>”. Since we’re still accessing development storage, we’ll continue to use a path style URI, but this time we’re accessing the account and not a specific queue. We also want to add a couple of parameters that help describe the request we’re making. Our new URI is “http://127.0.0.1:10001/<accountname>/?comp=list&maxresults=10”. The most important part here is the “comp” parameter. This tells the queue account manager that we want a list of queues. Using operational parameters such as “maxresults”, we can scope what we’re getting back from the service. If you refer to the Azure "list queues" API on MSDN you’ll get a list of the other parameters.

So here’s the start of our new method for getting a list of queues. We’ll use the new URI and we’ll also change our operation from a “PUT” to a “GET”.

        public static DataSet getQueueList()
        {
            DataSet result = null;
            string tmpURI = myURI + myAccountName + "/?comp=list&maxresults=10";

            // create base request object and modify
            HttpWebRequest request = AzureQueue.getBaseRequest(tmpURI, "GET"); // list queues

            // ... the canonized string, signing, and execution follow ...

            return result;
        }

Next we’ll create the canonized string and digitally sign our base request. Note how, unlike our create queue example, we include the “?comp=list” query parameter as part of the canonized string.

            // create the canonized string we're going to sign
            string StringToSign = AzureQueue.getBaseCanonizedString(request);
            StringToSign += "?comp=list"; // required by the signature

            // sign request
            SignRequest(request, StringToSign);

Request is created, signed, and we’re ready to fire it off. Time for another quick disclaimer: the response to this request is an XML document, and I have enough I want to cover in this post that I’m not going to walk you through its form. If you really want to see the schema of the response, you can find it on MSDN or just alter the code I’m giving you to save the doc to a string so you can peek at it.

We execute our request and make sure we get an “OK” response back. When inserting or creating something, that was all we needed to do. But now we have to get a response stream and use it to read the XML result into a dataset object that is returned from the method.

                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    if (response.StatusCode == HttpStatusCode.OK)
                    {
                        using (Stream stream = response.GetResponseStream())
                        {
                            result = new DataSet();
                            result.ReadXml(stream);
                            stream.Close();
                        }
                    }
                    else
                        result = null;
                    response.Close();
                }

Implementing our new method is deceptively simple. What I did is to put a dropdownlist control on a web page and then populate it using the results of our new method. Here’s what that looked like:

                DataSet tmpQueueList = AzureQueue.getQueueList();
                if ((tmpQueueList != null) && (tmpQueueList.Tables.Count >= 3))
                {
                    ddlQueues.DataTextField = "QueueName";
                    ddlQueues.DataSource = tmpQueueList.Tables[2]; // our list is the 3rd table
                    ddlQueues.DataBind();
                }

This is going to come in handy for our next two calls where we start putting messages onto and reading them from queues. So keep it handy.

Putting a message onto a queue

This call will be almost the opposite of the last one. We’ll construct a message payload and, using a stream, feed it up to the service. Here are the highlights:

  • our URI uses the queue name and the path “messages”
  • the request operation will be a “POST”
  • we’ll create the outbound message payload
  • send the message to the queue using the request stream

First we’ll create our message payload. We do this first because we’ll need its length when we create the request object. This snippet is the construction of our message payload. We have to wrap the actual payload in a couple of XML tags, and for our example I’m just inserting a string into the middle of it. We’re also making it a byte array because it should be UTF8 encoded, and because when we stream it up to the service we’ll need it as an array anyway.

            // transform message payload
            System.Text.UTF8Encoding enc = new UTF8Encoding(false);
            byte[] baMessage = enc.GetBytes("<QueueMessage><MessageText>" + _message.Trim() +
                "</MessageText></QueueMessage>");

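One caveat worth noting: because the message text is concatenated straight into the XML, characters like “<” or “&” in the message would corrupt the payload. Here’s a Python sketch (helper name is mine) of the escaping a real implementation would want; our C# sample skips it to stay simple:

```python
from xml.sax.saxutils import escape

def build_message_body(message_text):
    """Wrap a message in the QueueMessage envelope, escaping XML metacharacters,
    and return it UTF-8 encoded (mirroring the byte-array step above)."""
    payload = ("<QueueMessage><MessageText>"
               + escape(message_text.strip())
               + "</MessageText></QueueMessage>")
    return payload.encode("utf-8")
```

With this in place a message like “5 < 6 & true” survives the trip instead of producing malformed XML.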
Next we’ll create our request object and sign the request as follows:

            // create base request object and modify
            HttpWebRequest request = AzureQueue.getBaseRequest(this.URI + "/messages", "POST"); // post message
            request.ContentLength = baMessage.Length;

            // create the canonized string we're going to sign
            string StringToSign = AzureQueue.getBaseCanonizedString(request);

            // sign request
            SignRequest(request, StringToSign);

There are two things I want to call out here. First off is our URI: we’re going to use the queue’s base URI and then add on the “/messages” part so that the service knows we’re after the queue’s contents. Next up, you’ll notice we modify the request after it is created, using the length of the byte array that contains our message payload.

Lastly, we’re going to execute our request…

                // send request
                using (Stream requestStream = request.GetRequestStream())
                {
                    requestStream.Write(baMessage, 0, baMessage.Length);
                    requestStream.Close();
                    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                    {
                        result = (response.StatusCode == HttpStatusCode.Created);

                        response.Close();
                    }
                }

So using our request object, we get a stream, write our message to it, then execute the request. We need to check the return code to make sure the message posted properly. Not much to it really. You can optionally add in a time to live for the message (in seconds), but again, we’re just trying to keep these examples simple.

Reading and Peeking at messages in the queue

Now is where things start to get kind of fun, at least for a geek like me. This is also the first of these calls that I got to work on the first try! It was extremely satisfying not to hit another of those “Forbidden” web exceptions.

Here are the highlights of our new “read messages” method:

  • our URI uses the queue name and the path “messages”
  • the request operation will be a “GET”
  • use query string parameters to describe the number of messages and if we’re only “peeking”
  • read the response stream to get our messages

Getting or peeking at messages will combine several of the techniques we’ve used previously. We’ll use the same URI we used last time, put a couple of query parameters on it, and finally execute our request, using the response stream to load a dataset object. Let’s start by preparing our URI with its query parameters.


            string tmpQueryParms = String.Empty;
            tmpQueryParms = "numofmessages=" + _NumOfMessages.ToString(CultureInfo.InvariantCulture);
            if (_peek)
            {
                if (tmpQueryParms.Length > 0)
                    tmpQueryParms += "&";
                tmpQueryParms += "peekonly=" + _peek.ToString(CultureInfo.InvariantCulture);
            }

            string tmpURI = this.URI + "/messages?" + tmpQueryParms;

The base URI is the same as in our last request, but we’re adding two parameters to it. First, “numofmessages” tells the service how many messages we want to retrieve. Next, we have an optional parameter, “peekonly”, which, if the boolean parameter I’m checking is set to true, will cause us to only peek at the messages. Time for a quick tangent…

For those reading this that may not be familiar with the concept of message queues: a queue is intended as a FIFO (first in, first out) method of sending messages between two processes. Think of it as an early form of service bus. One process puts a message into a queue and one or more others will pull it back out. The pulling process has the option of reading the message (making it unavailable to any other process for a given time) or just peeking at it. A “peek” doesn’t affect the message’s visibility in the queue.
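To make the peek-versus-read distinction concrete, here’s a toy in-memory model (Python, purely illustrative; real queues also involve pop receipts and visibility timeouts, which this ignores):

```python
class ToyQueue:
    """Minimal FIFO illustrating peek vs. get visibility semantics."""
    def __init__(self):
        self._messages = []  # oldest first

    def put(self, text):
        self._messages.append({"text": text, "visible": True})

    def peek(self):
        """Look at the oldest visible message WITHOUT hiding it."""
        for m in self._messages:
            if m["visible"]:
                return m["text"]
        return None

    def get(self):
        """Read the oldest visible message and hide it from other readers."""
        for m in self._messages:
            if m["visible"]:
                m["visible"] = False
                return m["text"]
        return None
```

Peek twice and you see the same message; get once and the next reader sees the one behind it.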

Ok, back to the task at hand. After creating our URI, we’ll create the request and sign it just as we have before. We’ve done this several times already, so I won’t bore you with the code yet again. We don’t need to worry about inserting any of our query string parameters into the signature like we did when we got the queue list, so creating our message signature is also straightforward. After those steps are performed, all that’s left is to execute our request.

                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    if (response.StatusCode == HttpStatusCode.OK)
                    {
                        using (Stream stream = response.GetResponseStream())
                        {
                            result = new DataSet();
                            result.ReadXml(stream);
                            stream.Close();
                        }
                    }
                    else
                        result = null;

                    response.Close();
                }

Yes, you’re seeing this right: this is exactly what we did when we got the list of queues earlier. And like that example, I’m going to bind the result set to a control so I can display the results easily. Here’s our implementation example:

            AzureQueue tmpQueue = new AzureQueue(ddlQueues.SelectedValue);
            DataSet tmpMessageList = tmpQueue.Get(10, cbPeekOnly.Checked);

            if ((tmpMessageList != null) && (tmpMessageList.Tables.Count > 0))
            {
                gvMessages.DataSource = tmpMessageList;
            }
            else
                gvMessages.DataSource = null;

In my case, I have a button that will read the queue specified by a dropdownlist control and return up to 10 messages from that queue. I’ve also put in a checkbox that I can use to determine if I’m going to peek at the messages or actually read them. Finally, I bind the messages to a gridview control to display the results. The columns that are returned will vary depending on whether we’re peeking or actually reading. The reason for this difference will become apparent when we get to part 3 of this series.

In our next episode, processing the messages

Well, that’s it for this time. I already have the code snippets for my next post completed, so I just need to get the blog post written. In my next article I’m going to finally dig into processing these messages and removing them from the queue. I’ll even make a slight detour and show you how to check our queue depth.

Meanwhile, I’ve again uploaded the new version of our AzureQueue class. I know I skipped around the code a bit so I encourage you to check it out and play around with your own implementation of it. Especially look at the results from peeking versus reading the messages from queues.

Till next time!

PS – I would really like to hear from anyone that’s reading this blog. Feedback on the content or style is always appreciated.

Azure Storage – Hands on with Queues, Part 1

This is what I get for wanting to understand the basics. I spent about 4 hours reverse engineering the StorageClient sample project and reconciling it against the Azure Storage Queue API. I do want to give a salute to the Azure team’s sample project. It’s a great example and I have little doubt that many shops will simply adopt it as their standard client. And while I understand the need to promote the platform-agnostic nature of consuming storage via its REST API, I am still a bit disappointed we’re not hearing more about an “official” .NET library for access.

Before I dive in, there are a couple of things I’d like to make perfectly clear. First off, I don’t have much practical experience with doing REST-based calls. I’ve been fortunate enough that most of the time I have someone on the team who really enjoys doing those types of things, so I’m able to dodge it. Secondly, the approach I’m going to show you is a bit more complex than just building the request via a stringbuilder, but also not as flexible as the StorageClient sample project.

I’ll also admit that my example is going to be very crude, and extremely basic. Just please keep in mind that the goal for this isn’t to enforce/promote best practices but to give you a straightforward example of making these REST-based calls.

Ok, on to business. The process of creating a REST call is actually pretty straightforward:

  1. Construct the URI
  2. Build the appropriate HttpWebRequest
  3. Digitally sign the request using HMAC-SHA256 (a SHA-256 based Message Authentication Code)
  4. Call the HttpWebRequest.GetResponse method
  5. Close the response and release resources

Creating the Solution

Start by firing up Visual Studio and creating a new “Web and Worker Cloud Service” project. I’ve named mine “QueueDemo”. And because I’m going to want to access my Azure Storage classes from both the Web and Worker roles, I’ll add a class library called “StorageDemo”. If you use the “Class Library” template, we’ll also want to add the following references: Microsoft.ServiceHosting.ServiceRuntime (the Windows Azure Service Hosting namespace) and System.Data.Services.Client (for accessing ADO.NET Data Services). Lastly, rename the default class to “AzureQueue”.

At this point, I’m not 100% certain we need the ServiceHosting reference, but I’m adding it because we’re building an Azure application and the StorageClient sample uses it. Later I’ll play around and see if I really need it. I also left out a reference to System.Configuration. While best practices tell me to put my account information into a configuration file, I want to do a separate blog post at a later date on configuration options in Azure, so for the moment I’m just going to hard-code my credentials. Also, our sample project will only work against development storage for now. We’ll switch this later.

Now, enhance the AzureQueue class in our StorageDemo class library as seen below:

    public class AzureQueue: Object
    {
        public static AzureQueue Create()
        {
            // stubbed out for now; we'll fill this in shortly
            throw new NotImplementedException();
        }
    }

We’re going to start our hands-on journey by creating a queue. We’ll extend this initial example over several more articles in the coming weeks.

Generate the URI

Regardless of the operation we’re performing, we need to know the URI that we’ll be using. Using the host, our account name, and a handful of optional parameters, we can create two types of URIs. For local development storage we’ll use a “path style” URI. Hosted storage uses a “host style” URI. The primary difference between the two is where the account name ends up.

Path Style (development): http://127.0.0.1:10001/<accountname>/<queuename>

Host Style: http://<accountname>.queue.core.windows.net/<queuename>

There’s also the option of appending query string parameters to these URIs. The StorageClient sample project includes a “timeout” parameter for the queue timeout. We’ll leave that one off our example for the moment.
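Just to make the two styles concrete, here’s a small sketch that composes each flavor of URI. The queue name “myqueue” is just a placeholder I made up; devstoreaccount1 is the well-known development storage account:

```csharp
using System;

class UriStyles
{
    // path-style URI for local development storage: the account name is part of the path
    static Uri PathStyle(string account, string queue)
    {
        return new Uri("http://127.0.0.1:10001/" + account + "/" + queue);
    }

    // host-style URI for hosted storage: the account name becomes part of the host
    static Uri HostStyle(string account, string queue)
    {
        return new Uri("http://" + account + ".queue.core.windows.net/" + queue);
    }

    static void Main()
    {
        Console.WriteLine(PathStyle("devstoreaccount1", "myqueue"));
        Console.WriteLine(HostStyle("devstoreaccount1", "myqueue"));
    }
}
```

With development storage the account lives in the path; in the cloud it becomes part of the host name, which is why the two styles can’t simply be swapped.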

To handle this, let’s add a private string to our AzureQueue class to store the proper value for our example. Mine looks like this:

	private static string myURI = "http://127.0.0.1:10001/";

Note that I'm leaving both the account name and queue name off so I can append them appropriately for each queue I need to create. I made it static just to keep things easy for me when I'm using this value later.

We’ll also add two private static values that hold our account name and key. Since we’re accessing development storage, these are the same for EVERYONE. As I mentioned earlier, we’ll discuss putting these values into a configuration file and accessing them from there in another article.

Next up, we’re going to add a private string to hold the queue name, a public property that exposes it, and a class constructor that assigns its parameter to that private value. We’re also going to add a property that generates our path style URI for us using the account and queue names.

The updated class looks like this:

    public class AzureQueue: Object
    {
        private static string myURI = "http://127.0.0.1:10001/";
        private static string myAccountName = "devstoreaccount1"; // same for everyone
        private static string myAccountKey = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==";
        private string myQueueName = string.Empty;

        public string Name
        {
            get { return myQueueName.Trim(); }
        }

        public Uri URI
        {
            get { return new Uri(myURI + myAccountName + "/" + Name.Trim()); }
        }

        public AzureQueue(string QueueName): base()
        {
            myQueueName = QueueName.Trim();
        }

        public static AzureQueue Create(string QueueName)
        {
            // stubbed out for now; we'll fill this in next
            throw new NotImplementedException();
        }
    }

Create the HTTPWebRequest

That takes care of setting the stage for us to execute our request, so we’ll start filling in the Create method we just added, as this is where the actual execution of our REST call will happen. We’ll start by creating our HttpWebRequest object as follows:

  • Create an instance of the System.Net.HttpWebRequest class using our generated/composite URI property
  • Set the Timeout and ReadWriteTimeout properties to our timeout in milliseconds
  • Set the appropriate HTTP method (PUT, GET, etc… more in a few on this)
  • Set the content length (which is zero for this operation)
  • Add a header for the date/time

For other Azure Storage requests, we may also have additional request headers that need to be set. But for this initial example, this is all we need.

Now, about the HTTP method. These HTTP methods control the type of operation we’re performing on containers or, for this example, our queue. You can find all of these in the MSDN API reference, but here’s a short list of methods for queues:

PUT – creates a queue or sets queue metadata

GET – retrieves a list of queues, reads a message from a queue, or peeks at a message

DELETE – removes a queue, deletes a message, or clears all messages from a queue

POST – puts a message on a queue

HEAD (or GET) – retrieves metadata about a queue
 
The exact operation performed will depend on the URI used and in most cases additional parameters and/or a request body. Fortunately, creating a queue requires none of these.
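To keep the verbs straight in my head, here’s a little cheat-sheet sketch. The operation names on the left are labels I made up for this table, not anything official from the API:

```csharp
using System;
using System.Collections.Generic;

class QueueVerbs
{
    // my own operation names on the left; the HTTP methods on the right
    // come from the MSDN queue API reference
    static readonly Dictionary<string, string> VerbFor = new Dictionary<string, string>
    {
        { "CreateQueue",      "PUT" },
        { "SetMetadata",      "PUT" },
        { "ListQueues",       "GET" },
        { "GetMessages",      "GET" },
        { "PeekMessages",     "GET" },
        { "PutMessage",       "POST" },
        { "DeleteQueue",      "DELETE" },
        { "DeleteMessage",    "DELETE" },
        { "ClearMessages",    "DELETE" },
        { "GetQueueMetadata", "HEAD" }
    };

    static void Main()
    {
        Console.WriteLine(VerbFor["CreateQueue"]); // the operation this article implements
        Console.WriteLine(VerbFor["PutMessage"]);
    }
}
```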
 
Here’s my updated Create method with our HttpWebRequest object's initial setup:

        public static AzureQueue Create(string QueueName)
        {
            bool result = false;
            AzureQueue tmpQueue = new AzureQueue(QueueName); // create base object

            // create our web request
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(tmpQueue.URI); // create queue
            request.Timeout = (int)TimeSpan.FromSeconds(30).TotalMilliseconds; // 30 second timeout
            request.ReadWriteTimeout = request.Timeout; // same as the Timeout value
            request.Method = "PUT"; // we want to create a queue
            request.ContentLength = 0;
            request.Headers.Add("x-ms-date", DateTime.UtcNow.ToString("R", CultureInfo.InvariantCulture)); // always use UTC date/time

            // the signing and execution code we build next goes here and sets result

            // return the result
            return (result ? tmpQueue : null);
        }

We could do a few extra operations before we call this, such as making sure the queue doesn’t already exist in storage. But you’ll see later that this isn’t always necessary.

Update May 13th: I’d like to point out that in development storage, queue names can be a mix of upper and lower case. However, when accessing hosted Azure storage, upper case is not allowed. Be sure to adhere to the API rules for queue names if you plan to deploy.

Oh, and don’t forget to add some using directives for the System.Net and System.Globalization namespaces (and System.Security.Cryptography for the signing code coming up). :) Helps avoid nasty red lines when compiling.
 

Sign our request on the dotted line

Ok, we have our URI and we’ve built a request object, but because we’re being nice and secure, we need to make sure we sign our request using HMAC-SHA256 (a SHA-256 based Message Authentication Code – yeah, I looked it up). This occurs in two stages: canonicalizing the request, and computing the digital signature.

If you look at the StorageClient sample project, they have some excellent bits of code written that will handle the canonicalization for us. You can find it in the MessageCanonicalizer class in the Authentication.cs file. If you like, you can just grab that code and let it do the work for you. However, to be more in keeping with the intent of this article, we’re going to build the string shown in the Authentication Schemes API reference found on MSDN.

StringToSign = VERB + "\n" +
      Content-MD5 + "\n" +
      Content-Type + "\n" +
      Date + "\n" +
      CanonicalizedHeaders + "\n" +
      CanonicalizedResource;

The resulting code will be inserted into our Create method right above the return. Here’s what I ended up with:

            // create the canonicalized string we're going to sign
            string StringToSign = request.Method + "\n" +              // "VERB", aka the HTTP method
                String.Empty + "\n" +                                  // "Content-MD5", we're not using MD5 on this request
                request.ContentType + "\n" +                           // "Content-Type"
                String.Empty + "\n" +                                  // "Date", this is a legacy value and not really needed
                "x-ms-date:" + request.Headers["x-ms-date"] + "\n" +   // "CanonicalizedHeaders", we only have one, so we'll do it manually
                "/" + myAccountName + tmpQueue.URI.AbsolutePath;       // "CanonicalizedResource", just storage account name and path for this request
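To see what we actually end up signing, here’s a sketch that composes the same string with a frozen example date (the real code uses DateTime.UtcNow, so your string will differ). Notice that with a path-style URI the account name shows up twice in the canonicalized resource, once for the account itself and once from the URI’s path:

```csharp
using System;

class CanonicalizedExample
{
    static void Main()
    {
        // frozen example date; the real code sends DateTime.UtcNow in "R" format
        string sampleDate = "Tue, 12 May 2009 19:30:00 GMT";

        string stringToSign = "PUT" + "\n" +       // VERB
            "" + "\n" +                            // Content-MD5 (not used here)
            "" + "\n" +                            // Content-Type (not set for this request)
            "" + "\n" +                            // Date (legacy, left empty)
            "x-ms-date:" + sampleDate + "\n" +     // CanonicalizedHeaders
            "/devstoreaccount1" +                  // "/" + account name...
            "/devstoreaccount1/myqueue";           // ...plus the path-style URI's AbsolutePath

        Console.WriteLine(stringToSign);
    }
}
```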

With the canonicalized string created, we next need to generate the HMAC-SHA256 hash and attach the calculated signature to our message. This is pretty straightforward, as seen below:

            // compute the HMAC-SHA256 signature
            byte[] KeyAsByteArray = Convert.FromBase64String(myAccountKey);
            byte[] dataToMAC = System.Text.Encoding.UTF8.GetBytes(StringToSign);
            string computedBase64Signature = string.Empty;

            using (HMACSHA256 hmacsha256 = new HMACSHA256(KeyAsByteArray))
            {
                computedBase64Signature = System.Convert.ToBase64String(hmacsha256.ComputeHash(dataToMAC));
            }

            // add the signature to the request
            string AuthorizationHeader = string.Format(CultureInfo.InvariantCulture, "SharedKey {0}:{1}",
                myAccountName, computedBase64Signature);
            request.Headers.Add("Authorization", AuthorizationHeader);

To review: we convert our account key (a private static variable) and the canonicalized string to byte arrays. Using the key’s byte array as the HMAC key, we compute the hash of the canonicalized string. We then add this signature to the request in a header named “Authorization”. The Queue service will compute the hash just as we have and verify that the two values match before accepting our request.
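Since the service verifies our signature by recomputing the very same hash, we can sanity-check the signing logic locally. Here’s a sketch using the well-known development storage key and two nearly identical strings to sign; the hash is always 32 bytes (44 characters in Base64), and changing a single character of the input produces a completely different signature:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class SignatureCheck
{
    // mirrors the article's signing code: Base64-decode the key,
    // HMAC-SHA256 the UTF-8 bytes of the string, Base64-encode the result
    static string Sign(string base64Key, string stringToSign)
    {
        byte[] keyBytes = Convert.FromBase64String(base64Key);
        byte[] data = Encoding.UTF8.GetBytes(stringToSign);
        using (HMACSHA256 hmac = new HMACSHA256(keyBytes))
        {
            return Convert.ToBase64String(hmac.ComputeHash(data));
        }
    }

    static void Main()
    {
        // the well-known development storage key (same for everyone)
        string devKey = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==";

        string sig1 = Sign(devKey, "PUT\n\n\n\nx-ms-date:Tue, 12 May 2009 19:30:00 GMT\n/devstoreaccount1/devstoreaccount1/myqueue");
        string sig2 = Sign(devKey, "PUT\n\n\n\nx-ms-date:Tue, 12 May 2009 19:30:01 GMT\n/devstoreaccount1/devstoreaccount1/myqueue");

        Console.WriteLine(sig1.Length);  // a SHA-256 HMAC is 32 bytes, so Base64 is 44 chars
        Console.WriteLine(sig1 == sig2); // one second's difference, totally different signature
    }
}
```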

Execute the HTTPWebRequest and parse the response

So we’ve created our request and created its digital signature. Now all that remains is to execute it and check our response. To do this, I’m going to borrow the code used in the StorageClient sample almost verbatim.

            // execute our request and process the response 
            // (taken almost word for word from the StorageClient sample project)
            try
            {
                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    if (response.StatusCode == HttpStatusCode.Created)
                    {
                        // as an addition we could parse the result and retrieve
                        // queue properties at this point
                        result = true;
                    }
                    else if (response.StatusCode == HttpStatusCode.NoContent)
                        result = true;
                    else
                        result = false;
                    response.Close();
                }
            }
            catch (WebException we)
            {
                if (we.Response != null &&
                    ((HttpWebResponse)we.Response).StatusCode == HttpStatusCode.Conflict)
                {
                    result = true;
                }
                else
                    result = false;
            }

And there we have it, all the code necessary to generate our “create queue” request against the Azure Storage service. :) I did mention above that we didn’t have to check for the pre-existence of the queue. If you look at our example, there are several paths that return true. While it’s more obvious in the original StorageClient example, one of these is the result of a queue already existing. We could just as easily have handled this differently, so the final implementation is up to you.

Putting it all to work

Of course all this code is worthless without some function to access it. I’ve already been wordy enough so I’ll just explain this the quick way… Go to the web role project and add 3 controls to the default.aspx page: a text box, a button, and a label. Name and position them all as you see fit. Wire up the click event of the button and inside that handler create an instance of our Azure Queue using the value entered into the text box for the queue name. Here’s the sample code I used:

            AzureQueue tmpQueue = AzureQueue.Create(tbQueueName.Text.Trim());
            if (tmpQueue != null)
                lblResult.Text = "Queue Created";
            else
                lblResult.Text = "Creation Failed";

If you’ve done everything correctly, you should see a confirmation message. If not, good luck troubleshooting the error. During my initial test run I kept getting a “Forbidden” response from the Azure Storage service. Turned out it was due to two errors I had made, one in the URI and one in the canonicalized signature. It took about an hour to work out, so I’m hoping over time I get more adept at troubleshooting these types of problems. I can also see that unless things change, it’s going to be difficult to support these. Ugh!

Next time, our intrepid code monkey ventures deeper into the Storage Jungle

I had wanted to cover a bit more but this has already dragged on long enough, so I’m going to delay 3 topics until my next post. Tune back in again in about a week and I’ll cover getting a list of the available queues, as well as putting and getting queue messages. All of these will be based on the work I’ve started here, so I’ve uploaded a copy of my AzureQueue.cs file for you to check out.

Till then!

Azure Storage – Overview

Sorry to have taken so long to get back to my blog. I’ve been distracted with my personal life and admittedly a bit overwhelmed about the directions I could take with my next topic. I could jump straight into a hands on session, contrast Azure Storage with SQL Data Services, discuss the differences between RDBMS and cloud data storage needs… there’s just so much that could be done.

In the end though, I settled on breaking this up into a couple of more easily digestible pieces, starting with an overview of what Azure Storage is and how it’s structured. So here it is…

As I believe I’ve mentioned previously, Azure Storage is not an RDBMS so much as it’s an abstraction of the local file system for Windows Azure. Since Windows Azure has abstracted away the hardware, there’s no more tossing a file out onto a network share so it can be accessed by multiple processes. Instead, using a Storage Account, we can create different containers into which we can shove various types of data. Like almost everything in Windows Azure, this storage is exposed as a series of REST (representational state transfer) based APIs.

So let’s start there, with the Storage Account. When I started my web role project, I talked about creating a local storage account. We need to do the same thing in Windows Azure by creating a Storage Account project. We give it a name and description and end up with a pile of information that seems overwhelming at first: three endpoints and two access keys. The endpoints relate to the three types of entities/things/objects that can be stored: Blobs, Queues, and Tables.

Blob Storage is exactly what it sounds like. You can insert large objects up to 50 GB in size (2 GB for development storage) and organize them using containers. Each container and blob can have metadata associated with it. We can get a list of containers and iterate through the blobs in containers. We can also read the entire blob, or just a range of bytes within it.

Queue Storage is where we have queues, a common enough concept that needs little explanation. Queues provide a reliable way of passing messages between processes. While a queue can have an unlimited number of messages stored within it, each message is limited to a maximum of 8 KB in size. When a process reads a message from the queue, it is expected to process and remove it. Once read, the message is hidden from any other processes looking at that queue for a given period of time. If the message is not removed before that interval expires, it becomes visible to all processes once again.

And then there is Table Storage. Tables in Azure Storage are not like the tables you’re used to. Tables are logical containers that can be spread across multiple partitions in storage (for load balancing). Tables contain entities (think rows) which are in turn comprised of properties and their values (columns). Each row is identified by its Partition Key and Row Key. However, Table Storage does not enforce any schema; it’s up to the consuming client to enforce any such rules.
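To picture what an entity might look like from client code, here’s a rough sketch. PartitionKey and RowKey are the real required identifiers; the rest of the schema here is entirely made up for illustration:

```csharp
using System;

class Program
{
    // a hypothetical entity; Table Storage doesn't enforce this shape, the client does
    class CustomerEntity
    {
        public string PartitionKey { get; set; } // e.g. a region, used for load balancing
        public string RowKey { get; set; }       // must be unique within the partition
        public string Name { get; set; }         // any additional properties we care to define
        public int Orders { get; set; }
    }

    static void Main()
    {
        CustomerEntity entity = new CustomerEntity
        {
            PartitionKey = "WA",
            RowKey = "cust-001",
            Name = "Contoso",
            Orders = 3
        };

        // the pair of keys uniquely identifies the entity within the table
        Console.WriteLine(entity.PartitionKey + "/" + entity.RowKey);
    }
}
```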

That wasn’t so bad, was it? :) Well, nearly everything looks good from 10,000 feet, and I’ll dig into the details of each of these in the coming weeks/months. For now, I believe it’s important to point out a couple of differences between Azure Storage and Development Storage. Development storage is not accessible to any process outside of the local machine. Ok, that’s not entirely true, as you can use various port forwarding tools to redirect requests. But even then, Dev storage wasn’t built to scale well, so I wouldn’t recommend even trying this. Additionally, while I mentioned that Tables do not require a schema, they do when you’re dealing with Development Storage. You also can’t create/drop tables dynamically in development storage. Lastly, the URIs for accessing development storage are fixed, unlike in the cloud.

Now, if you’ve read this and are feeling a bit gutsy, you can jump right into using Azure Storage by accessing the sample StorageClient project that comes with the Azure SDK. Or you can also search around the web and find several good blog articles that discuss accessing storage directly via the API. Understanding this approach, and more importantly thinking about ways to use code generation tools to create more traditional style CRUD layers for accessing it is what I’ll be focusing on in my next blog posting.

I promise there won’t be as much time lag between posts as there was this time. So please check back in the near future when I do another “hands on” session, this time with an Azure Worker Role, Queues, and maybe even a Table.

Until then, I’d like to leave you with links to a couple excellent resources:
