Service Bus and “pushing” notifications

I put quotes around the word ‘pushing’ in the title of this post because this isn’t a pure “push” scenario but more of a ‘solicited push’. Check out this blog post where Clemens Vasters discusses the differences, and realize I’m more pragmatic than purist. :)

For the last several projects I’ve worked on, I’ve wanted a push notification system I could use to send messages to role instances so that they could take action. There are several push notification systems out there, but I was after something simple that could be included as part of my Windows Azure services. I’ve put a version of this concept into several proposals, but this week I finally found the time to create a practical demo of the idea.

For this demo, I’ve opted to use Windows Azure Service Bus Topics. Topics, unlike Windows Azure Storage queues, give me the capability to have multiple subscribers each receive a copy of a message. This was also an opportunity to dig into a feature of Windows Azure I hadn’t worked with in over a year. Given how much the API has changed in that time, it was a frustrating, yet rewarding exercise.

The concept is fairly simple. Messages are sent to a centralized topic for distribution. Each role instance then creates its own subscriber with the appropriate filter on it so it receives the messages it cares about. This solution allows for multiple publishers and subscribers and will give me a decent amount of scale. I’ve heard reports/rumors of issues when you get beyond several hundred subscribers, but for this demo, we’ll be just fine.

Now for this demo implementation, I want to keep it simple: a central class that workers or web roles can use to create their subscriptions and receive notifications with very little effort. And to keep that simplicity going, it should give me just as easy a way to send messages back out.

NotificationAgent

We’ll start by creating a class library for our centralized class, adding references to it for Microsoft.ServiceBus (so we can do our brokered messaging) and Microsoft.WindowsAzure.ServiceRuntime (for access to the role environment). I’m also going to create my NotificationTopic class.

Note: there are several supporting classes in the solution that I won’t cover in this article. If you want the full code for this solution, you can download it here.

The first method we’ll add is a constructor that takes the parameters we need to connect to our Service Bus namespace, as well as the name/path of the topic we’ll be broadcasting notifications on. Its first job is creating a namespace manager (so I can create topics and subscriptions) and a messaging factory (that I’ll use to send and receive messages). I’ve split this out a bit so that my class can support being passed a TokenProvider (I hate demos that only use the service owner). But here are the important lines:

TokenProvider tokenProvider = TokenProvider.CreateSharedSecretTokenProvider(issuerName, issuerKey);
Uri namespaceAddress = ServiceBusEnvironment.CreateServiceUri("sb", baseAddress, string.Empty);
this.namespaceManager = new NamespaceManager(namespaceAddress, tokenProvider);
this.messagingFactory = MessagingFactory.Create(namespaceAddress, tokenProvider);

We create a URI and a security token to use for interaction with our Service Bus namespace. For the sake of simplicity I’m using the issuer name (owner) and the service administration key. I’d never recommend this for a production solution, but it’s fine for demonstration purposes. We use these to create a NamespaceManager and MessagingFactory.

Now we need to create the topic, if it doesn’t already exist.

try
{
    // the existence check doesn't always protect us, so wrap it
    if (!namespaceManager.TopicExists(topicName))
        this.namespaceManager.CreateTopic(topicName);
}
catch (MessagingEntityAlreadyExistsException)
{
    // ignore, timing issues could cause this
}

Notice that I check to see if the topic exists, but I also trap for the exception. That’s because I can’t assume this operation is single threaded. With this block of code running in many role instances, it’s possible for another instance to create the topic between my existence check and my create call. So I like to wrap them in a try/catch. You could also just catch the exception, but I’ve long preferred to avoid the overhead of unnecessary exceptions where I can.

Finally, I’ll create a TopicClient that I’ll use to send messages to the topic.
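That last piece is a one-liner off the MessagingFactory we just created (a sketch; topicClient is just the field my class stores it in):

    this.topicClient = this.messagingFactory.CreateTopicClient(topicName);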

So by creating an instance of this class, I can properly assume that the topic exists, and I have all the items I need to send or receive messages.

Sending Messages

Next up, I create a SendMessage method that accepts a string message payload, the type of message, and a TimeSpan value that indicates how long the message should live. In this method we first create a BrokeredMessage, giving it an object that represents my notification message. We use the lifespan value that was passed in and set the type as a property. Finally, we send the message using the TopicClient we created earlier and do appropriate exception handling and cleanup.

BrokeredMessage bm = null;
bool success = false;
try
{
    bm = new BrokeredMessage(msg);
    bm.TimeToLive = msgLifespan;
    // used for filtering
    bm.Properties[MESSAGEPROPERTY_TYPE] = messageType.ToString();
    topicClient.Send(bm);
    success = true;
}
catch (Exception)
{
    success = false;
    // TODO: do something
}
finally
{
    if (bm != null) // if it was created successfully
        bm.Dispose();
}

Now the important piece here is the setting of the BrokeredMessage property. It’s this property that will be used later on to filter the messages we want to receive, so let’s not forget it. You’ll also notice I’ve left a TODO to add some intelligent exception handling, like logging the issue.

Start Receiving

This is when things get a little more complicated. Now the experts (meaning the folks I know and trust who responded to my inquiry) recommend that instead of going “old school” and having a thread continually polling for messages, we leverage async processing. So we’re going to make use of delegates.

First we need to define a delegate for the callback method:

public delegate bool RecieverCallback(NotificationMessage message, NotificationMessageType type);

We then reference the new delegate in the method signature for the message receiving starter:

public void StartReceiving(RecieverCallback callback, NotificationMessageType msgType = NotificationMessageType.All)

More on this later….

Now inside this method we first need to create our subscriber. Since I want to have one subscriber for each role instance, I’ll need to get this from the Role Environment.

// need to parse out the deployment ID
string instanceId = Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.CurrentRoleInstance.Id;
subscriptionName = instanceId.Substring(instanceId.IndexOf('.') + 1);

SubscriptionDescription tmpSub = new SubscriptionDescription(topicName, subscriptionName);

Now is the point where we add in a filter, using the property that we set on the notification when we sent it.

Filter tmpFilter = new SqlFilter(string.Format("{0} = '{1}'", MESSAGEPROPERTY_TYPE, msgType));
subscriptionClient.AddRule(SUBFILTER, tmpFilter);

I’m keeping it simple and using a SqlFilter on the property name we assigned when sending. So this subscription will only receive messages that match our filter criteria.

Now that all the setup is done, we delete the subscription if it already exists (this gets rid of any old messages and allows us to start clean) and create it anew using the NamespaceManager we instantiated in the class constructor. Pulled into one place, that step looks roughly like this (a sketch; I’m assuming ReceiveAndDelete mode, which the commented-out Complete() call further down also hints at):
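    // start clean: deleting the subscription also drops any pending messages
    if (namespaceManager.SubscriptionExists(topicName, subscriptionName))
        namespaceManager.DeleteSubscription(topicName, subscriptionName);

    namespaceManager.CreateSubscription(tmpSub);

    // ReceiveAndDelete means no separate Complete() call is needed per message
    subscriptionClient = messagingFactory.CreateSubscriptionClient(topicName,
        subscriptionName, ReceiveMode.ReceiveAndDelete);

With the subscription in place, we start our async operation to retrieve messages: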

asyncresult = subscriptionClient.BeginReceive(waittime, ReceiveDone, subscriptionClient);

In this call, ReceiveDone is the callback method for the operation. That method is pretty straightforward. We make sure we’ve actually gotten a message (in case the operation simply timed out) and that we can get its payload. Then we hand the message off using the delegate we set up earlier. And we end by starting another async call to get the next message.

if (result != null)
{
    SubscriptionClient tmpClient = result.AsyncState as SubscriptionClient;
    BrokeredMessage brokeredMessage = tmpClient.EndReceive(result);
    //brokeredMessage.Complete(); // not really needed because your receive mode is ReceiveAndDelete

    if (brokeredMessage != null)
    {
        NotificationMessage tmpMessage = brokeredMessage.GetBody<NotificationMessage>();

        // do some type mapping here

        recieverCallback(tmpMessage, tmpType);
    }
}

// do receive for next message
asyncresult = subscriptionClient.BeginReceive(ReceiveDone, subscriptionClient);

Now I’ve added two null checks in this method just to help out in case a receive operation fails. Even then, I won’t guarantee this works for all situations. In my tests, when I set the lifespan of a message to less than 5 seconds, I still had some issues (I’m still sorting those out, but wanted to get this sample published).

Client side implementation

Whew! Lots of setup there, but this is where our hard work pays off. We define a callback method that we’ll hand to our notification helper class using the delegate we defined. We’ll keep it super simple:

private bool NotificationRecieved(NotificationMessage message, NotificationMessageType type)
{
    Console.WriteLine("Recieved Notification");

    return true;
}

Now we need to instantiate our helper class and start the process of receiving messages. We can do this with a private variable to hold our object and a couple of lines in the role’s OnStart:

tmpNotifier = new NotificationTopic(ServiceNamespace, IssuerName, IssuerKey, TopicName);
tmpNotifier.StartReceiving(new NotificationTopic.RecieverCallback(NotificationRecieved), NotificationMessageType.All);

Now if we want to clean things up, we can also add some code to the role’s OnStop.

try
{
    if (tmpNotifier != null)
        tmpNotifier.StopReceiving();
}
catch (Exception e)
{
    Console.WriteLine("Exception during OnStop: " + e.ToString());
}

base.OnStop();

And that’s all we need.

In Closing

So that’s it for our basic implementation. I’ve uploaded the demo for you to use at your own risk. You’ll need to update the WebRole, WorkerRole, and NotifierSample project with the information about your Service Bus namespace. To run the demo, you will want to set the cloud service project as the startup project, and launch it. Then right click on the NotifierSample project and start debugging on it as well.

While this demo may work fine for certain applications, there is definitely room for enhancement. We can tweak our message lifespan, wait timeouts, and even how many messages we retrieve at one time. And it’s also not the only way to accomplish this. But I think it’s a solid starting point if you need this kind of simple, self-contained notification service.

PS – As configured, this solution will require the ability to send outbound traffic on port 9354.

Azure Tools for Visual Studio 1.4 August Update–Year of Azure Week 5

Good evening folks. It’s 8pm on Friday, August 5th, 2011 (aka international beer day) as I write this. Last week’s update to my Year of Azure series was weak, but this week’s will be even lighter. Just too much to do and not enough time, I’m afraid.

As you can guess from the title of this update, I’d like to talk about the new 1.4 SDK update. Now I could go to great lengths about all the updates, but the Windows Azure team blog already did, and Wade and Steve covered it in this week’s Cloud Cover show. So instead, I’d like to focus on just one aspect of this update, the Azure Storage Analytics.

I can’t tell you all how thrilled I am. The best part of being a Microsoft MVP is all the great people you get to know. The second best part is getting to have an impact on the evolution of a product you’re passionate about. And while I hold no real illusion that anything I’ve said or done led to the introduction of Azure Storage analytics, I can say it’s something I (and others) have specifically asked for.

I don’t have enough time this week to write up anything. Fortunately, Steve Marx has already put together the basics on how to interact with it. If that’s not enough, I recommend you go and check out the MSDN documentation on the new Storage Analytics API.

One thing I did run across while reading through the documentation tonight: the special container that analytics information gets written to, $logs, has a 20TB limit, and that limit is independent of the 100TB limit on the rest of the storage account. This container is also billed for data stored and for read/write actions. Delete operations are a bit different, though. If you delete manually, it’s billable; but if the delete happens as a result of the retention policies you set, it’s not.
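If you want to poke at that container yourself, here’s a minimal sketch using the 1.x StorageClient library (storageConnectionString is a placeholder you’d fill in with your own account details):

    CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
    CloudBlobClient blobClient = account.CreateCloudBlobClient();

    // analytics logs land in the special $logs container
    foreach (IListBlobItem item in blobClient.ListBlobsWithPrefix("$logs/"))
        Console.WriteLine(item.Uri);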

So again, apologies for an extremely weak update this week. But I’m going to try to ramp things up, take what Steve did, and give you a nice code snippet that you can easily reuse. If possible, I’ll see if I can’t get that cranked out this weekend. :)

Azure AppFabric Queues and Topics

The code used in this demo refers to a deprecated version of the Topics API. Please refer to this post for an example that is compatible with the 1.6 or 1.7 Windows Azure SDKs.

First some old business…. NEVER assume you know what a SequenceNumber property indicates. More on this later. Secondly, thanks to Clemens Vasters and his colleague Kartik. You are both gentlemen and scholars!

Ok… back to the topic at hand. A few weeks ago, I was making plans to be in Seattle and reached out to Wade Wegner (Azure Architect Evangelist) to see if he was going to be around so we could get together and talk shop. Well, he let me know that he was taking some well-deserved time off. Jokingly, I told him I’d fill in for him on the Cloud Cover show. Two hours later I get an email in my inbox saying it was all set up and I need to pick a topic to demo and come up with news and the “tip of the week”… that’ll teach me!

So here I was with just a couple weeks to prepare (and even less time, as I had a vacation of my own already planned in the middle of all this). Wade and I have both always had a soft spot for the oft-maligned Azure AppFabric, so I figured I’d dive in and revisit an old topic: the re-introduction of AppFabric Queues and the new feature, Topics.

Service Bus? Azure Storage Queues? AppFabric Queues?

So the first question I ran into was trying to explain the differences between these three items. To be succinct about it: the Service Bus relay is good for low-latency situations where you want dedicated connections or TCP/IP tunnels. The problem is that this doesn’t scale well, so we need a disconnected messaging model, and we have a simple, fairly lightweight one in Azure Storage Queues.

Now the new AppFabric Queues are more what I would classify as an enterprise-level queue mechanism. Unlike Storage Queues, AppFabric Queues can be bound to with WCF, as well as via a RESTful API and a .NET client library. There’s also a roadmap showing that we may get the old Routers back, along with some message transformation functionality. As if this weren’t enough, AppFabric Queues are supposed to have true FIFO delivery (unlike Storage Queues’ “best attempt” FIFO) and full ACS integration.

Sounds like some strong differentiators to me.

So what about the cloud cover show?

So everything went ok; we recorded the show this morning. I was having some technical challenges with the demo I had wanted to do (look for my original goal in the next couple weeks), but I got a demo that worked (mostly), some good news items, and even a tip of the day. All in all, things went well.

Anyways… it’s likely you’re here because you watched the show, so here is a link to the code for the demo we showed. Please take it and feel free to use it. Just keep in mind it’s only demo code; don’t just drop it into a production solution.

Being on the show was a great experience and I’d love to be back again some day. The Channel 9 crew was great to work with and really made me feel at ease. Hopefully if you have seen the episode, you enjoyed it as well.

My tip of the day

So as I mentioned when kicking off this update, never assume you know what a SequenceNumber indicates. In preparing my demo, I burned A LOT of my time and bugged the heck out of Clemens and Kartik. All because I incorrectly assumed that the SequenceNumber property of the BrokeredMessages I was pulling from my subscriptions was based on the original topic. If I were dealing with a queue, that would have been the case. Instead, it is based on the subscription. This may not mean much at the moment, but I’m going to put together a couple posts on Topics in the next couple weeks that will bring it into better context. So tune back in later when I build the demo I originally wanted to show on Cloud Cover. :)
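To make that concrete, here’s a quick sketch (subscriptionClientA and subscriptionClientB are just illustrative names for two subscriptions on the same topic):

    BrokeredMessage msgA = subscriptionClientA.Receive();
    BrokeredMessage msgB = subscriptionClientB.Receive();

    // two independent counters, one per subscription --
    // these are NOT positions in the original topic
    Console.WriteLine(msgA.SequenceNumber);
    Console.WriteLine(msgB.SequenceNumber);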

PS – If you are ever asked to appear on an episode of the Cloud Cover show, it is recommended you not wear white or stripes. FYI…

Enter the Azure AppFabric Management Service

Before I dive into this update, I want to get a couple things out in the open. First, I’m an Azure AppFabric fan-boy. I see HUGE potential in this often overlooked silo of the Azure Platform. The PDC10 announcements reinforced for me that Microsoft is really committed to making the Azure AppFabric the glue for enabling and connecting cloud solutions.

I’m betting many of you aren’t even aware of the existence of the Azure AppFabric Management Service. Up until PDC, there was little reason for anyone to look for it, outside of those seeking a way to create new issuers that could connect to Azure Service Bus endpoints. These are usually the same people that noticed all the Service Bus sample code uses the default “owner” issuer that gets created when a new service namespace is created via the portal.

How the journey began

I’m preparing for an upcoming speaking engagement and wanted to do something more than just re-hash the same tired demos. I wanted to show how to set up new issuers, so I asked about this on twitter one day, and @rseroter responded that he had done it. He was also kind enough to quickly post a blog update with details. I sent him a couple follow-up questions and he pointed me to a bit of code I hadn’t noticed yet: the ACM tool that comes as part of the Azure AppFabric SDK samples.

I spent a Saturday morning reverse engineering the ACM tool and using Fiddler to see what was going on. Finally, using a schema namespace I saw in Fiddler, I got a hit on an internet search and ran across the article "Using the Azure AppFabric Management Service" on MSDN.

This was my Rosetta stone. It explained everything I was seeing with the ACM and also included some great tidbits about how authentication of requests works. It also put a post-PDC article from Will@MSFT into focus on how to manage the new Service Bus Connection Points. He was using the management service!

I could now begin to see how the Service Bus ecosystem was structured and the power that was just waiting here to be tapped into.

The Management Service

So, the Azure AppFabric Management Service is a REST-based API for managing your AppFabric service namespace. When you go to the portal and create a new AppFabric service namespace, you’ll see a couple of lines that look like this:

[Image: portal screenshot showing the Registry URL, Management Endpoint, and Management STS Endpoint for a namespace]

Now if you’ve worked with the AppFabric before, you’re well aware of what the Registry URL is. But you likely haven’t worked much with the Management Endpoint and Management STS Endpoint. These are the endpoints that come into play with the AppFabric Management Service.

The STS Endpoint is pretty self-explanatory. It’s a proxy for Access Control for the management service. Any attempt to work with the management service starts with us giving an issuer name and key to this STS and getting a token back that we can then pass along to the management service itself. There’s a good code snippet in the MSDN article, so I won’t dive into this much right now.
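That said, the shape of the exchange is just the WRAP protocol over HTTPS. A rough sketch (issuerName, issuerKey, stsEndpoint, and managementEndpoint are all placeholders for values you’d pull from your own namespace; you’ll need System.Net, System.Collections.Specialized, System.Text, System.Linq, and System.Web):

    using (WebClient client = new WebClient())
    {
        NameValueCollection values = new NameValueCollection();
        values.Add("wrap_name", issuerName);          // e.g. the issuer created in the portal
        values.Add("wrap_password", issuerKey);
        values.Add("wrap_scope", managementEndpoint); // the resource we want a token for

        byte[] response = client.UploadValues(stsEndpoint, "POST", values);
        string body = Encoding.UTF8.GetString(response);

        // the response is form-encoded; pull out the wrap_access_token value
        string token = HttpUtility.UrlDecode(body.Split('&')
            .First(v => v.StartsWith("wrap_access_token="))
            .Split(new char[] { '=' }, 2)[1]);

        // subsequent management service calls then carry the header:
        // Authorization: WRAP access_token="<token>"
    }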

It’s the Management Endpoint itself that’s really my focus right now. This is the root namespace, and there are several branches off of it, each dedicated to a specific aspect of management:

Issuer – where our users (both simple users and x509 certs) are stored

Scope – the service namespace (URI) that issuers will be associated with

Token Policy – how long is a token good for, and signature key for ACS

Resources – new for connection point support

It’s the combination of these items that controls which parties can connect to a Service Bus endpoint and what operations they can perform. It’s our ability to properly leverage this that will allow us to do useful, real-world things like set up sub-regions of the root namespace and assign specific rights for a sub-region to users. Maybe even delegate management at that level, so various departments within your organization can each manage their own area of the service bus. :)

In a nutshell, we can define an issuer, associate it with a scope (namespace path) which then also defines the rules for that issuer (Listen, Send, Manage). Using the management service, we can add/update/delete items from each of these areas (subject to restrictions).

How it works

Ok, this is the part where I’d normally post some really cool code snippets. Unfortunately, I spent most of a cold, icy Minnesota Sunday trying to get things working. ANYTHING working. I struck out.

But I’m not giving up yet. I batched up a few questions and sent them to some folks I’m hoping can find me answers. Meanwhile, I’m going to keep at it. There’s some significant stuff here and if there’s a mystery as big as what I’m doing wrong, it’s that I’m not entirely sure why we haven’t heard more about the Management Service yet.

So please stay tuned…

What’s next?

After a fairly unproductive weekend of playing with the Azure AppFabric Management Service, I have mixed emotions. I’m excited by the potential I see here, but at the same time, it still seems like there’s much work yet to be done. And who knows, perhaps this post and others I want to write may play a part in that work.

In the interim, you get this fairly short and theoretical update on the Azure AppFabric Management Service. But this won’t be the end of it. I’m still a huge Azure AppFabric fan-boy. I will not let a single bad day beat me. I will get this figured out and bring it to the masses. I’m still working on my upcoming presentation, and I’m confident my difficulties will be sorted out by then.

Azure AppFabric – A bridge going anywhere

Yeah, I know. I said my next posts were going to be more articles about Windows Azure Diagnostics. Course, I also thought I’d have those up a week after the first. Life has just been too hectic of late, I guess. I’d love to claim I’m back of my own free will, but I feel I have a debt that needs to be paid.

So I’ve spent the last two months working on this Azure POC project. Nearly everything was done except for one important piece: connecting a hosted application to an on-premise SQL Server. The best-practice recommendations I’d been seeing for the last year on the MSDN forums always said to expose your data tier via a series of web services that your hosted application could then consume. You could secure those services however you saw fit and thus help ensure that your internal database remained secure.

Problem is, my application was already coded for a direct database connection, and it would have required a lot of rework for it to use services instead. In my case, I didn’t have the time, and I wasn’t very excited about the degree of risk reworking all that code would introduce to my project. So I wanted something I could implement that required only a connection string change. Alexsey, the MSFT architect who was providing guidance on this project, suggested I check out a blog post by Clemens Vasters.

That article was about using the Azure AppFabric to create a port bridge between two processes. Many of you are likely familiar with the concept of port tunneling. Heck, most of us have likely used this technique to bypass firewalls and web proxies for both legitimate and more dubious reasons. Tunneling routes traffic that would normally go through one port to a different port on another machine, which in turn reroutes the connection back to the original port we wanted. An Azure AppFabric port bridge differs in that, like other AppFabric connections, both ends of our bridge make outbound connections to the AppFabric’s relay to establish the connection.

Going out the in door

It’s this outbound connection that is so powerful. Those security guys we all love to hate and slander pay close attention to what they allow to come into their networks. But they’re usually pretty forgiving about outbound connections. They assume if we’re permitted on their network, we should be allowed a bit of leeway to reach out. Even in more hardened networks, it’s easier to get approval for an outbound connection than it is to get a new inbound connection opened.

It’s this outbound connection that will be our doorway to what lies behind the firewall. You see, we’re just interested in establishing a connection. Once that’s done, traffic can flow through it bi-directionally. Heck, we can even multiplex the connection so that multiple requests can operate simultaneously over it.

How it works

Clemens has done a great job in his blog post and its sample code of getting much of the work out of our way. You’ve got a packaged service host that will run either as a Windows service or a console application. You’ve also got a client application (and there’s a Windows Azure worker role out there that I hope will be posted soon as well). Both of these build on some almost magical connection management logic built on top of a netTcpRelayBinding. But here’s Clemens’ picture of it:

[Image: Clemens’ port bridge diagram — client, port bridge agent, AppFabric relay, and port bridge service host]

Nice picture, but it needs some explanation. The bridge agent opens a specified port (in the case of this diagram, 1433 for SQL Server) and monitors for connections to that port. Each connection gets accepted and its contents handed off to the Azure AppFabric. On the other end we have the port bridge service host. This host registers our AppFabric Service Bus endpoint and also looks for any inbound messages. When these messages are received, it spins up a connection to the requested resource inside the firewall (at its lowest level this is just an instance of a TcpClient object) and passes the data to it. Return messages come back in just the same manner.

Now this is where the magic starts to kick in. You’d think that with all these moving parts the connection would be pretty slow. And admittedly, there’s a bit of overhead in getting the connection established. But like any TCP-based connection, once the “pipe” has been established, the overhead of moving information through all this isn’t much greater than just routing through multiple firewalls/routers. In fact, you can look at our agents and the bus as just software-based routers. :D

Back to the problem at hand, my SQL connection

Ok, back to the task at hand. I’ve got my Windows Azure service that needs to connect to an on-premise SQL Server. First, I need an agent.

We start by standing up a worker role that has an endpoint configured for port 1433. This role will contain our agent code. If you look at Clemens’ sample, you can see it’s not that complicated to strip out what you need and put it into a worker role. I also know there’s a sample worker role that’s already been created by one of the folks at MSFT, which hopefully they will get posted soon.

With our agent role in place, the next step is to modify the connection string of my application. The only part we care about is the server portion of the connection string. We need to take something that looks like Server=mysqlserver and instead make it look more like Server=tcp:<servicename>.cloudapp.net,1433. This routes the application’s SQL connection to our worker role (and yes, this means it’s important that you have secured your worker role so that it doesn’t allow just anyone to connect to it). Of course, I’d also recommend that you use a non-standard port to further help obscure things. If you do, make sure the endpoint and the port in your connection string match.
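In config form, that’s the only change (every name and value here is a placeholder; note the non-standard port):

    <connectionStrings>
      <!-- was: Server=mysqlserver;Initial Catalog=MyDatabase;... -->
      <add name="MyAppDb"
           connectionString="Server=tcp:myservice.cloudapp.net,8433;Initial Catalog=MyDatabase;User ID=myUser;Password=myPassword;" />
    </connectionStrings>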

I also need to point out that there’s some magic Windows Azure plays with our port numbers. In the code for your agent role, don’t just monitor the configured endpoint port. The Windows Azure load balancers map the configured endpoint to a different port that’s assigned to each instance; this is how they are able to route traffic between multiple instances of the same application. So when your agent role starts up, have it use the RoleEnvironment to get the appropriate endpoint port(s).
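Something like this in the agent role’s startup does the trick (“PortBridgeEndpoint” is just my placeholder for whatever you named the endpoint in your service definition):

    // ask the runtime for the actual port assigned to this instance's endpoint
    RoleInstanceEndpoint endpoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["PortBridgeEndpoint"];
    int listenPort = endpoint.IPEndpoint.Port;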

On the service host side of things, we need to configure our service by indicating what server we’re connecting to and what port to use when talking to it. Now this is where things can get dicey. It’s exceedingly important that the client’s “Port Bridge Target” and the service host’s “targetHost” values match. More importantly, these values should be either the server name or IP address of the box that represents the final endpoint of our bridge. The reason it’s so critical that these values match is that this name is also assigned to our AppFabric service endpoint. If they don’t match, things simply won’t connect.

Not very deep, but there it is

I wish I had more time to show you some demonstration code; I’ll add that to my list for another day. But today I’ve walked you through the basics of what we accomplished and hopefully shared some of my excitement about it with you. SQL Server is just one practical application of this approach. You could bridge simple web service connections, SMTP connections… basically anything that is TCP/IP-based could be connected using this same approach.

There’s just one closing piece of business. I’m a firm believer in giving credit where it is due. And as I said at the top of this post, I have a debt that needs to be paid. In this case, to Alexsey Savateyev and Wade Wegner of MSFT. Both these gentlemen put up with my many stupid questions and blank stares as I tried to get this working. But now that it does, you can guarantee I’ll be using it again and again. Alexsey drew the short straw and was assigned to help me on my project. But Wade, bless him, saw me struggling on twitter and stepped up to help me out as well. Both these folks have my highest praise for their patience and technical know-how.

And of course, the next logical step is porting the service host to Windows Azure…

.NET Service Bus (Part 2 continued) – Hands on Queues

Back again in rapid succession and feeling very strong. Over the weekend I smoked a bunch of turkey (most yummy) and also finally made my first weight-loss goal. How did I decide to celebrate these accomplishments, you ask? I decided that I’d jump right back in with another blog post about .NET Service Bus queues. Yes, I’m a geek.

If you read my last post, you may recall that I said .NSB queues were more robust, and I pointed out a few key differences. Well, I was understating things just a bit. They are actually significantly more robust and also require a few different techniques when dealing with them. In this update I will explore those differences in a bit more detail, as well as cover the basics of sending and receiving messages via .NET Service Bus queues.

Queue Policy (yeah for governance)

Azure Storage queues are pretty basic. As far as I can tell, they were created to be a simple, lightweight option for doing inter-process communication. As such, they were kept pretty simple, the single-scoop vanilla ice cream of queue storage mechanisms. Well, .NSB queues are closer to what I think of as traditional queues. Each queue has a policy that dictates how the queue will behave. Here are some of the queue policy settings:

Authorization (AuthorizationPolicy) – Determines if authorization is required to send and/or receive messages to a queue.
Discoverability (DiscoverabilityPolicy) – Is access public, or can the queue only be discovered by managers (default), listeners, or senders (based on an Access Control Service token)?
ExpirationInstant (DateTime) – When the queue will expire. If not periodically renewed, the queue and all its messages will be deleted (unrecoverable).
MaxMessageAge (TimeSpan) – Messages that have been in the queue in excess of the TimeSpan value are dropped. Default is 10 minutes, maximum is 7 days.
Overflow (OverflowPolicy) – How to handle messages when the queue is full.
PoisonMessageDrop (EndpointAddress) – Where poison (unreadable) messages get sent.
TransportProtection (TransportProtectionPolicy) – Sets whether a secure connection is required to send to or receive from this queue.

There are other properties (some of which appear to be for future use), but these are the ones I wanted to call to your attention today.

Of key interest are MaxMessageAge and PoisonMessageDrop. When you put a message into an Azure Storage queue, it stays there until either the message or the queue is removed. .NSB queues will automatically purge messages that exceed the specified TimeSpan value. And as anyone who has worked with queues will tell you, occasionally messages get “stuck” and can’t be read. So it’s nice to be able to use PoisonMessageDrop to specify a SOAP endpoint to which these messages can be directed.
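Setting those two looks roughly like this (a sketch based on the property types in the list above; the poison endpoint address is a placeholder):

    QueuePolicy tmpPolicy = new QueuePolicy();
    tmpPolicy.MaxMessageAge = TimeSpan.FromHours(1);  // purge anything older than an hour
    tmpPolicy.PoisonMessageDrop = new EndpointAddress("sb://mysolution.servicebus.windows.net/poisonhandler/");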

Yeah, but how do I send/receive?

Ok, enough blabbering; this is supposed to be a hands-on blog, so how about we get to some hands-on already. Any direct interaction with the queue is done via the QueueClient object. So we need to get an instance of that object for our queue before we start doing any real work. To do that, we need the QueueManagementClient again.

In my last post we started by crafting our URI and creating TransportClientEndpointBehavior-based credentials. We’ll need those same items this time around, but instead of calling the QueueManagementClient’s CreateQueue method with a QueuePolicy object, we’re going to call the GetQueue method. Here’s our code snippet:

            // create queue URI
            Uri tmpURI = ServiceBusEnvironment.CreateServiceUri("sb", SolutionName, "/" + textBox1.Text.Trim());

            // authenticate against the queue manager
            TransportClientEndpointBehavior tmpCredentials = new TransportClientEndpointBehavior();
            tmpCredentials.CredentialType = TransportClientCredentialType.UserNamePassword;
            tmpCredentials.Credentials.UserName.UserName = SolutionName;
            tmpCredentials.Credentials.UserName.Password = Password;

            QueueClient tmpQueue = QueueManagementClient.GetQueue(tmpCredentials, tmpURI);

The result is an instance of QueueClient called tmpQueue that we can now use to send/receive messages in our queue.

Sending and Receiving Messages (finally)

Alright, so now we’re ready to start sending and receiving messages. Doing this is pretty simple.

Here’s an example of a send…

            System.ServiceModel.Channels.Message tmpMessage =
                System.ServiceModel.Channels.Message.CreateMessage(MessageVersion.Default,"urn:testsend", "my message");
            tmpQueue.Send(tmpMessage, TimeSpan.FromSeconds(30));

Simple enough. Since I’m using a Windows Forms application as my host, I need to be careful with ambiguous references to the Message class (the Windows.Forms namespace also has such a class). I’ve used MessageVersion.Default here, but I did some testing and Soap12WSAddressing10 works as well. So depending on your specific scenario, it could get even simpler.
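For example, this variant also worked in my testing:

            System.ServiceModel.Channels.Message tmpMessage =
                System.ServiceModel.Channels.Message.CreateMessage(MessageVersion.Soap12WSAddressing10, "urn:testsend", "my message");
            tmpQueue.Send(tmpMessage, TimeSpan.FromSeconds(30));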

Next up is reading a message…

            System.ServiceModel.Channels.Message tmpReadMsg = tmpQueue.Retrieve();
            MessageBox.Show(tmpReadMsg.GetBody<string>());

Now this is where some additional differences between Azure Storage and .NSB queues start to become apparent. The Retrieve method automatically removes the message from the queue. This differs from Azure Storage, where a second call is required to remove it. Additionally, unlike Azure Storage, if we want to retrieve multiple messages, we need to use a different method (RetrieveMultiple) to get them. However, unlike Azure Storage, the retrieval of multiple items does not appear to be limited to a maximum of 25 messages (I can’t find confirmation on this though, so I’ll try to do some testing).

But… but… what about peeking?

Ok, so we want to see what’s in a queue without actually removing anything. Simple enough: we just switch to the PeekLock and PeekLockMultiple methods. Notice the “Lock” adjective in both those method names. Yes, they actually operate more like the way messages are read from Azure Storage. After a PeekLock (or PeekLockMultiple), a separate call to the DeleteLockedMessage or ReleaseLock method is needed, using the retrieved (or peeked) Message object. If this is not done, the peeked message will be automatically unlocked after 60 seconds. Unfortunately, unlike Azure Storage queues, there doesn’t appear to be a way to vary the timeout period.
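Put together, the peek-and-delete pattern looks roughly like this (a sketch using the method names above against our tmpQueue instance):

            System.ServiceModel.Channels.Message tmpPeeked = tmpQueue.PeekLock();

            // inspect the message, then either consume it for good...
            tmpQueue.DeleteLockedMessage(tmpPeeked);

            // ...or release it so another reader can pick it up
            // tmpQueue.ReleaseLock(tmpPeeked);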

Confused yet?

If you’ve worked with Azure Storage queues, you’re likely feeling a bit confused right now. While Azure Storage and .NSB queues both serve a similar purpose, the actual implementations are radically different. .NSB queues offer policy enforcement, access control, and router integration. Azure Storage queues have greater persistence and flexibility in retrieval options. So you’ll need to weigh which option is best suited to your needs.

Next time, we’re going to be kicking things up yet another notch as I start into .NSB routers. Things will get really fun as we start combining queues and routers together to handle how messages get processed.

.NET Service Bus (Part 2) – Hands on with Queues

It’s been an unforgivably long time since I wrote Part 1 of my series of hands-on articles with the .NET Service Bus. I’ve tried to convince myself it was because I was confused by the WCF nature of the service relay. I also told myself to shelve the series while I started working on my “App Fabric” architecture model. Well, yesterday I realized I was just making things too hard. The service relay is just one piece of the .NET Service Bus, which in turn is just one component of .NET Services. And while the service relay is a great piece, I was trying to do more with it than I needed to.

So with this epiphany in hand, I’m turning my attention to another feature of the .NET Service Bus: queues.

I spent much of my playtime with Windows Azure messing with Azure Storage’s queue mechanism. IMHO, .NSB’s queues are a little more robust, and I’m happy about that. They also complement the relay service nicely. The relay service is great when both the host and subscriber are online. But if either one could be offline, a queue is a great way to relay messages. There are also a series of important differences in how messages are pulled from .NSB queues. However, we’re gonna cover that in my next post.

NOTE: This article presumes you already have a .NET Services solution created and know the solution name and password.

What’s the same and what’s different

As I just mentioned, .NSB’s queues are a little more robust. But it would be helpful to put this in a bit of context by comparing the two.

Both are based on RESTful services that require authorization for access. And with both, you can create, delete, and list queues, as well as put and retrieve messages from them. The .NSB version of queues, however, has a fully supported API for .NET access (the StorageClient API for Azure Storage is only a sample right now). You can define policies for .NSB queues and control access a bit more granularly. Finally, we have the ability to monitor for queue exceptions (possibly caused by policy violations).

As if this weren’t enough, we can also hook these queues up to another .NSB feature, routers. But more on that another day. :)

Creating our first queue

So before we can start working with our queues, we need to actually create them. The process of creating a queue is pretty simple: you define its policy, and using that policy, create the queue. So let’s get started with our hands-on…

Fire up Visual Studio and create a Windows Forms application, drop a text box and button onto the form, and start a “Click” event handler for the button. Next, we need to add references to the .NET Service Bus SDK assembly (Microsoft.ServiceBus), System.ServiceModel, and System.Runtime.Serialization. Lastly, put a using clause for Microsoft.ServiceBus into the code-behind to make things easier later on.

We’re going to do the heavy lifting inside our event handler. Create two string values that will store our solution name and password for use in the code (I know, not a best practice, but this is just a demo *shrug*). The remaining work is split into four steps: create our URI, create our authentication credentials, define the queue policy, and create the queue.

Define the URI

For creating our URI, we’re going to use ServiceBusEnvironment.CreateServiceUri. This is a solid best practice for working with the service bus.

    Uri tmpURI = ServiceBusEnvironment.CreateServiceUri("sb", SolutionName, "/" + textBox1.Text.Trim());

Notice that we’re using our queue name (from the text box) as the path of our URI. We’ve also specified the protocol as “sb” because we’re addressing the service bus itself, not the queue.

Papers Please (Authentication Credentials)

Our URI specified “sb” as the protocol because we’re going to talk to the service bus. As such, we need to present our credentials. We’ll set up a simple username/password pair using the solution credentials.

    TransportClientEndpointBehavior tmpCredentials = new TransportClientEndpointBehavior();
    tmpCredentials.CredentialType = TransportClientCredentialType.UserNamePassword;
    tmpCredentials.Credentials.UserName.UserName = SolutionName;
    tmpCredentials.Credentials.UserName.Password = Password;

Nothing new here really. It’s not all that different than when we were relaying messages through the bus via WCF. The SolutionName and Password values are the strings we created earlier and should be set appropriately for our .NSB solution.

Queue Policy (this is where things get interesting)

This is where we really get to see a key difference between Azure Storage queues and .NSB queues. To create a .NSB queue, we must define a policy that indicates how the queue will behave. We do this using the QueuePolicy class and its properties. The MSDN documentation for this class lists them all, so for the moment, I’ll just discuss the ones we’re using in my example.

    QueuePolicy tmpPolicy = new QueuePolicy();
    tmpPolicy.Discoverability = DiscoverabilityPolicy.Public;
    tmpPolicy.Authorization = AuthorizationPolicy.NotRequired;
    tmpPolicy.ExpirationInstant = DateTime.Now.AddDays(7);

Discoverability lets us control whether this queue will be publicly visible. Since I’ve set this to Public, anyone going to my solution’s URI will be able to see the ATOM feed of my queues. Authorization sets the type of credentials that are needed to access the queue. And finally, ExpirationInstant determines when our queue’s resources can be reclaimed. This last item is an important distinction between Azure Storage and .NSB queues. With Azure Storage, once a queue is created, you need to explicitly remove it. With .NSB, we need to periodically update/renew this value if we want the queue’s existence to continue.

Now, I don’t know why this is the case, but I would surmise it is because, unlike with Azure Storage, you aren’t paying for the resources required to persist your queue. Given this, it’s reasonable that steps were taken to allow the .NET Service Bus to reclaim unused resources.

Create the queue. It Lives!

The final step is to create the queue. This is super simple…

    QueueManagementClient.CreateQueue(tmpCredentials, tmpURI, tmpPolicy);

QueueManagementClient is what handles the interaction with the .NSB queue manager. We’re only calling one of several static methods contained in this class. We can also delete a queue, get an instance of QueueClient associated with a specific queue, renew a queue, or get just a queue’s policy. The only item lacking here that I’d like to see is a way to get a list of all the queues, as we could with Azure Storage. However, since we can crawl the ATOM feed for our solution and find any public queues, it’s not something I’m going to lose sleep over.
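For reference, my assumption (based on the CreateQueue signature above — check the SDK docs for the exact overloads) is that the other statics follow the same pattern:

    // assumed signatures, mirroring CreateQueue above
    QueueManagementClient.DeleteQueue(tmpCredentials, tmpURI);
    QueuePolicy currentPolicy = QueueManagementClient.GetQueuePolicy(tmpCredentials, tmpURI);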

*ponders an idea for another blog post*

Wrapping up

That’s all I wanted to cover for today (I’m getting tired of my own SUPER long blog posts). Thanks for stopping by. Next time I’m going to dig into the QueueClient and working with the queue contents themselves.

.NET Service Bus (Part 1) – Hands On with Relays

I’m on the road this week and had expected to get more accomplished than I did sitting in the office. I started drafting this blog post at 5:30am last Sunday but opted to start over completely, and this is the end result. The upside to all this lost time is that I have worked out the next 3-4 articles I want to post. And if all goes well, I should be able to throw them all out in rapid succession. :)  Lucky you.

Today we’re going to start diving into the first of three features of .NET Services: the Service Bus. I’m running the July CTP bits on a Windows 7 VPC with Visual Studio 2008. This article series has taken longer to come together than I would have liked because, just like when I was working with Azure Storage and REST calls, I’m a bit of a noob when it comes to WCF. And while WCF isn’t required for working with the Service Bus, it’s the easiest way to work with it when using .NET (there are Java, PHP, and Ruby SDKs either already available or in the works).

An Internet Service Bus?

Go ahead and snicker. An IBM-centric cohort of mine did as well. Yes, this is a simple service bus brought to an internet scope. It performs the basic function of allowing you to relay messages between processes, including some queuing and routing dynamics. It even includes some integrated access control functionality. Where it really deviates from the norm is that this service bus was created to operate in the cloud. This required that the implementation be designed to be friendly when dealing with issues like firewalls, NATs, and proxy addresses. You know, all those little annoying things that have been created to help keep our internet-exposed applications safe.

The rules under which this is accomplished are actually fairly straightforward. First and foremost, all application connections are outbound to the service bus. The bus doesn’t need to know the address of any service you’re hosting because that service will reach out and connect to the .NSB (.NET Service Bus; I’m gonna be lazy from here on out). It acts like its own DMZ, with all the interested parties needing to step up to the table if they want to talk. Secondly, all communications are done over a limited number of ports. For TCP-based relays, you will need 808, 818, 819, and 828. For HTTP connections, good old port 80 will be sufficient (be sure to refer to the official docs for the latest updates on required ports).

New WCF Bindings

As I mentioned earlier, when doing .NET development, the ideal way to interact with the .NSB is through WCF. To that end, a new series of bindings and binding elements has been created. For the most part, these new bindings and elements are simply counterparts to traditional WCF bindings. Here they are (standard WCF binding → equivalent relay binding, with its relay transport element):

BasicHttpBinding → BasicHttpRelayBinding (Http(s)RelayTransportBindingElement)
WebHttpBinding → WebHttpRelayBinding (Http(s)RelayTransportBindingElement)
WSHttpBinding → WSHttpRelayBinding (Http(s)RelayTransportBindingElement)
WS2007HttpBinding → WS2007HttpRelayBinding (Http(s)RelayTransportBindingElement)
WSHttpContextBinding → WSHttpRelayContextBinding (Http(s)RelayTransportBindingElement)
WS2007HttpFederationBinding → WS2007HttpRelayFederationBinding (Http(s)RelayTransportBindingElement)
NetTcpBinding → netTcpRelayBinding (TcpRelayTransportBindingElement)
NetTcpContextBinding → netTcpRelayContextBinding (TcpRelayTransportBindingElement)
no standard equivalent → netOnewayRelayBinding (OnewayRelayTransportBindingElement)
no standard equivalent → netEventRelayBinding (OnewayRelayTransportBindingElement)


On an only semi-related note, this blog will likely go on a bit of a hiatus this fall as I get back to working on some long overdue certs. That includes trying to get certified in WCF, so I won’t even try to talk like an expert on the subject right now. Suffice to say that each of these new bindings is useful for specific circumstances, and knowing when and how to use which one will be your challenge.

Creating our Service Bus Project

Before we dive into creating our solution, we need to make sure our hosted service bus is set up. Head over to the .NET Services portal at portal.ex.azure.microsoft.com and sign in. If you don’t already have a solution, click on “Add Solution” to create one. When giving it a name, be careful, as you won’t be able to delete or rename your solution (at least not currently).

[Image: .NET Services portal showing the solutions list]

You’ll notice that for each of the two solutions displayed above, we have access to both the .NET Services Access Control Service and the Service Bus. We’ll need a couple pieces of data from here for our initial sample project: the solution name, our password, and the endpoint URI. The URI can be viewed by clicking on the “Service Bus Registry” link.

A single instance of the service bus can easily host multiple relays. To take advantage of this, we are going to put “SimpleService” on the end of our URI to target things a bit more narrowly. And since we’re using the .NSB, we need to change our URI protocol from tcp or http to “sb”. The final URI looks like this:

sb://<solutionname>servicebus.windows.net/SimpleService/

Our first relay

So let’s start with a simple example. I originally wanted to do this demo using Azure-hosted applications, but decided I’ll save that for its own post at a later date. So for this first example, we’re going to use two Windows console applications. Our simple service will expose a single “GetDate” method, which will in turn be consumed by a second console application.

Fire up Visual Studio and start by creating a Windows console application. I called mine “SimpleService”. We then add a second Windows console application (“SimpleClient”), as well as a class library (“SimpleServiceDefs”). The SimpleService app will host our service using an outbound TCP binding, connecting to the Service Bus relay. SimpleClient will then connect to the bus as well to consume our service.

Our Service Contract

Next up, we need to add classes to the SimpleServiceDefs class library that define the contract for our WCF service. I’ve opted to put it in its own library (I’m a firm believer in the use of class libraries as a best practice), but you could just as easily put it in either of the other two projects. First, add a reference to System.ServiceModel. We need this reference because the classes that define our contract will be decorated with a few attribute tags. I then add an interface, iSimpleService, and the class SimpleService. SimpleService has only one method, GetDate, which accepts no parameters and returns a DateTime.

The iSimpleService interface

     using System;
     using System.Collections.Generic;
     using System.Linq;
     using System.Text;
     using System.ServiceModel;

     namespace SimpleServiceDefs
     {
         [ServiceContract]
         public interface iSimpleService
         {
             [OperationContract]
             DateTime GetDate();
         }
     }


The SimpleService class

     using System;
     using System.Collections.Generic;
     using System.Linq;
     using System.Text;

     namespace SimpleServiceDefs
     {
         public class SimpleService : iSimpleService
         {
             public DateTime GetDate()
             {
                 Console.WriteLine("GetDate called");
                 return DateTime.Now;
             }
         }
     }

And there we have our service contract. Next we have to create a host for it.

The Service Host

Our SimpleService project will be the host of our service. We’ll start by adding in a reference to System.ServiceModel and a new assembly, Microsoft.ServiceBus. This second reference is a component of the .NET Services SDK; it contains the definitions for all the new bindings and endpoints mentioned above. We’re then going to add an application configuration file.

Let’s start with the configuration file. We’ll use this to define the properties of our WCF-hosted service. Here’s my configuration…

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.serviceModel>
    <services>
      <service name="SimpleServiceDefs.SimpleService">
        <endpoint address="sb://[solution].servicebus.windows.net/SimpleService/"
                  binding="netTcpRelayBinding"
                  contract="SimpleServiceDefs.SimpleService"
                  behaviorConfiguration="default" />
      </service>
    </services>
    <behaviors>
      <endpointBehaviors>
        <behavior name="default">
          <transportClientEndpointBehavior credentialType="UserNamePassword">
            <clientCredentials>
              <userNamePassword userName="[solution]"
                                password="[password]" />
            </clientCredentials>
          </transportClientEndpointBehavior>
        </behavior>
      </endpointBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>

Start by substituting the “solution” and “password” values. These values come from the service bus project we created above. Next up, look at the binding I used: netTcpRelayBinding. This is one of the new service bus relay bindings; this particular one is simply a service-bus-specific version of netTcpBinding. You’ll also notice that the transportClientEndpointBehavior node remains underlined in the editor. This is normal, although I’m hoping it will eventually be fixed. For now it’s just an annoyance.

Now all that’s left is the code within our service host application. Start by adding using clauses for System.ServiceModel and SimpleServiceDefs. Then insert the following code.

        static void Main(string[] args)
        {
            ServiceHost myHost = new ServiceHost(typeof(SimpleServiceDefs.SimpleService));

            myHost.Open();

            Console.WriteLine("Press Enter to exit...");
            Console.ReadLine();

            myHost.Close();
        }

If we’ve done everything correctly, the console application should start up and display the “press enter” message. However, we have no way to know if we’re actually connected to the service bus. Let’s add two more using clauses: Microsoft.ServiceBus and System.ServiceModel.Description. Next, between the declaration of the service host and its opening, we need to insert a few extra lines.

            // set endpoint as publically discoverable
            ServiceRegistrySettings settings = new ServiceRegistrySettings();
            settings.DiscoveryMode = DiscoveryType.Public;

            // set all host endpoints as publically visible
            foreach (ServiceEndpoint se in myHost.Description.Endpoints)
                se.Behaviors.Add(settings);

This code sets the service endpoints as discoverable. You can see the result by going to the ATOM feed for our service. Here’s mine…

[Image: the service’s ATOM feed displayed in a browser]

No, I’m not afraid of sharing this. Because without my password, you still can’t use the service. I just wanted to touch on this for now. So we’ll leave a more in-depth discussion on security and the Access Control Service for another post.

The Service Client

With the service bus host built and running, we’re ready to build a client to consume our service. Add the same references to the SimpleClient project that we added to SimpleService: SimpleServiceDefs, System.ServiceModel, and Microsoft.ServiceBus. We’ll also add an app.config file and copy the contents of the app.config used for our service into it. We’re gonna make one minor tweak to the config by adding a name attribute to the endpoint node, as shown below.

      <endpoint address="sb://[solution].servicebus.windows.net/SimpleService/"
                binding="netTcpRelayBinding"
                contract="SimpleServiceDefs.iSimpleService"
                name="RelayEndpoint" 
                behaviorConfiguration="default" />

Now we dive into the code of our client. We’ll add using clauses for ServiceModel and SimpleServiceDefs, then paste the following into the ‘Main’ method.

            // create a channel factory using the named endpoint from app.config
            ChannelFactory<iSimpleService> channelFactory =
                new ChannelFactory<iSimpleService>("RelayEndpoint");
            iSimpleService channel = channelFactory.CreateChannel();

            // call the remote service through the relay
            DateTime response = channel.GetDate();

            Console.WriteLine("Current Date is {0}", response.ToShortDateString());
            Console.WriteLine("Press enter to end program.");
            Console.ReadLine();

            // close the channel before closing the factory
            ((IClientChannel)channel).Close();
            channelFactory.Close();

In this code, we declare our WCF channel, call the remote service (channel.GetDate), and then display the result on the screen. While my method returns a date, you could just as easily return a string, or pass some parameters and operate on them. But let’s test it and make sure it works.
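For reference, the contract behind that GetDate call would look something like the sketch below. This is my approximation, not the exact code from the SimpleServiceDefs project:

        // a sketch of the service contract (the real definition lives in the
        // SimpleServiceDefs project); the method name comes from the client code above
        [ServiceContract]
        public interface iSimpleService
        {
            [OperationContract]
            DateTime GetDate();
        }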

Start by running the service host application manually. I find it easiest to right-click on the project in the solution explorer, click the “Open Folder in Windows Explorer” option, and launch the executable from there. Once you get its “press enter” message, we’re ready to run the client. Make the ServiceClient application the startup project and then start debugging it. If all goes well, you should end up with two console windows like these…

[screenshots: the SimpleService host console and the ServiceClient console]

On the next episode of “As the Cloud Turns”…

Before I close things out for this post, I want to thank Aaron Skonnard of Pluralsight for his great whitepapers, the folks on the MSDN forums, and all the tweeps on Twitter. They’ve all helped me get a handle on the .NET Service Bus and WCF. It’s a great community, and I hope I’m at least giving back as much as I’m getting.

That’s all I have for today, but not all I have to share with you. In the coming weeks we’ll cover topics such as security, queues and routers, and hosting our services in Windows Azure worker roles. Once those topics have been covered, we’ll move on to the Access Control Service, and once it’s available again, even hosted workflows.

Till then, thanks for reading and be sure to drop me a comment to let me know if there’s anything related to this topic you’d like to see. TTFN!

.NET Services – July CTP Update

The .NET Services team has updated their blog with details on the July CTP update. Aside from an outage on July 7th, here are some highlights:
  • Queues and routers are being wiped; if you want your data saved, you’ll need to persist it yourself.
  • Workflow is being removed (it will reportedly return when .NET 4.0 is released)

As a result, I will likely hold off on publishing any further details on .NET Services until after this maintenance is concluded. I could use a bit more time to work with it. :)

 

.NET Services – Introduction to the Service Bus

Darned if this post hasn’t been rough to write. I don’t know if it’s my continued lack of caffeine (I quit it about 10 days ago now) or the constant interruptions. At least the interruptions have been meaningful. But after two days of off-and-on effort, this post is finally done.

As some of you reading this may already be aware, I’ve spent much of my spare time over the last several weeks diving into Microsoft’s .NET Services. I’m finally ready to start sharing what I’ve learned in what I hope is a much more easily digestible format. Nothing against all the official documents and videos that are out there; they’re all excellent information. The problem is that there’s simply too much of it. :)

Overview of .NET Services

Let’s start with this diagram from Microsoft…

[diagram: the Azure Services Platform]

As you can see, Microsoft’s Azure Services Platform is composed of various components. I’ve already covered Windows Azure, the Platform as a Service (PaaS) component, in other posts. However, a key point in Microsoft’s cloud messaging is the notion of Software + Services, or S+S. If Windows Azure or any of the products listed across the top are the software, then the blocks in the middle represent the services. I’ve spent the last two weeks doing my best to get my head wrapped around .NET Services. The work will continue for many weeks to come (I don’t have the luxury of devoting myself to it full time), but I’m looking forward to uncovering even more of this great component of Microsoft’s Azure Services.

.NET Services currently consists of three different features:

Service Bus – We’ve all heard of the notion of an Enterprise Service Bus (ESB). Well, this is an Internet Service Bus. It provides a way to enable communication between processes even with firewalls, NAT, etc. protecting our intranet resources from the perils of the cloud.

.NET Access Control Service – Of course, what good would being able to access things across the bus be if there weren’t a way to help secure them? This service handles authentication and then provides back the user’s claims.

.NET Workflow Service – Workflows are great, but for the cloud we need to ensure they’re scalable, and we’d also like to be able to integrate them easily with our Service Bus.

The Service Bus – in 60 seconds or less (ok, 90. I read slow).

The purpose of the Service Bus is to give developers a way to easily overcome the challenges of working across internet security barriers such as firewalls and NAT, thus allowing them to quickly implement secure messaging between processes regardless of where those processes reside (on-premises or in the cloud). This is the feature I’ve spent the bulk of my time looking into so far, and there’s a ton here. Of course, it doesn’t help that I hadn’t really done anything with WCF prior to learning about the Service Bus.

Because it’s a service bus, it has a service registry to support discovery, and it can handle access control and routing. However, the service bus is not just a single thing; it’s actually composed of three distinct pieces. There’s the relay, used to move messages between running processes. Next, there are routers, used (strangely enough) to route messages. And lastly (and most recently added), there are queues, which provide an asynchronous, persistent method of delivering messages between processes (similar to, but more robust than, the queues in Azure Storage).

Of course, none of these operate on their own. Behind the scenes, the Service Bus works with the Access Control Service and SQL Data Services. The ACS handles authentication and publication of the claims, while SDS provides persistence.

Controlling Access

All access to endpoints exposed via the bus, regardless of their type (relay, router, or queue), must be secured. The Service Bus already recognizes the Access Control Service as a trusted authority for claims-based authentication. In turn, the ACS trusts Windows Live ID. Future releases are supposed to support AD Federation Services (codename: Geneva) and private Active Directories. It also has its own built-in identity provider that does simple userid/password authentication. This built-in provider is supposed to go away eventually; however, I can easily see it persisting into the commercial release and beyond. I also suspect we’ll see support for non-Microsoft solutions like Tivoli in future releases/updates.

The ACS uses claims-based authorization. You present some credentials, they are authenticated, and then a list of claims is digitally signed and returned to the caller. These claims are then used by services such as the Service Bus to determine what a user can access.

The ACS is a big topic that I’m not prepared to get into just yet. However, I will say that you can interact with the ACS via WCF, REST, or HTTP. There’s also supposed to be an ATOM interface, but no details on it have been published as yet. I’ll get into this area in more depth in future posts, once I’ve figured it out a bit more. :)

Workflow for the Cloud

And the final component is the ability to create your own .NET workflow and deploy it into the cloud. The advantage here is that these workflows, already in the cloud, can then be used to help coordinate the interaction of services wherever they might live. The other advantage is that because they reside within Microsoft Azure, they have the scalability needed for a robust cloud implementation. If they’re not needed, they aren’t consuming resources; if 100 instances are needed, they are spun up to meet demand.

As solutions demand more and more services, not all of them directly under your control, coordinating them will be increasingly important. The .NET Workflow Service will help meet that need, leveraging its cousins, the ACS and the Service Bus, to assist with those duties.

I haven’t really dug into this item at all yet, but I’m keenly interested in it. So you’ll be seeing more of it in the (hopefully) not-too-distant future.

Update: Workflow will be going offline in July 2009 as the team prepares for their next milestone (after .NET 4.0 ships). For details, go to http://blogs.msdn.com/netservicesannounce/archive/2009/06/12/upcoming-important-changes-to-net-workflow-service.aspx

*ding* Your brain is full

I wish I could share with you exactly how much my brain starts swimming when I begin to ponder what could be accomplished with these services. If Windows Azure allows you to put applications into the cloud, these services are the glue that can help bind applications into cohesive business solutions. I’m really looking forward to learning more about the various services through some hands-on practice and exploration, and you can be certain I’ll be sharing it with you every step of the way.

BTW, it took this blog over four months to go from its start to 500 page views. In just over a month since, that has doubled to over 1,000. Thanks to everyone who’s been visiting and sharing this blog. A special thanks to Roger Jennings of Oakleaf Systems. I continue to be impressed by Roger’s work, and I’m flattered that he finds my posts valuable enough to keep linking to them. He’s directly responsible for much of the traffic I see, and I can’t express how much I appreciate his support.

As always, if there’s something specific you’d like to hear more on, just let me know.
