.NET Services – July CTP Update

The .NET Services team has updated their blog with details on the July CTP update. Aside from an outage on July 7th, here are some highlights:
  • Queues/Routers are being wiped; if you want data saved, you will need to persist it yourself.
  • Workflow is being removed (it will reportedly return when .NET 4.0 is released)

As a result, I will likely hold off on publishing any further details on .NET Services until after this maintenance is concluded. I could use a bit more time to work with it. :)

 

Cloud Computing vs. Virtualization

Twice this past week someone asked me to explain the differences between virtualization and cloud computing. Both times I answered in an overly drawn-out way, but I kept pondering the question in hopes of coming up with a more succinct answer. Here it goes…

Cloud Computing is a mechanism, an approach, for the delivery of services. Virtualization is one possible service that could be delivered. However, like most services, virtualization can be delivered via mechanisms other than cloud computing.

Yeah, I know it sounds like a marketing sound bite. But it's short, and I believe it gets the point across. I arrived at it by revisiting my What is Cloud Computing blog post from last May and comparing that to the solutions currently on offer.

The Definitions

There are two emerging definitions of cloud computing. NIST is working on their version, and Gartner is sort of working on theirs. Here are the highlights:

NIST – On-demand self-service, Ubiquitous network access (internet standards based), location independent resource pooling, rapid elasticity, and measure/metered service

Gartner – service-based, scalable and elastic, shared, metered by use, and uses internet technologies

Pretty similar on the whole. Now for Virtualization. From Wikipedia:

Desktop virtualization is the use of virtual machines to let multiple network subscribers maintain individualized desktops on a single, centrally located computer or server. The central machine may be at a residence, business or data center. Users may be geographically scattered but are all connected to the central machine by a proprietary local area network (LAN) or wide area network (WAN) or the Internet.

Straightforward enough.

Where my thoughts took me

As I pondered how to really help clarify the differences, I was troubled. Many cloud solutions, both public and private, leverage virtualization to deliver their functionality. Amazon’s EC2 is all about virtualized infrastructure. Microsoft’s Windows Azure uses virtual instances of a customized version of Windows Server. It’s no wonder that folks were confused.

I opted to focus on what was being delivered, on what the service being provided actually was. If I virtualized a server, it wouldn’t have the bulk of the attributes listed by NIST or Gartner. It’s just a box sitting out there on a network that I can access remotely. It’s not inherently scalable or elastic. Its use is not metered, and it only lives where I have it running.

However, various services do allow me to create virtualized resources. And these services do demonstrate the various attributes of cloud computing. So while virtualization may be both a means to deliver cloud computing and a solution delivered by it, virtualization is not inherently, in and of itself, a cloud computing solution.

Short and sweet

So there we have it. A simple way to explain the difference, and hopefully one that helps clear up some of the confusion out there. It’s a major simplification, of course. There are different types of clouds, just as there are different types of cloud computing solutions. But I do believe this is a simple enough definition to act as a starting point.

.NET Services – Introduction to the Service Bus

Darned if this post hasn’t been rough to write. I don’t know if it’s my continued lack of caffeine (I quit about 10 days ago now) or the constant interruptions. At least the interruptions have been meaningful. But after 2 days of off-and-on effort, this post is finally done.

As some of you reading this may already be aware, I’ve spent much of my spare time the last several weeks diving into Microsoft’s .NET Services. I’m finally ready to start sharing what I’ve learned in what I hope is a much more easily digestible format. Nothing against all the official documents and videos that are out there. They’re all excellent information. The problem is that there’s simply too much of it. :)

Overview of .NET Services

Let’s start with this diagram from Microsoft:

[Image: Azure Services Platform diagram]

As you can see, Microsoft’s Azure Services is composed of various components. I’ve already covered Windows Azure, the Platform as a Service (PaaS) component, in other posts. However, a key point in Microsoft’s cloud messaging is the notion of Software + Services, or S+S. If Windows Azure or any of the products listed across the top are the software, then the blocks in the middle represent the services. I’ve spent the last 2 weeks doing my best to get my head wrapped around .NET Services. The work will continue for many weeks to come (I don’t have the luxury of being able to devote myself to it full time), but I’m looking forward to uncovering even more of this great component of Microsoft’s Azure Services.

.NET Services currently consists of three different features:

Service Bus – We’ve all heard of the notion of an Enterprise Service Bus (ESB). Well, this is an Internet Service Bus. It provides a way to enable communication between processes even with firewalls, NAT, etc. protecting our intranet resources from the perils of the cloud.

.NET Access Control Service – Of course, what good would being able to access things across the bus be if there wasn’t a way to help secure them? This service can handle authentication and then provide back the user’s claims.

.NET Workflow Service – Workflows are great. But for the cloud we need to ensure that they are scalable, and we’d also like to be able to integrate them easily with our Service Bus.

The Service Bus – in 60 seconds or less (ok, 90. I read slow).

The purpose of the Service Bus is to give developers a way to easily overcome the challenges of working across internet security barriers such as firewalls and NAT, allowing them to quickly implement secure messaging between processes regardless of where those processes reside (on-premise or in the cloud). This is the feature I’ve spent the bulk of my time looking into so far. And there’s a ton here. Of course, it doesn’t help that I really hadn’t done anything with WCF prior to learning about the Service Bus.

Because it’s a service bus, it has a service registry to support discovery, and it can handle access control and routing. However, the service bus is not just a single thing. It’s actually composed of three distinct pieces. There’s the relay, used to move messages between running processes. Next, there are routers, used (strangely enough) to route messages. And lastly (and most recently added), there are queues, which provide an asynchronous, persistent method of delivering messages between processes (similar to, but more robust than, the queues in Azure Storage).

Of course, none of these operate on their own. Behind the scenes, the Service Bus works with the Access Control Service and SQL Data Services. The ACS handles authentication and publication of the claims, while SDS provides persistence.

Controlling Access

All access to endpoints exposed via the bus, regardless of their type (relay, router, or queue), must be secured. The Service Bus already recognizes the Access Control Service as a trusted authority for claims-based authentication. In turn, the ACS trusts Windows Live ID. Future releases are supposed to support AD Federation Services (Codename: Geneva) and private Active Directories. It also has its own built-in identity provider that does simple userid/password authentication. This built-in provider is supposed to go away eventually. However, I can easily see it being persisted into the commercial release and beyond. I also suspect we’ll see support for non-Microsoft solutions like Tivoli in future releases/updates.

The ACS uses claims-based authorization. You present some credentials, they are authenticated, and then a list of claims is digitally signed and returned to the caller. These claims are then used by such items as the Service Bus to determine what a user can access.

The ACS is a big topic that I’m not prepared to get into just yet. However, I will say that you can interact with the ACS via WCF, REST, or HTTP. There’s also supposed to be an ATOM interface, but no details on it have been published as yet. I’ll get into this area in more depth in future posts, once I’ve figured it out a bit more. :)

Workflow for the Cloud

And the final component is the ability to create your own .NET Workflow and deploy it into the cloud. The advantage here is that these workflows, already in the cloud, can then be used to help coordinate the interaction of services wherever they might live. The other advantage is that because they reside within Microsoft Azure, they have the scalability needed for a robust cloud implementation. If they’re not needed, they aren’t consuming resources. If 100 instances are needed, they are spun up to meet demand.

As solutions demand more and more services, not all directly under your control, coordinating them will be increasingly important. The .NET Workflow Service will help meet that need. It will leverage its cousins, the ACS and the Service Bus, to assist with those duties.

I haven’t really dug into this item at all yet. But I’m keenly interested in it. So you’ll be seeing more of it in the hopefully not too distant future.

Update: Workflow will be going offline in July 2009 as the team prepares for their next milestone (after .NET 4.0 ships). For details, go to http://blogs.msdn.com/netservicesannounce/archive/2009/06/12/upcoming-important-changes-to-net-workflow-service.aspx

*ding* Your brain is full

I wish I could share with you exactly how much my brain starts swimming when I begin to ponder what could be accomplished with these services. If Windows Azure allows you to put applications into the cloud, these services are the glue that can help bind applications into cohesive business solutions. I’m really looking forward to learning more about the various services via some hands-on practice and exploration. And you can be certain I’ll be sharing this with you every step of the way.

BTW, it took this blog over 4 months to go from start to 500 pages viewed. In just over a month this has doubled to over 1000. Thanks to everyone that’s been visiting and sharing this blog. A special thanks to Roger Jennings of Oakleaf Systems. I continue to be impressed by Roger’s work and I’m flattered that he finds my posts valuable enough to continue to link to. He’s directly responsible for much of the traffic I see and I can’t express how much I appreciate his support.

As always, if there’s something specific you’d like to hear more on, just let me know.

Cloud Computing – backlash against infrastructure constraints?

I’m quitting caffeine. As a result my neural processes have been operating at a deficit. So I’m going to blame that for the completely random thought that popped into my head a couple days ago. I’m going to blame the amount of learning I have to do regarding .NET Services for why you’re getting to hear about this idea instead of another hands-on Azure related post.

Details on the pricing of Azure Services are supposed to be coming in July (at the partner conference). As such, I’m spending more time trying to predict what solution offerings will appeal to clients looking to take advantage of the cloud. That of course requires that I try to determine their motivations. This notion I had centers around an incident that I was caught up in several years ago.

I’ve been doing IT for over 17 years now. I joke that I’ve done everything from Mainframe to Mobile (which isn’t actually a joke). I think the incident that stands out most in my mind was when the sales team at a former employer started a project outside of the company’s IT department. This “small project” quickly grew and, after about 6 months, was staffed by a team of nearly a dozen high-priced consultants. I won’t get into the details about what happened to the project or why. The important thing is how this happened. The sales group had a need that the IT department was unable to satisfy. So they found other resources to meet their needs.

So I thought back to the above example and could not help but ponder whether one motivation for moving to the cloud was this “need” to not be limited by existing infrastructure. How many folks will look to the cloud not because of cost or features, but simply because its near-endless resources mean they are no longer bound by the constraints of their existing infrastructure? They can operate outside of enterprise infrastructure governance and budgeting.

This is not that far a stretch if you listen to the marketing messages some of the cloud providers are tossing around. They tell how cloud computing can enable you to deploy new solutions without having to invest in additional infrastructure. I can easily see some project manager, under the gun to get a solution delivered, being frustrated when told that a server needed for their project has had its delivery delayed, or that it can’t be installed in the datacenter until the electrical contractor completes an upgrade. They’ll see all these resources out there in the cloud just waiting for the swipe of the corporate credit card. Who wouldn’t be tempted by this?

This doesn’t in any way belittle what cloud computing has to offer. In fact, it just helps strengthen the case for cloud computing. However, it does add a note of caution. We need to make sure that the eager project manager understands the pros and cons of moving to the cloud. We as IT professionals need to make sure we’re helping decision makers fully understand what is required when leveraging the cloud. If we don’t, then we risk having to help clean up the messes that are created when decisions are made without full understanding of the potential risks.

We have to help enable our organizations. But we are also obligated to make sure it is done responsibly.

Ok, thought has been shared. So back to my .NET Services learnings. *cranks up the Rammstein and Rob Zombie* That is if I can stay awake. :)

A last word on Azure Queues (Performance)

First off, my most sincere apologies for taking so long to get back to this blog. Between work and my personal life, my time has been more limited than I would have liked. It also doesn’t help that this last project took substantially more effort than I had hoped. But at long last, here I sit on a lovely Sunday morning putting this post together to share my results. I’d like to preface by saying that while I am happy with some aspects of what I’ve done, there’s still much left to do. Hopefully I’ll be able to come back to it at some point.

Some time ago, someone came by the MSDN Windows Azure forums and asked a question regarding performance of Azure Queues. They didn’t just want to know something simple like call performance, but wanted to know more about throughput, from initial request until final response was received. So over the last month I managed to put together something that lets me create what I think is a fairly solid test sample. The solution involves a web role for initializing the test and monitoring the results, and a multi-threaded worker role that actually performs the test. Multiple worker roles could also have been used, but I wanted to create a sample that anyone in the CTP or using the local development fabric could easily execute.

My test app features several improvements over my previous queue examples. I added a “clear” method to my AzureQueue class. This helps make sure that queues get emptied when I need them to be. I also made sure my queue messages were built as objects. I created a base class, QueueMsg, which would handle the work of serializing and de-serializing messages for queuing. I then created two specific dependent classes, StartMsg and StatusMsg. This approach allowed me to create a message object, set its properties, then send it to the queue easily. Like so…

            // build message
            StartMsg tmpStartMsg = new StartMsg(int.Parse(ddlMsgSize.SelectedValue),
                                                int.Parse(ddlIterations.SelectedValue));

            // serialize msg and put into queue
            string tmpMsg = tmpStartMsg.Serialize();
            _RequestQueue.Put(Server.UrlEncode(tmpMsg));

As usual, I’ve kept my implementation pretty basic. I didn’t put XML attribute tags into my classes so the defaults are used. I also didn’t put in any checks to make sure my messages don’t exceed the 8k size limits of queues. You’ll want to strongly consider both these issues when dealing with any real world implementation.
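To make the QueueMsg approach described above a bit more concrete, here’s a minimal sketch of what such a base class might look like. This is not the actual code from my project; it assumes XML serialization via XmlSerializer, and the property names on StartMsg are hypothetical:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Hypothetical sketch of a QueueMsg-style base class. It handles
// turning a typed message object into a string for the queue and back.
public abstract class QueueMsg
{
    // Serialize this instance (using its concrete type) to an XML string.
    public string Serialize()
    {
        XmlSerializer serializer = new XmlSerializer(this.GetType());
        using (StringWriter writer = new StringWriter())
        {
            serializer.Serialize(writer, this);
            return writer.ToString();
        }
    }

    // Deserialize an XML string back into a typed message object.
    public static T Deserialize<T>(string xml) where T : QueueMsg
    {
        XmlSerializer serializer = new XmlSerializer(typeof(T));
        using (StringReader reader = new StringReader(xml))
        {
            return (T)serializer.Deserialize(reader);
        }
    }
}

// A request message carrying the test parameters (hypothetical names).
public class StartMsg : QueueMsg
{
    public int MessageSize { get; set; }
    public int Iterations { get; set; }

    public StartMsg() { }  // parameterless ctor required by XmlSerializer

    public StartMsg(int messageSize, int iterations)
    {
        MessageSize = messageSize;
        Iterations = iterations;
    }
}
```

With something like this in place, adding a new message type is just a matter of deriving another class, which is what makes the object-based approach so convenient.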

Next up was figuring out how to actually simulate the load. I decided to go with a worker role that contained 3 threads. The root thread would be the controller, responsible for initializing and monitoring the processing as well as reporting the progress back to the web role. There would be an initiator sub-thread that starts the loop and watches for the close of our processing loop. Finally, there is the relay sub-thread, which does nothing more than look for a message from the initiator and relay it back. As with my queue message class, both of these sub-threads are based on classes that inherit from a single base class.
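The sub-thread structure described above could be wired up along these lines. This is a rough sketch, not the actual project code; the class names QueueWorker, Initiator, and Relay are placeholders for whatever the real base and derived classes are called, and the loop bodies are elided:

```csharp
using System;
using System.Threading;

// Hypothetical base class for the worker role's two sub-threads.
// Each derived class supplies its own processing loop.
public abstract class QueueWorker
{
    private Thread _thread;

    // Signaled once the processing loop has actually begun running.
    public readonly ManualResetEvent LoopStarted = new ManualResetEvent(false);

    // Launch the processing loop on a background thread.
    public void Start()
    {
        _thread = new Thread(() =>
        {
            LoopStarted.Set();
            ProcessLoop();
        })
        {
            IsBackground = true
        };
        _thread.Start();
    }

    protected abstract void ProcessLoop();
}

// Starts the test loop and watches for the closing response.
public class Initiator : QueueWorker
{
    protected override void ProcessLoop()
    {
        // put the StartMsg on the request queue, then poll for completion...
    }
}

// Looks for a message from the initiator and relays it straight back.
public class Relay : QueueWorker
{
    protected override void ProcessLoop()
    {
        // read a message from the queue and put it back on the response queue...
    }
}
```

The root (controller) thread would then simply construct an Initiator and a Relay, call Start() on each, and watch their progress.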

The final piece was the parameters of the test. We wanted to vary both the size of the message (aka payload) being sent, as well as how many iterations to execute.

I’m not going to post all the code here in the blog, but I have uploaded it so you can pull the project down and test it yourself.

Of course, the end result is to get some data on queue performance:

  Iteration: 1000
  Total Execution Time: 00:05:45.9060000

  Avg. Read: 60  (in ms)
  Avg. Write: 36  (in ms)
  Avg. Delete: 42  (in ms)
  Avg. Round Trip: 345  (in ms)

  Fastest Read: 31    Slowest Read: 203
  Fastest Write: 31    Slowest Write: 234
  Fastest Delete: 0    Slowest Delete: 593

All in all, I’m fairly pleased with these results. IMHO, queues are not meant to be a bulk processor of data but simply a reliable method of delivering messages between processes. These tests show that Azure Queues definitely serve this purpose, especially given that they exist in the cloud. The numbers will be higher if you’re running things locally and accessing hosted storage, but they’re still respectable enough. I can honestly see many future uses for queues and expect that they will play a key role in many Windows Azure applications.

As I mentioned, this solution isn’t as mature as I’d like. There are a couple issues with the web role that should still be worked out. It has problems with response messages being displayed out of order as well as an issue with messages not always being deleted properly after they are read from the queue (likely causing the out of order issues). This is annoying, but fortunately does not impact the final results if you let a test run through completion.

The final thing that still confounds me is the delay before we start seeing the results come back. It’s almost like there is a substantial delay between when the web role sends the request via a queue to the worker role and when the worker starts processing. I’ve pondered it a bit, and all I can think of is that since Azure Storage runs as a scalable cloud-based service, there is a delay between when two processes (which are presumably being routed to different instances of the Azure Storage services) send messages, due to caching. It’s also possible that the delay is due to lag between when a queue message is written to Azure Storage’s backend and when it’s then visible to be read. If anyone from the Azure team knows the answer to this, please let me know. :)

Sorry again for the delay in getting this posted as well as the poor state the code is in. But at least it’s finished, and I can now move on to my next big topic… .NET Services. :)

Ugh, sorry for the lack of an update

Things have been very hectic of late. I’m transitioning between clients, my wife’s grandfather passed away, and my folks are coming to visit this weekend. So unfortunately I have had little time to finish up my next hands-on (an Azure queue performance test) or focus on the .NET Services Service Bus.
 
Have faith, I’ve become too attached to this little space to walk away from it. Just need to get real life dealt with first. There are still a lot of Azure topics I want to visit. Heck, I’m even weighing an idea I was given today about trying to host a session at the next Twin Cities code camp.