Wednesday, December 3, 2008

Sadly no Hyper-V on Vista SP2

I was really happy after reading Mary-Jo Foley's post Vista SP2: What's inside?, which indicated that Vista SP2 would include Hyper-V support.

Unfortunately my crappy laptop is only x86. I searched around a bit and found this little tidbit of info at the Springboard:

Hyper-V *
Windows Vista SP2 includes Hyper-V™ technology, enabling full virtualization of server workloads

*To clarify, Hyper-V is not included in Windows Vista SP2, it is part of the Windows Server 2008 service pack. This means that when you install SP2 for Windows Server 2008, or if you install a slipstreamed version of Windows Server 2008 with SP2, the RTM version of the Hyper-V role will be included. Hyper-V was released after Windows Server 2008, which means that the role you currently install is a pre-release role and needs to be updated to bring it up to RTM. This update will be applied (only if necessary) automatically when you install SP2.

So I guess no Hyper-V on Vista :( and since postings about this started to pop up, I thought I'd share this before people start formatting their hard drives in euphoria (I know I almost did).

Monday, December 1, 2008

Random thoughts on how "Oslo" is going to affect the everyday life of a developer on the Microsoft platform...

I've been pondering a bit on how this "Oslo" business is going to affect me as a developer: in what ways will it change the way I architect and implement the applications I'm working on? I'm sure this will be a journey that takes a while and will surely result in further postings on the subject.

The eminent mister Aaron Skonnard has written a long and very well-written article, Introducing "Oslo", that I recommend you read. I will not go into that level of detail but rather skim the surface and add a few of my own reflections on how this will affect me as a developer.

"Oslo" is a modeling platform that consists of three things:

A language called "M" used for authoring models and textual DSLs. I think Don Box summed it up pretty well, as usual, in his presentation of the language at PDC08.

He states that it is a language about data, or rather the process of capturing, schematizing and transforming data. He also clearly states that "M" is neither an object-oriented language nor a replacement for T-SQL.

"M" actually consists of three parts: MSchema which is what you use to schematize your data, MGrammar lets you create textual DSLs and finally MGraph which is what is the compile result of the input to the "M" compiler.


A tool called "Quadrant for interacting visually with models. This is a very slick WPF based application which is extremly customizable. It looks very nice but it feels a little bit like drowning :) when you start using it.

Hopefully this will be easier to work with when it releases, since some of the slides at PDC08 hint that different SKUs are to be made available. I am speculating here so don't hold me accountable :) but the ones mentioned were:

Quadrant Web Editor (ASP.NET)
Quadrant Service Editor (WCF/WF)
Quadrant Entity Editor (Entity Framework)
Quadrant Schema Editor (SQL/XML)
The way you work with "Quadrant" is to create workspaces of infinite size that are zoomable (a really cool feature is popping a model element to the foreground, letting you work with a part of the model while keeping a nice overview picture of the complete model). The vision behind this is working with large wall-mounted multitouch screens in conference rooms (remember Minority Report, we are not that far from this now).


A repository for storing and sharing your models. The vision is that most products from Microsoft will use the repository for storing their configuration data and such; first out is the new application server "Dublin", and System Center and Team Foundation Server have announced that they will move towards using the repository in future versions.


To be able to start talking about what we can use this model-driven platform for, we kind of need to define what a model is, or at least what Microsoft is talking about in regards to "Oslo". You can sort models into three general categories:

Models for communication: these are typically your boilerplate UML diagrams of your applications that you produce up front and then often forget about. The main point here is to communicate intent when creating your software and bridge the gap between users and developers.

Models assisting development: pretty much when you take the communication models and generate code from them. This serves as a great kickstart for greenfield development, but in my personal experience the efforts that went into reverse engineering code into models and vice versa (as in tools such as Rational Rose) never really succeeded, although people keep trying.

Models driving development: this is about declarative programming, and there is a whole slew of examples such as HTML, CSS, XAML, BPEL, .NET attributes, .NET configuration and COM+ to name a few. This is the space where Microsoft wants to change the way we do software with "Oslo".
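To make the declarative idea concrete, here is a tiny C# sketch of the ".NET attributes" flavour of model-driven code (the class and property names are just illustrative): the class contains no serialization logic of its own, the attributes form a small model that the DataContractSerializer acts on at runtime.

using System;
using System.IO;
using System.Runtime.Serialization;

// The attributes are the "model": they declare what should be serialized,
// and the framework does the actual work based on that declaration.
[DataContract]
public class Customer
{
    [DataMember] public string Name { get; set; }
    [DataMember] public decimal CreditLimit { get; set; }
}

class AttributeModelDemo
{
    static void Main()
    {
        var serializer = new DataContractSerializer(typeof(Customer));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, new Customer { Name = "Contoso", CreditLimit = 1000m });
            Console.WriteLine("Serialized {0} bytes without writing any serialization code.", stream.Length);
        }
    }
}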

For examples of applications that are model driven (or data driven if you prefer), you can look at Microsoft SharePoint and Microsoft Dynamics. Both of these applications are all about customization and they are driven by a repository containing their models. You have most likely written something similar yourself (I know I have), maybe not on the magnitude of SharePoint, but model-driven applications are really not that uncommon. The definition of model in this context is that the way the application looks and functions is driven by a model in some form of repository.


That's enough background for now; what are we going to do with this "Oslo" stuff then? Well, for one thing we will be using it all over the place when working with various Microsoft products. It's not going to change our everyday situation as developers very much, since we will continue to work in pretty much the same fashion as before, but a whole new range of integration possibilities opens up once we go for the central repository of models, both for us and for Microsoft.

I can see three areas of usage for the everyday developer using Oslo today:

Modeling our domains in a more friendly fashion and then using the compiler to generate the SQL statements needed to create the database. One thing that will be a challenge here, however, is dealing with non-greenfield development and evolving your schema over time. From what I have seen so far these areas are yet to be fully solved.

One really nice feature here is that we can very easily create a textual DSL for working with the domain that our users could understand and use for populating the needed configuration data; we might be able to simplify data entry to such a degree that writing a maintenance client wouldn't be needed. I seriously doubt that this would work for more complex data, but it is still an interesting idea since we could enforce compliance through the DSL's syntax and thus prevent corrupt data from being inserted.

Simplifying and automating development: this is an area where "Oslo" really shines. Since it is so easy to create a textual DSL with the framework we are provided, we could easily create a simplified version of, for instance, WiX (which is itself an abstraction over MSI), adding yet another layer of abstraction over the installation process.

Although you can almost always speed up development or solve a tough problem with another layer of abstraction, there is one drawback to this approach: the more layers of abstraction, the more power over the details we lose. Thus abstractions like these pretty much always lead to homogeneity across a domain, which is good for performance but bad for innovation.

I can personally see a bunch of places where we could use "Oslo" in the frameworks we have built at my work. In these situations we strive for homogeneity and speed of development, and really the main reason for not rolling your own little DSL today is the sheer cost associated with writing compilers and such.

Driving our applications with models: this is the most interesting one in my opinion. I remember an application I wrote back in 1998 where we wanted to produce a dynamic user interface based on templates (we even went the extra mile and built a template designer as well) using Java and reflection, and even though it was a very fun ride it cost a lot of money to implement. Had we done this targeting "Oslo" and WPF instead it would have been a breeze in comparison. You can check out Josh Williams' post Using MGrammar to create .Net instances through Xaml for a really cool example of dynamically producing XAML from a textual DSL.
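For what it's worth, the reflection trick from back then looks roughly like this in C# (the type names are purely illustrative; imagine them coming from a template or model stored in a repository):

using System;

class TemplateDrivenUi
{
    static void Main()
    {
        // Pretend this list came from a UI template stored in a repository.
        string[] template = { "System.Text.StringBuilder", "System.Collections.ArrayList" };

        foreach (string typeName in template)
        {
            // Resolve the type by name at runtime and create an instance of it,
            // instead of hard-coding the construction in the application.
            Type type = Type.GetType(typeName);
            object instance = Activator.CreateInstance(type);
            Console.WriteLine("Created " + instance.GetType().FullName);
        }
    }
}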

Erik Wynne Stepp also wrote a post on the subject of dynamic interfaces and the impact of "Oslo", which is a good read as well.

If you want to get more information about what "Oslo" is about you can watch and read the following material:

"Oslo" Developer Center

A Lap around "Oslo"
"Oslo": The Language
"Oslo": Repository and Models
"Oslo": Building Textual DSLs
"Oslo": Customizing and Extending
the Visual Design Experience


First Look at M – Oslo’s Modeling Language
First Look at Quadrant - Oslo’s Modeling Tool

Models Remixed

MGrammar in a Nutshell
MGrammar Language Specification

Thursday, November 27, 2008

Will the real slim shady please stand up? ... (VMWare Lab Manager vs Microsoft Team Lab)

I've been nagging myself for a while about why the offerings of Team Lab felt so familiar, and I just remembered why.

Sometime around spring 2007 I had a peek at a product called "VMware Lab Manager" to see if we could benefit from it when virtualizing our test lab environments. We never ended up going down that road, but that had more to do with the fact that we didn't have the energy to introduce yet another product at the time.

Guess what though, the products are extremely similar; just take a look at one of the key goals of Lab Manager:

Capture and Reproduce Software Defects—Every Time
Enable developers and testers to quickly reproduce software defects and resolve them earlier in the software lifecycle—and ensure higher quality software and systems. VMware Lab Manager enables “closed loop” defect reporting and resolution through its unique ability to “snapshot” complex multi-machine configurations in an error state, capture them to the library, and make them available for sharing—and troubleshooting—across development and test teams.

I do believe that Microsoft has been heavily influenced by Lab Manager when designing the Team Lab SKU of Team System; as you can see in the architectural overview of Lab Manager below, you get pretty much the same offerings in both products.



One thing that bugs me though is that neither Microsoft nor VMware has put the effort into integrating Lab Manager with TFS, which has been done for other ALM suites:
Integrate with Leading Test Management Tools
Enable users to access VMware Lab Manager seamlessly from within their preferred test management tools. Off-the-shelf integrations with Borland SilkCentral Test Manager and HP Quality Center allow users simply to select the desired multi-tier configuration and Lab Manager will do the rest — automatically provision the test environment, tear down the environment after a test is run, and capture the state of the application, test date and virtual machine configuration in the event of a failed test.

Maybe this is something for the TFS community to produce? I'm interested if someone can provide an environment for testing with licenses for the VMware stuff, since I don't have access to them myself (if you are interested in a CodePlex project about this, let me know).

So why am I writing this post? Well, since the idea of eliminating the dreadful no-repro or it-works-on-my-machine scenario appeals to me, I wanted to make sure that people are aware that VMware has an offering in this area as well. The fact that you don't have to wait until VSTS 2010 hits the streets is also a big plus.

Another good thing: since Team Lab is going to work with both VMware and Hyper-V virtualization, any effort you put into working with Lab Manager today should migrate easily to Team Lab in the future. There is one drawback with the Team Lab SKU as it looks today compared to Lab Manager, namely that it is very tightly integrated with the new test client (codenamed "Cameo") and as far as I have seen there is no web-based management for the Team Lab stuff (yet at least). Personally I believe it should be split from test management, since in my experience it is not the testers who provision the lab environments.

No more "Death-by-PowerPoint" ... Or how to improve your presentational techniques




A while back I did my first public presentation at a conference and I just didn't feel I managed to pull it off as well as I had wanted to. So I started to search around for some material and ended up buying a bunch of books that I just finished reading.

Since I'm personally committed to not accidentally causing any more "death-by-PowerPoint", I'll keep posting about my successes and failures whenever I feel I have something to contribute on the subject of presentation design and delivery.

Slide:ology by Nancy Duarte

This book is truly a must-have for all of us non-designers who still want to make our presentations memorable for our audiences. Nancy Duarte (one of the people behind Al Gore's successful climate change presentation) has poured 20 years of knowledge into this book; the book itself is beautifully presented and is a joy to read.

It works splendidly as a reference book on your desk for whenever you need to be creative and put together your presentations. I believe her husband is very much right in this quote from the foreword of the book:

...slide:ology is destined to become the desk reference for building effective presentations and is a must read for all who present...

For more information go to slideology.com

Presentation Zen by Garr Reynolds

I must say that this book was truly a joy to read; I actually read it from cover to cover in one sitting (almost, I had to take care of the kids during the day so there was a brief period of non-reading).

The book is all about a state of mind, in my opinion. The author does not present us with a method that we must follow rigorously to be successful; rather, it gives us various pointers on how to create good presentations and become a better presenter.

Some key take aways are:
Less is more, keep it Simple.

Go analog: turn off the computer and start with pen and paper.

Design matters and it is not the icing on the cake, it's the foundation.
A picture says a thousand words.

Put yourself in your audience's shoes: why are they there?

If you are serious about becoming a better presenter you should get a copy of this book, since it will most definitely inspire you. For more information go to presentationzen.com

Beyond Bullet Points by Cliff Atkinson

This book is all about method and gives you a straightforward recipe for creating presentations the BBP way, including a bunch of templates for getting started.

Although it contains a lot of really good ideas which I am sure I will use the next time I create a presentation, the book itself is a rather boring read and uses way too many words to get to the point. I ended up skimming the book instead of reading it from cover to cover. This is kind of sad since I believe the author could have conveyed his message in half the pages or less. It is still a book you should have read if you are working with slide-based presentations.

Some of the key ideas in the book has to do with:
Structure and how the brain processes information.

3 is a magic number.

Visual cues throughout the presentation are important.

Headlines and illustrations: keep the amount of information on the slides minimal.

I won't go into detail about the method since I expect the author would not like that, so for more information go to beyondbulletpoints.com

Apart from reading the books presented in this post, you should start hanging around a couple of sites as well ...

SlideShare.net for studying other presentation designs.
iStockPhoto.com is a great place for finding graphics for your presentations.

Friday, November 21, 2008

Test Lab Environment Automation In The Cloud (SkyTap Virtual Lab)

Yesterday when I was digging into some more details around Team Lab (the newest SKU of Visual Studio Team System) I stumbled upon something that I sure wish I had had access to when we first started to virtualize our quality assurance lab environment.

A company called SkyTap announced a product called Virtual Lab back in April 2008 (read about it here). The product aims to provide a virtualized lab environment in the cloud; this has some real potential and I will surely look into it as a platform for our future lab environments.


What I find most attractive in a product like this are three things:

1. It will ease the demand on the operations department when it comes to in-house expertise in virtualization technology (this is particularly needed for small and mid-size companies that simply can't afford to staff that type of competency).

2. The self-service provisioning model, where you add the resources you need as you go and simply pay for what you use. No more large capital expenditure requests; we can instead move these costs into running operational costs. This is also a very big deal for agile teams in my opinion, since one of the problems we have is that the pressure on the resources spikes on and off when running multiple agile teams where every team has a dire need for its own environment.

3. We get a library of baselined virtual images, which will save us tons of time compared to configuring and creating these ourselves.

Otherwise the product is very similar to VMware Lab Manager; it allows us to create labs consisting of multiple machines. We can easily create snapshots of the environment whenever we find a bug and attach a link to that snapshot to our defect report, which the developer can later bring back to life to investigate in the same environment as the tester. We also get access to a REST-based automation API for our lab environments, and much more.

Another interesting tidbit is the announcement they made at PDC08 about their integration with TFS. Nothing fancy, but they have a custom control which we can embed in our work item type forms that shows the available snapshots, so we can get the link straight in the Visual Studio IDE and double-click it to get there (if you want a peek at how it looks, watch this screencast).

Tuesday, November 18, 2008

Clueless about Azure? Grab the Azure Services Training Kit and give it a go!

Just stumbled across the Azure Services Training Kit, which looks really nice as a starting point for playing around with the Azure Services Platform:

The Azure Services Training Kit will include a comprehensive set of technical content including samples, demos, hands-on labs, and presentations that are designed to help you learn how to use the Azure Services Platform. This initial PDC Preview release includes the hands-on labs that were provided at the PDC 2008 conference. These labs cover the broad set of Azure Services including Windows Azure, .NET Services, SQL Services, and Live Services. Additional content will be included in future updates of this kit.


Download it and start playing...

SQL Services: Codename "Huron" - Sync Enabled Cloud Data Hub

As I mentioned in my previous post, the Microsoft Sync Framework and SQL Services guys have teamed up for some cool projects for the cloud.

Codename "Huron" which is one of them seems to be the answer to one of my intial questions I've been thinking about, namely the ability to maintain an application on-premise and using the Azure Platform for extending this application to handle peak loads or simply slicing of parts to run in the cloud.



As you can see in the picture, "Huron" sits in the cloud acting as a master data hub, talking to the local sync providers that ship with the project. Currently they have built providers for Access and SQL Server Compact, but this will be extended to include SQL Server as well (as you can see in the quote below).

Leverage the power of SQL Data Services and Microsoft Sync Framework to enable organizations and individual workers to build business data hubs in the cloud allowing information to be easily shared with mobile users, business partners, remote offices and enterprise data sources all while taking advantage of new services in the cloud. This combination provides a bridge, allowing on-premises and off-premises applications to work together. Using “Huron”, enable sharing of relational stores like Microsoft Office Access, SQL Express, SQL Server Compact, and SQL Server, enable B2B data sharing, and push workgroup databases to field workers and mobile users.

The driving technology behind this project is the Microsoft Sync Framework, so if you're not entirely up to speed on that you could start by having a look at the following article: Introduction to the Microsoft Sync Framework Runtime.

The configuration of the synchronization process is highly flexible: you can decide which tables you want to put in the cloud, and you will be able to auto-sync bi-directionally or put the synchronization on a schedule.

Sadly the download link is not yet available on the project homepage, but as soon as it is I will take it for a spin, which I would like to encourage you guys to do as well; this is an important building block for a smooth transition path between on-premise and the cloud.

For more information about the "Huron" and other interesting SQL Services incubation projects be sure to keep tabs on SQL Services Labs

Sunday, November 16, 2008

SQL Services: Codename "Anchorage" - SyncToy Moves To The Cloud

The Microsoft Sync Framework and SQL Services guys have teamed up to produce some rather interesting incubation projects; the first one is codename "Anchorage":

We’re evolving the popular SyncToy application to enable much more than just file/folder synchronization between PCs! With this project, providers will be able to register and be discovered in a variety of sync groups including contacts, files, favorites, videos, as well as synchronization across services such as the Live Mesh, PhotoBucket.com, Smugmug.com, and more. Powered by the Microsoft Sync Framework - this E2E and hub for sync providers has value for both consumers AND developers...
This project aims to provide synchronization between services. It's a provider-based model that allows us to create so-called sync groups, allowing for greater flexibility when dealing with multiple sources and heterogeneous data. At first glance it looks like this should really be integrated with Live Mesh.

Unfortunately the download link is still not public, so I guess we will have to wait and see; I'll probably post some more on this topic once the bits are available.

For more information about the "Anchorage" and other interesting SQL Services incubation projects be sure to keep tabs on SQL Services Labs

Friday, November 14, 2008

Missing my custom icons for my desktop IE links (...hint they are called favicons...)

Finally I managed to wrap my head around a small but really annoying thing that occurred when I upgraded to Vista on my laptop. After upgrading and restoring all my favourite website links on my desktop, all the icons reverted to the big blue E, which is the icon for IE. At first I thought it was related to the switch to Vista, but it seems it's an IE7-related issue, as you can read below.

Anyway, I'm sure there are a few more poor suckers like me who haven't figured it out yet, so I thought I'd share the joy by posting about it...

The reason behind this behavior is the following (you can read more about it here):

Because the shell asks for 48x48 icons, but favicons are 16x16. Stretching them would have looked bad. This decision was made late in the IE7 cycle. Many people have complained and we are considering a fix for a future release.
The remedy is really simple, just do the following: right-click on your desktop, open the View menu, select Classic Icons and off you go (the picture below shows the menus in question):

Unfortunately the classic icons are smaller (16x16) and don't look that nice, but I still prefer to have the customized icon of the site in question so I can find the link quickly when looking at my desktop.

Thursday, November 13, 2008

Windows Azure an introduction and what will it mean to the corporate developer? (Part 4)

This will be the final installment in my series about the Azure Services Platform. This time I'm going to talk about some of the available building blocks (in my opinion the more interesting ones), namely .NET Services (formerly known as BizTalk Services) and SQL Services (formerly known as SQL Server Data Services).

I will not talk about Live Services, mainly because I haven't had the time to look at it in more detail, but I might get around to that later on since there are some interesting aspects to some of the offerings in there, even for a more corporate-centric application.

.NET Services - ServiceBus

This very cool piece of technology gives us a service bus in the cloud, which may change the way many companies solve their integration scenarios in the future.

The service bus is not about hosting your services in the cloud, but rather about making on-premise services publicly available in a really easy way. Everything is based upon WCF so your previous investments in this technology really pay off; the only thing we have to do is change a binding, so exposing an already existing service through the service bus is really a one-liner (see the sketch below).
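To give a feel for the "one-liner" claim, here is a minimal C# sketch. The first endpoint is plain WCF; the relay endpoint assumes the CTP's Microsoft.ServiceBus assembly and its NetTcpRelayBinding, and the solution name and address format are illustrative, so details may differ from the shipped bits.

using System;
using System.ServiceModel;
using Microsoft.ServiceBus; // .NET Services SDK assembly (assumed CTP name)

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoService
{
    public string Echo(string text) { return text; }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(EchoService));

        // Regular on-premise endpoint.
        host.AddServiceEndpoint(typeof(IEchoService),
            new NetTcpBinding(), "net.tcp://localhost/echo");

        // The "one-liner": the same service exposed through the cloud relay by
        // swapping the binding ("mysolution" is a hypothetical solution name).
        // In the CTP you would also attach credentials for your solution; omitted here.
        host.AddServiceEndpoint(typeof(IEchoService),
            new NetTcpRelayBinding(), "sb://servicebus.windows.net/services/mysolution/echo");

        host.Open();
        Console.WriteLine("Listening, press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}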

In the current CTP release the focus lies on non-durable communication; fortunately Microsoft talks about implementing both durable multicast and something called anycast (where the first available subscriber gets the message). Personally I think the lack of durable multicast limits the service bus somewhat for B2B integration, since there are not many scenarios where a partner or customer will be satisfied with only getting their messages if their apps happen to be up and running. So we really need durable multicast before usage really picks up in this area.

Apart from providing a volatile multicast mechanism, we can also easily expose an on-premise service through the bus even though we are behind a firewall (it even works with NAT). Another really cool thing is the magic they use to establish a direct connection (you can configure this behaviour) even when the caller and callee are both behind NAT.

.NET Services - Workflow Services

This is another service that has great potential. Apart from providing us with a scalable hosting environment, we should be able to construct really elegant solutions using a workflow to orchestrate a set of services and provide new functionality based on that. Or we might simply want to massage the data somewhat before sending one or more versions of a message onto the service bus.

Unfortunately the CTP only allows fully declarative workflows; that means no custom activities, which will limit the usefulness somewhat. The subset of activities is rather small and contains the basic control flow stuff and a bunch of HTTP and XML helper activities that are new in the Azure Platform.

We will be able to host WF 3.5 and beyond, and the deployment process is really a breeze: you simply right-click on your workflow in Visual Studio to deploy it to the cloud. Once you have your bits deployed you can manipulate the workflow types and instances through a management portal; unfortunately this portal is not suitable for large volumes of running instances and we are left (at least as it looks now) to implement a better management client ourselves (luckily the management APIs are available to us).

.NET Services - Access Control

Provides us with a Security Token Service (STS) in the cloud that enables federated security by integrating with a wide variety of sources such as Active Directory, Live ID, CardSpace and, in the future, OpenID as well.

It works with the service bus, Workflow Services and SQL Services, providing us with a consistent access control model throughout the breadth of the building block services in the Azure Platform.

There is a lot happening in this area with the "Geneva" framework and server offerings which I haven't had the time to drill down into (honestly, security is a necessary evil :) ... nah, but it is not as fun as the workflow and service bus stuff, so getting around to the details isn't at the top of the list yet).

SQL Services

Aims to provide us with a database in the cloud. Currently it is very similar to the storage offerings in Windows Azure, with the main difference being that SQL Services is built upon SQL Server (as the name implies). The way to access the data is through a REST API (or, if you like, ADO.NET Data Services).

The storage model available now is really a hierarchical model that looks like this: at the top we have an authority, which can contain one or many containers, which in turn consist of one or many entities that are collections of typed name/value pairs. Right now we are limited to the following scalar types: string, binary, boolean, decimal and dateTime.
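To make the hierarchy concrete, here is a small C# sketch of that flexible entity model using plain objects (this is not the actual SSDS client API, just the shape of the data): an authority holds containers, a container holds entities, and an entity is simply a bag of typed name/value pairs.

using System;
using System.Collections.Generic;

// Plain-object sketch of authority -> container -> entity; property values are
// limited to the scalar types mentioned above (string, binary, boolean, decimal, dateTime).
class Entity
{
    public string Id;
    public Dictionary<string, object> Properties = new Dictionary<string, object>();
}

class Container
{
    public string Id;
    public List<Entity> Entities = new List<Entity>();
}

class Authority
{
    public string Id;
    public List<Container> Containers = new List<Container>();
}

class EntityModelDemo
{
    static void Main()
    {
        var invoice = new Entity { Id = "invoice-42" };
        invoice.Properties["Customer"] = "Contoso";            // string
        invoice.Properties["Paid"] = false;                    // boolean
        invoice.Properties["Amount"] = 129.50m;                // decimal
        invoice.Properties["Due"] = new DateTime(2009, 1, 31); // dateTime

        var container = new Container { Id = "invoices" };
        container.Entities.Add(invoice);

        var authority = new Authority { Id = "myauthority" };
        authority.Containers.Add(container);

        Console.WriteLine("{0}/{1}/{2}", authority.Id, container.Id, invoice.Id);
    }
}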

In the future we will get support for more SQL Server functionality such as reporting, analysis and much more.

Tuesday, November 11, 2008

Windows Azure an introduction and what will it mean to the corporate developer? (Part 3)

This part of the series talks a bit more in detail about Windows Azure. The pictures I use in this post come from the excellent white paper Introducing the Azure Services Platform written by David Chappell.

So what is Windows Azure? Microsoft themselves define it like this:

Windows® Azure is a cloud services operating system that serves as the development, service hosting and service management environment for the Azure Services Platform. Windows Azure provides developers with on-demand compute and storage to host, scale, and manage Web applications on the Internet through Microsoft® data centers.
Or simply put, it's an operating system for the cloud, where the cloud is basically a group of servers (typically a large number of them). Initially it will be available as a service running in Microsoft's datacenters, although there were some hints that this might be made available for others to run in their own datacenters as well.
As the picture indicates, the central piece is the part called the fabric controller, which handles automated service management and makes sure that your applications are provisioned in the way you have specified.

These specifications are done using models that describe things such as topology information, health constraints, logical resources and so forth. As far as I could tell they are not done using the "M" language yet, but the intention is that eventually all of this will migrate into a cohesive whole. These models are then used to handle automated deployment and monitoring of your applications.

As you can see in the picture above there are two deployment roles in Azure, namely the Web Role (used for the public endpoints of your applications) and the Worker Role (used for async work, normally triggered by either listening to the service bus or polling a queue in the storage system).

In the current CTP release you can only deploy ASP.NET applications and .NET code; in the release version of Azure, Microsoft intends to provide the possibility to deploy PHP-based applications and support for unmanaged code as well. It is worth noticing that the code does not run with full privileges but in a special sandbox mode, similar to what you get from today's application hosting environments.

Along with all these goodies we get access to a scalable storage system as well; it's not to be confused with SQL Services, which intends to provide a database in the cloud. Much in the same fashion though, the access APIs are REST interfaces and are accessible both directly from your code running within Azure and from your on-premise applications. Very quickly, you can describe the different types of storage available like this:

Service data, provided by blobs, much like a regular file in your on-premise environment (in future versions we get support for file streams as well).
Service state, provided by tables, which aren't really tables :) kinda confusing, but it is simply a hierarchical structure consisting of entities/properties/named and typed values.

Service communication, provided by queues, which are exactly what they sound like: regular old-fashioned queues where the web role typically posts something that gets picked up by the backend worker role (see the sketch below).
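As a conceptual illustration of that hand-off (this is not the actual Azure storage API, just the pattern, and the in-memory queue stands in for the cloud queue service): the front end enqueues work items and the background worker polls them off.

using System;
using System.Collections.Generic;
using System.Threading;

class QueueHandOff
{
    // Stand-in for the cloud queue; in Azure this would be the queue storage service.
    static readonly Queue<string> WorkQueue = new Queue<string>();
    static readonly object Gate = new object();

    static void Main()
    {
        // "Worker role": polls the queue and does the asynchronous work.
        var worker = new Thread(() =>
        {
            while (true)
            {
                string message = null;
                lock (Gate)
                {
                    if (WorkQueue.Count > 0) message = WorkQueue.Dequeue();
                }
                if (message == null) { Thread.Sleep(100); continue; } // back off between polls
                Console.WriteLine("Processing " + message);
                return;
            }
        });
        worker.Start();

        // "Web role": accepts a request and drops a message on the queue.
        lock (Gate) { WorkQueue.Enqueue("resize-image:42"); }

        worker.Join();
    }
}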

You access everything through a fancy web portal, which looks nice in the CTP but leaves you with the impression that it will be a little painful to deal with a major installation, since it doesn't lend itself too well to large amounts of data.

Finally, one of the coolest things if you're a dev like me is that you get a development fabric, which is a complete simulation of Windows Azure and lets you test out your code in a distributed fashion before deploying to the cloud. Simply put, you get "the cloud on your desktop", fully integrated with your favourite IDE, Visual Studio.

I'm really looking forward to getting hold of an account and actually trying out the bits for real; unfortunately I was one of those poor sods who had to attend the PDC via streaming on Channel 9 :) which means I have to wait a little longer (if by any chance anyone at Microsoft reads this, feel free to help speed up that process).

Monday, November 10, 2008

Windows Azure an introduction and what will it mean to the corporate developer? (Part 2)

The previous post about the Azure Platform was more on the positive vibe, so this time I thought we would have a go at the negative stuff, or at least some of the concerns buzzing around in my head. I am approaching this from the perspective of how this could benefit the LOB applications we produce where I work, and the problems we might run into while trying to incorporate the Azure Platform into our overall architecture. That said, let's get down to business.

When it comes to outsourcing in general (which applies in both SaaS and PaaS scenarios) we are confronted by issues concerning trust. We need to have a really trusting relationship with our partners to be able to put parts of our business in their hands. Basically the trust issue boils down to two things as I see it: data and availability.

Data: the concern here revolves around data ownership. If we lock into one vendor's storage solution it will become very difficult to move that data to another provider at a later point in time (I doubt we will get much help from the vendors in a migration effort). Regulatory issues are also a big concern for many companies.

Availability: although one has to remember that no system can in practice really guarantee 100% uptime (there are simply too many fault factors in the equation), I see one major difference, which is that we are in control when a failure occurs on-premise (or at least we like to think we are). Whether or not we actually are in control, we do control the triage process of how to go about resolving the problem. I'm pretty sure this process will look completely different when left in the hands of a service provider (I'm guessing the amount of dollars spent will affect the priority you get, and there is nothing wrong with that, just simple economics).

Below you can find a few links concerning availability issues from the current players such as Amazon, Google and Salesforce, and I am worried that we will see the same for Microsoft as well once the load starts to increase for them:

Amazon EC2 & S3

Amazon Web Services Gets Another Hiccup
Amazon's S3 utility goes down
Google App Engine

Google's App Engine Breaks Down
Google explains why App Engine failed
Salesforce

Salesforce.com's hiccups
Salesforce.com down…again
Another issue that I think is even more important is the fact that in a multi-tenant environment we have problems such as resource exhaustion (it only takes one bad apple to spoil the bunch). In the Azure Platform they intend to tackle this with a configuration model that specifies things such as intended CPU load and average response times; using this information they will automate the process of scaling out the application when needed. However, I still don't see this handling the poor suckers that end up on a machine with a bad app!

Finally, moving into the cloud will have some effect on the way we write applications; many of the aspects we know and love from writing scalable solutions on-premise just become even more critical. Things such as stateless execution and node affinity (or rather the lack of it) will be absolutely necessary to handle re-provisioning when a catastrophic failure occurs. Upgrading your application will become much more difficult since you'll have to build your applications so they have no downtime; therefore we have to be both forward and backward compatible in interfaces, implementation and storage schemas (and believe me, if you don't have any experience in this area, it is hard not to break anything).

All in all I'm looking forward to tackling these issues in more detail when I get back from my paternity leave, although I expect I will not be able to drop the subject entirely before then, so I might write something more along the lines of my thoughts on the design considerations for using the Azure Platform in conjunction with an LOB application running on-premise.

The next part in this series will look a bit more at the details concerning Windows Azure, followed by the final installment taking a closer look at the building blocks closest to my heart: .NET Services and SQL Services.

Sunday, November 9, 2008

TFS Power Tools October 2008 are available for download

The October release of the TFS Power Tools is now available for download here; unfortunately it is a bit delayed since it's already November :)

You can read more about it in this post by Brian Harry.

My personal favourite this time is the Team Members feature, which lets you interact with your team members straight from within the Visual Studio IDE. You can also do things such as viewing their check-in history, shelvesets and pending work.

Windows Azure an introduction and what will it mean to the corporate developer? (Part 1)

I'm sitting here with a beer listening to some nice rock music, the family is sound asleep, and I figure I might as well start writing a bit about my thoughts on the major theme of PDC08 last week ... namely the Azure Services Platform ... This post is probably going to end up being a multipart posting, since there is simply so much exciting new stuff to talk about and it will change the way we write software on the Microsoft platform.

Over the last couple of years there has been a lot of discussion about delivering "Software as a Service" (SaaS); we have seen some successful attempts at this, the most noteworthy surely being Salesforce delivering CRM software as a service in the cloud. More recently we have seen an evolution of this into "Platform as a Service" (PaaS), pioneered by Amazon with their EC2 (Elastic Compute Cloud) and S3 (Simple Storage Service).

Update 2008-11-21: After looking around a lot more at the various cloud offerings out there, and with respect to the comment below, I feel I should clarify my previous statement about Amazon EC2. It is of course an IaaS that is itself part of Amazon Web Services (AWS), which is more correctly called a PaaS offering.

Well anyway, last Monday (October 27th, 2008) Ray Ozzie announced the Azure Services Platform, which is Microsoft's step into the cloud computing arena.


As you can see from the picture, the platform consists of four major parts, where Windows Azure is the core component that everything else builds upon. So what exactly is Windows Azure then?


It's an operating system for the cloud. Normally when talking about the cloud we are talking about the internet, but Windows Azure is really not limited to that; we could just as easily apply the technology behind Azure to any larger datacenter. It's all about efficient management of resources and global scalability and reach. Apart from a hosting environment for our applications we get a new, highly scalable storage system and a set of building block services:

.NET Services (previously known as BizTalk Services) will provide us with things such as federated access control, a hosting environment for our Windows workflows and, last but not least, a service bus for the internet, which will play an important role in what we can do in regards to integration between companies.

SQL Services (previously known as SQL Server Data Services) will provide us with a database in the cloud. Initially the offering is rather limited and not that different from what's offered via the Azure storage system, but we will get more and more capabilities here.

Live Services: these are no newcomers, you get your basic stuff such as Live ID and Live Contacts. The really interesting piece here, apart from Live ID, is probably the newcomer Live Mesh, which will provide a platform for synchronizing data between all your devices.

The platform also contains more traditional SaaS offerings such as SharePoint Services and Dynamics Services. Well, enough with the details (there will be plenty more of those in future parts of this posting); let's get down to what this can mean for people developing software. As I see it there are at least three different users of this platform:

Startups

This is probably where a platform such as this really shines. Imagine all the creative people that can realize their ideas by only having to invest the time to write the code. We can skip the part where we have to build our own datacenter and staff it with expensive and hard-to-find people. Or, once you're in business, you do that really expensive Super Bowl ad and get swamped with customers the next day; instead of building a datacenter for the worst-case scenario we can just turn a knob and get some more juice ... you just gotta love it!

Small to Midsize companies

Similar to the startup, we do not have to build up a huge IT department to get our IT infrastructure in place; we just pay as we go. The main difference here is that we are most likely not going to need the huge scale that the next Facebook would need; in this segment it's all about TCO and operational costs as opposed to capital expenditure.

Corporations

All the stuff we see for the mid-size company applies here as well, but I think there are some interesting scenarios here for producing hybrid solutions, not just putting parts of the application in the cloud. Maybe there are possibilities for pushing load into the cloud based on expected flash load while still maintaining control of the application on-premise (there will be several challenges with this and I will post more as I try it in real life).

Another really interesting idea is that we will have more opportunity to actually try out ideas, since they will not incur the heavy capital expenditure of setting up an operational environment; thus we will be able to produce a real working application as a proof of concept, which we can then bring in-house if needed.

So when should we expect all these goodies? Microsoft is talking about a commercial release some time during 2009, which will include multiple datacenters with global distribution.

For more detailed information regarding "Azure" check out the following PDC08 sessions:

Or if you are short on time read the following articles to get a quick overview of what it's all about:

Another really cool thing about the whole Azure platform is that Microsoft is adamant about this being a cross-platform environment; there is plenty of talk about the ability to host languages other than .NET in future versions, but we already have accessibility from Ruby (.NET Services for Ruby) and Java (.NET Services for Java).

Finally, be sure to check out Google's App Engine if you are serious about learning more about cloud computing. See you in the next part of this post, where I'll talk about my concerns in regards to how this will play out for a corporate application architect.

Thursday, November 6, 2008

Papa's Got a Brand New Bag ... A quick look at Windows Workflow 4.0

WF 4.0 gives us a completely rewritten workflow engine! Personally I find it a little scary when Microsoft shifts a product around in this fashion; fortunately the changes they are making are really promising and might be just what is needed to get the adoption of WF to really take off.

So what are we getting with this rewrite then?

* Fully declarative model: it is now possible to write workflows composed entirely in XAML.
* We get a new activity execution model that enables activities to have variables for storing data and arguments for passing data in and out of the activity. Basically it looks very much like a regular function signature with the possibility of having locally scoped variables within the function body (although these variables are visible when walking down the parent/child chain, sort of like scoped global variables). See the sketch after this list.
* Flowchart-based workflows, which let us get around some of the limitations of sequential workflows, such as going back in the workflow after something has occurred. Of course this was possible using a state machine workflow, but not nearly as cleanly as a flowchart does it.
* Re-hosting the workflow designer has gotten a real overhaul and gone from being a major undertaking guided by a 20+ page document to being a four-lines-of-code experience.
* Totally rewritten WPF designer.
* Major performance improvements.
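As a rough sketch of what that activity model looks like (based on how WF 4.0 eventually shipped in System.Activities; the PDC08 CTP bits may differ in details), an activity declares arguments much like a function signature:

using System;
using System.Activities;
using System.Collections.Generic;

// A code activity with an input argument and a result, roughly the
// "function signature" shape described above.
public class AddTax : CodeActivity<decimal>
{
    public InArgument<decimal> Amount { get; set; }

    protected override decimal Execute(CodeActivityContext context)
    {
        decimal amount = context.GetValue(Amount);
        return amount * 1.25m; // add 25% tax
    }
}

class Program
{
    static void Main()
    {
        // Invoke the activity, passing arguments in and getting the result out.
        decimal result = WorkflowInvoker.Invoke(
            new AddTax(),
            new Dictionary<string, object> { { "Amount", 100m } });
        Console.WriteLine(result);
    }
}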
This is by no means all the stuff available in WF 4.0, but I'll have to get back with further postings after actually having spent some time with the bits. One thing that really bugs me though is backward compatibility with previous versions of WF; I'm worried that we will have to port our code by hand if we have not limited ourselves to strictly XAML-based workflows (which isn't that easy to do in the present version).

If you want more details on what's coming, check out these session recordings from PDC08:

WF 4.0: A First Look
WF 4.0: Extending with Custom Activities
WCF 4.0: Building WCF Services with WF in Microsoft .NET 4.0

Wednesday, November 5, 2008

DropBox an alternative to Live Mesh

A while back when Microsoft announced their Live Mesh service I was really excited, until I signed up for a beta account and was informed of the sad fact that once again we poor Swedes have to wait, since it's a US-only thing for starters...

Anyway, I stumbled upon an alternative called DropBox which offers a similar service for syncing and sharing files. It works in a heterogeneous environment with clients for Windows, Mac and Linux. You get 2GB for free and then you can upgrade to 50GB for $99/year.

However, it is not a full replacement for Live Mesh since at the moment they have no public API (although they are hinting that there will be one soon), and DropBox currently lacks support for mobile devices. Live Mesh also has the whole Live Desktop experience and deep integration with the other services in the Live family.

Creating amazing presentations using a zoomable canvas (pptPlex)

I've been focusing more and more on how to create efficient and good-looking presentations over the last year, and recently I stumbled upon a really amazing add-in called pptPlex which lets you create a zoomable canvas for your presentation.

The add-in comes from the "Office Labs" team at Microsoft and lets you create a presentation with a background that presents the bulk of your slides in an intelligent way, giving the audience an overview of what the talk is about, and then you can start zooming in on the various sections.

These presentations can also be very interactive, since it becomes very easy to quickly jump between sections in your presentation without having to break out of presentation mode. Another really neat thing is that we can zoom in on the stuff in our slides, so if we are presenting charts and such that are high on detail, a quick mouse click will let you blow up the numbers on the screen.

You could also use this technique to load up your presentation with all the esoteric stuff that you might or might not need and shove it away in a corner of the canvas; then if a question arises you can quickly zoom in and bring up a slide about it.

If you want to see more of these kinds of presentations, be sure to google TouchWall, which is a multitouch-based presentation screen that lets you do really cool presentations (although they are not yet available to the average mortal); you can view a demo made by Bill Gates earlier this year here.

Monday, November 3, 2008

Finally an application server for .NET (Codename "Dublin")

I've been waiting for this since the day Microsoft announced .NET without providing a .NET-specific host environment. We have been left to host our components in COM+ for several years now; this has worked OK, but in my personal opinion it has led to too tight a tie to a technology that had been declared legacy. This has, at least for us, led to a slower adoption pace of .NET than we would have liked.

Anyway, enough with the history, let's look forward. But before we do, let's resolve any issues concerning BizTalk. Microsoft is very clearly stating that "Dublin" is NOT a BizTalk killer... BizTalk is still Microsoft's solution for integration and will continue to be released on a bi-annual schedule as it looks now. Also, there are no plans to add functionality to "Dublin" for rich transformations like BizTalk is capable of.

Dan Eshner held a very good session at PDC08 called "Dublin": Hosting and Managing Workflows and Services in Windows Application Server about how "Dublin" works; I've taken the liberty of using some of the pictures from his slides. "Dublin", or Windows Application Server Extensions which is the current official name, is a set of extensions that builds on top of Windows Process Activation Service (WPAS/WAS) and lets us host both workflows (WF) and services (WCF).

The "Dublin" mantra is ... IT JUST WORKS ... and I must say that the stuff we saw at the PDC08 looks promising. One really neat feature that aligns very well with this ambition is the import/export feature, which lets us deploy our binaries along with the correct configuration with a simple click. Under the covers this feature uses a new tool which is already in Beta 2 called MSDeploy you can read more about this tool at the MSDeploy team blog.



As you can see in the picture above "Dublin" consists of:

A runtime database storing all the configuration and data concerning durable services as well as tracking and monitoring.
A management API built using PowerShell cmdlets, which makes it very neat for operational tasks since we can very easily script complex scenarios. We also get a nice set of management tools that utilize these cmdlets, built into the IIS management console.
A set of services in the middle consisting of:
Hosting: apart from dealing with the actual hosting of the workflows and services, we will get support for discovery protocols and a service that will look for orphaned service calls and restart them if a catastrophic failure occurs (I'm guessing this is a config option since it requires some design considerations when implementing the service in question; for instance, we would need to deal with the fact that the service needs to be restartable and can't leave partially finished work).
Persistence: we get a new and improved persistence provider for our workflows which is now cluster-aware, so we can have multiple boxes handling the same queues without stomping all over each other.
Monitoring: we get support for monitoring and tracing both workflows (WF) and services (WCF).
Messaging: this is super cool, we get a built-in forwarding service which lets us do things such as routing based on data in the message (much like property promotion in BizTalk), and we also get support for message correlation based on data within the message payload.
Another cool management feature is the support for persisted instances, which is very similar to the way BizTalk manages this; for example, we can view persisted instances that have failed, grouped by exception, and much more (see the webcast by Steven W. Thomas mentioned below for more details on how this works).

Nothing in the presentations at PDC08 talked about built-in declarative transaction support such as we are used to in the COM+ environment; however, Dan Eshner confirmed in the Q&A that it is on the roadmap for the product, just not in v1.

There is also integration with the "Oslo" modeling initiative, which will enable us to model our service configurations using M and then deploy them directly to "Dublin".

So when will we see this RTM? At PDC08 they talked about a release about 3 months after Visual Studio 2010, which in turn has been indicated to appear around the end of 2009 (which would correspond nicely with how Microsoft has released VS previously); however, these dates are purely speculative. It will be released as a download for Windows Vista, Windows Server 2008 and Windows 7, and it will be part of the operating system in future versions.

For more information take a peek at:

Steven W. Thomas of biztalkgurus.com has produced a webcast which will guide you through what the "Dublin" management extensions in IIS Manager will look like.
First Look at Windows Application Server (Dublin)
As always, David Chappell has written a good overview of "Dublin" and a bunch of related stuff (this is a very quick introduction to the technologies, it doesn't go very deep).
Workflows, Services, and Models -- A First Look at WF 4.0, “Dublin”, and “Oslo”
Finally, I found a rather good FAQ-like document at Microsoft which gives some more insight into what's going on with "Dublin" in conjunction with .NET 4.0 (also a small document).
Riding the Next Platform Wave: Building and Managing Composite Applications

I hope this has given some insight into "Dublin"; look for further postings in the future since this is an area I intend to dig deeper into.

Thursday, October 30, 2008

Now we are really talking... VSTS 2010 promises major improvements in software quality! (Part 3)

Back again with the last installment of this blog post going through the big changes in VSTS 2010. Let's get on with the show. I know I already covered most of the changes in the test department in the first part, but after watching the session about Team Lab I figured it was worth a few more lines; you can find the session here.

Project Management

There are so many new features here that I wish I had had a year ago when starting to adopt the project management parts of TFS at our company (which has been a bumpy ride). The fact that we are getting hierarchical work items has been known for a while now, but the demos shown at PDC still made me a bit warm inside :) ...

It all boils down to the linking stuff: we now have support for both parent/child and predecessor/successor relationships, and we can even create our own link types if we like; this also means that we have full support for Microsoft Project plans now.

The work item queries have gotten some new stuff as well. A small but nifty feature is query folders, where we can start to group queries and put permissions on those groups; this will make it easier for our team members to find the right queries and not get lost in all the noise in the query list.

A far more important change here is the fact that we can now query based on links, and we can put conditions on both the right and left hand sides of the link as well as on the link itself. This enables queries such as "give me all work items that still have no test cases assigned to them", to give one example. We also get some new filtering possibilities such as "in group", and even though it's not in the latest CTP we will eventually get the possibility to filter based on other fields and not just on constants as today; an example of this could be "give me all tasks where completed time is greater than estimated time".

We get some new controls for use within the process templates, such as a really cool links control that can display and manage links on your work items, and we also get rich text support (although the only thing that gets me going here is that URLs are recognized automatically when typing).

We have gotten some major improvements in the Excel 2007 integration: we now have a ribbon for accessing the team features, the workbooks we produce containing TFS data support conditional formatting that actually sticks when the TFS data is refreshed, and we even get to include our own columns of data in the spreadsheet if we want. Another cool Excel feature is that we can handle hierarchical structures as well.

There is also a new Excel workbook that ships with the product and helps out with the day-to-day stuff needed in a project (it's aimed at agile project management but has a lot of value for anyone planning project resources), such as information about the iteration backlog and capacity planning for the assigned resources, so we can quickly see how the load is balanced between our co-workers in the project. A burndown chart is also included. There are more features in this workbook, but I'll cover them in a more detailed post once I have gotten around to playing with the bits a bit.

Something that is really, really cool though is the support for basing an Excel report on a WIT query that you can play around with to your heart's content. Now we don't have to involve a developer to get a report done (although we developers are still valuable when it comes to more complex reports...).

The project dashboard/portal has been totally revamped, but this is not included in the CTP bits.

Lab Management

Lab Management is a new SKU in the Team System family which focuses on providing a way to manage your test servers in an efficient fashion, provided that you run them using virtualization.

The product will consist of agents on the lab machines and a controller service, probably running on the application tier. The user interface is hosted in the brand new WPF-based client for test management, codenamed "Cameo".

Basically, what this boils down to is a virtual machine manager of sorts that has been integrated into the workflow of the ALM process. We can create libraries of virtual machines and then create test labs based on these images; the labs can easily be handled when testing and reporting bugs, and the same environments can then be used for bug reproduction.

Once we have a lab in place we can start with the really cool stuff. When we report a bug from "Cameo" we have the possibility to include a link to the lab environment in the bug report, based on a snapshot that we create in conjunction with reporting the bug. The developer then loads up the bug in VSTS and clicks the link to the environment, which brings up some options. We can either revert to the snapshot (this will affect the state of the lab for the testers as well) or we can connect to it as is. In the final product we are to see another option that will enable us as developers to create a copy of the lab in which to reproduce and debug without actually disturbing the testers (this will be achieved with some sort of network fencing technique that allows multiple machines with the same name and IP to run simultaneously).

Although it looks very nice, the product has a really steep prerequisite list and requires a lot of commitment (you will need to have someone in charge of lab management):

System Center Operations Manager
System Center Virtual Machine Manager
Windows Server 2008 running Hyper-V, or VMware ESX Server
It will also require vast amounts of storage space due to the number of snapshots and copies that will be generated.

I am a bit disappointed that Microsoft hasn't included some sort of deployment engine yet, but I expect we are going to see this in the future as well, since it would really complete the package. In the presentations at the PDC we only see some new activities for restoring and snapshotting the lab environment; the rest is left to xcopy deployment, which just doesn't cut it in the real world.

You can expect future posts regarding this product from me, since it lies close to heart in our efforts at my company to automate our lab management.

Wednesday, October 29, 2008

Now we are really talking... VSTS 2010 promises major improvements in software quality! (Part 2)

Supercalifragilisticexpialidocious... I just got back home from watching the musical Mary Poppins at the local opera with my teenage kids. It was a real treat; the performances were outstanding and the play itself leaves you with a really good vibe that anything is possible if you just set your mind to it.

Anyway, earlier today I spent my time divided between following the PDC online, taking the middle son to the doctor and feeding the whole family :) ... I managed to watch two talks on VSTS 2010, the one on TFS by Brian Harry and one on agile development using TFS by Lori Lamkin, and boy are we in for a treat. Yesterday I blogged about the news in the Testing and Architecture areas; today it's time for build automation, parallel development and project management.

Build Automation

Going from TFS 2005 to TFS 2008 we got a more or less rewritten build automation system, and it seems this is happening again: this time the build script is a Windows Workflow, which allows us to do all sorts of things such as parallel activities and much, much more. Apart from the fact that we get a graphical view of the actual build script and all the strengths workflow brings to the table, I'm a bit worried how this will affect people like myself who have invested heavily in MSBuild-based build projects (I'll get back to this topic in future postings when I have had more time to play around with the bits). A rough sketch of what a custom workflow build step could look like follows below.
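
Just to give a feel for what "the build script is a workflow" means in practice, here is a minimal sketch of a custom activity that could be dropped into such a build workflow. It is based on the Workflow 4.0 CodeActivity model; the activity name and arguments are hypothetical:

    using System;
    using System.Activities;
    using System.IO;

    // A custom build step: writes a small stamp file into the drop folder.
    public sealed class WriteBuildStamp : CodeActivity
    {
        // Arguments are bound in the build workflow, much like properties
        // used to be set in the MSBuild-based TFSBuild.proj.
        public InArgument<string> DropLocation { get; set; }
        public InArgument<string> BuildNumber { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            string drop = context.GetValue(this.DropLocation);
            string buildNumber = context.GetValue(this.BuildNumber);

            File.WriteAllText(
                Path.Combine(drop, "buildstamp.txt"),
                string.Format("{0} built {1:u}", buildNumber, DateTime.UtcNow));
        }
    }

The nice thing is that activities like this can be composed with the built-in ones, run in parallel and so on, which is a lot harder to express in plain MSBuild.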

Another cool new feature is the build agent controllers, which let you pool build agents so that you don't have to dedicate a particular machine to a build project but rather have a bunch of them serving you on demand. A very nice touch here is the ability to tag the agents and then perform conditional evaluations based on these tags in your build agent selection process.

Also, if you are a fan of continuous integration you are going to love the feature called gated check-ins, which lets you configure a build trigger to occur prior to actually committing the changeset to version control; if the build breaks, the changes are prevented from making it into the currently stable branch. You could compare the two to optimistic and pessimistic locking strategies, CI being optimistic and gated check-ins being pessimistic.

Prior to VSTS 2010 we have had to endure a rather messy build report; although it was complete and detailed, it was a pain to work with. We have now gotten a completely rewritten report that includes features such as a minimap with errors and warnings highlighted, to quickly traverse the vast amount of data in the log. The summary section quickly gives you the details on any errors and warnings, so there's no more digging through log files for that. The histogram over the last few builds at the top is also really neat; it gives you a quick indication of what state the build has been in and how long you should expect to wait for completion.

Finally, we have a new companion report to the Build Quality Indicator called Build Success Over Time, which gives us a nice heatmap of the build status for the last month. I think this report could be really useful as an information radiator on a flatscreen in a project room.

Parallel Development

When working with branching and merging in TFS there have been some challenges in knowing the exact state of things, which has led to people being wary of using forward/reverse integration patterns to enable a good environment for parallel development.

Now we have gotten a whole slew of new features to remedy this. First, rollback is incorporated into the GUI, and when we get conflicts during the merge process the conflict resolution is no longer modal; you will instead find it incorporated into the pending changes window, which will save tons of time when tracking down problems in merge conflicts.

A branch has gotten some elevated status in Source Control Explorer; it is not just a folder anymore. We get a specific icon to visually indicate that it is a branch, and then we get some properties such as the possibility to add a description and assign an owner.

The annotate feature, which was already great, has gotten even greater: now we will not just see that a merge brought a change into the file in our branch, but rather get the exact changeset information that the changes originated from, even though they were not performed in our current branch. This is great stuff!

The final feature I want to point out is one I actually intended to write an implementation of myself (I still have the code for it, so I might decide to dust it off and package it up for use in TFS 2008), although my fancy graphics would have been somewhat challenged in comparison to what we will get in VSTS 2010. The feature I'm blabbing about is the new branch visualization available through the show history and track changeset actions. Now you get a hierarchical view of the changesets and their relationships, including the full path information, directly in the query results. And track changeset gives you the possibility to visualize the changeset from both a timeline and an organizational view. The timeline view shows you which branches have incorporated a changeset and when, and the organizational view lets you see the parent/child relationships along with the direction the changes have travelled between the branches.

A super nice feature in these new branch visualizations is that if a changeset is missing from a particular branch, we can simply drag and drop it onto the branch we want to incorporate it in and voilà, it will trigger a merge.

...Phew... this post is starting to get a bit long and I am starting to get tired, so I'll continue with the project management stuff tomorrow. Take care until then!

Tuesday, October 28, 2008

Now we are really talking... VSTS 2010 promises major improvements in software quality! (Part 1)

It's been a while since I last posted :) ... I had a silly notion about keeping up the blogging during my parental leave, but it didn't really turn out that way. I have been spending the last 4 months with my kids and actually managed to read a bunch of non-technical books, which was very nice and relaxing.

Anyway, let's get on with the real post. I figured I'd start following the developments on VSTS 2010 again now that the newest CTP has been released in conjunction with Microsoft PDC08 (the only drawback with being on parental leave has got to be that I was unable to attend this conference).

I'm really excited about some of the new features in VSTS, which promise a really huge leap in the ability to produce high quality software if applied correctly. The current release focuses heavily on the Architecture and Test Editions of the product.

ARCHITECTURE EDITION

I have never really bothered with this edition of VSTS before; historically it has brought way too little to the table for my taste. But boy have I changed my mind: I'm thrilled about the possibilities of using the new Layer Diagram to validate architecture compliance during the build process, which is really nice. Also, the layer diagram is something most of us produce anyway, so it's good to see that we can use it for something other than presentations of the conceptual architecture.

The Architecture Explorer, which lets us visualize dependencies between namespaces and classes, also looks very nice and will give us a good way to investigate, as well as produce documentation about, the dependencies of our solutions. The visualizations even carry semantics with them; for instance, if there is a heavy dependency between two namespaces, the line illustrating the link is thicker than if the dependency just concerns a class or two. The links let you view information about the dependency and navigate to the code that is causing it by clicking and drilling down in the diagrams.

The fact that Microsoft has joined the OMG and finally included UML support in the product is also nice, although what I'm really excited about here is the feature that lets you put the cursor in a method, say "generate sequence diagram" and voilà, VSTS will parse the code and produce a diagram for us. We can even filter the parsing by specifying the call depth and excluding namespaces that we don't care about. I'm guessing we can save a lot of time looking for bugs and trying to improve our codebase using this feature.

Finally, if I understand this correctly, we can produce these diagrams using the Architecture SKU and they will be read-only in all the other SKUs, which will make them really useful for illustrating problem areas in the code.

TEST EDITION

The manual testing parts of VSTS have not really been up to speed with competitors such as HP Quality Center, but in this release that is changing with a totally new application called Cameo that our testers can use to plan and perform their tests. The really neat thing about this is that we as developers will get a slew of new information along with the bug report. Microsoft is aiming at eliminating the no-repro effect (or, as it's also known, "hey, it works on my machine"). Some of the information we get is a video recording of the actual test run, system information concerning service packs and such, and a historical debugger log (if you haven't looked at this you have to take a peek at it; you get a black box flight recording of what your application did before the failure, which you can play back after the fact).

When we find and fix our bugs, or simply make a change to our code, we have a new feature called Test Impact Analysis which analyses the changes in the background and produces a list of impacted tests, so that you don't have to run all your tests for a small change. This can improve productivity a bit, but I think it will really shine when you start to consider your changes and their impact on the test suite (provided that you have fairly good coverage of your code). Say for instance that you are changing a function and your test impact analysis goes off the chart and gives you a list of thousands of tests; you might want to consider getting a second opinion and bringing in someone more senior on the change.

Another really nice feature we get is the concept of coded UI tests, which in itself is nothing new (and I'm guessing we will have some v1 issues here as well), but what is really exciting is that the test code produced to drive the UI is regular .NET code, so it can be C# or VB.NET; finally we get rid of all those nasty script-based tests (as far as I know this is unique in the marketplace). Another nice thing about the coded UI tests is that they build upon the existing unit test framework which we already know and love (at least I do). All in all it looks really nice, and a sketch of what such a test could look like follows below.
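
Here is a minimal, hypothetical sketch of what a coded UI test might look like in C#, based on the UI testing API shown around the CTP (the application path, window and control names are all made up):

    using Microsoft.VisualStudio.TestTools.UITesting;
    using Microsoft.VisualStudio.TestTools.UITesting.WinControls;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [CodedUITest]
    public class LoginScreenTests
    {
        [TestMethod]
        public void LoginWithEmptyUserName_ShowsValidationError()
        {
            // Start the (hypothetical) application under test.
            ApplicationUnderTest app = ApplicationUnderTest.Launch(@"C:\Demo\OrderClient.exe");

            // Controls are located by search properties instead of screen
            // coordinates, which makes the test far less brittle than a script.
            WinWindow mainWindow = new WinWindow(app);
            mainWindow.SearchProperties[WinWindow.PropertyNames.Name] = "Order Client";

            WinButton loginButton = new WinButton(mainWindow);
            loginButton.SearchProperties[WinButton.PropertyNames.Name] = "Login";
            Mouse.Click(loginButton);

            WinText validationError = new WinText(mainWindow);
            validationError.SearchProperties[WinText.PropertyNames.Name] = "UserNameRequired";

            // Since this builds on the regular unit test framework we can use
            // the asserts we already know.
            Assert.IsTrue(validationError.Exists, "Expected a validation error for an empty user name.");
        }
    }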

... It's getting late and there are a ton of new features I haven't talked about, so I guess I'll have to get back to you later with those.

You can get the bits from the new Visual Studio 2010 and .NET Framework 4.0 CTP Feedback site, or you can grab them using FDM following the instructions in Brian Keller's post A More Reliable and Faster Download Experience for VS2010/ VS08 VPC's.

Some links to get you up to speed on the coming features:

Be sure to check out Cameron Skinner's presentation from PDC08 in LA, which is available online.

Radio TFS is currently running a series of podcasts titled Road To Rosario which I recommend you listen to:
Road To Rosario - Architect
Road To Rosario - Test
Road To Rosario - Developer

Finally, Channel 9 had a special on Visual Studio Team System 2010; you can find more info about it here.

Tuesday, February 12, 2008

TFS 2005 to 2008 Upgrade Experience

I've been meaning to write something about this for a while now. On Friday the 11th of January we migrated our existing TFS environment from TFS 2005 to 2008. All in all it was a rather painless transition; the bumps we ran into were related to the fact that we are running our TFS installation on a different port than 8080.

The first issue, though, was a typo in the installation help. We figured it would be best to follow the latest version of the documentation. Before we started our upgrade we spotted that it stated that you needed to remove TFS from the application tier on a single-server installation, which was not the case in the original documentation on the media. After some asking around we came to the conclusion that this was only needed on the database tier in a dual-server installation, as the original instructions stated.

Even though we had performed numerous test upgrades, we managed to forget that we did not run on a different port in the test environment. The upgrade does not pick up the configuration settings from the previous installation, so it tried to communicate with TFS on the default port, causing it to fail miserably. At this point we had spent about 3 hours and were getting a bit worried that we were going to have to perform an emergency restore.

So we gave it another shot: we unpacked the image onto the local hard drive and modified the port settings in msiproperty.ini (you will find it under the AT folder). Fortunately the upgrade of the database is performed transactionally, so the database upgrade was rolled back and we were able to perform that step again.

The fact that the database upgrade is quite a large operation is something you need to consider when upgrading, since it consumes a lot of disk space. If you have a 15 GB database you should expect at least the same amount to accumulate in your database's transaction log during the upgrade, so make sure you have plenty of disk available wherever you store your transaction logs.

After we changed the port settings and ran the setup again, it took ages to perform the upgrade. It did get through it all, but I was actually considering killing the upgrade and restoring TFS 2005; instead we started a database trace against the databases, managed to verify that there was activity, and let the upgrade continue. After about 6 hours we had successfully upgraded our TFS installation.

Now that a month has passed, we have still not encountered any major issues with the 2008 version and everything is running along smoothly. I will be posting some from-the-trenches posts on any issues we bump into.