Review of a Trading System Project


In late 2007 I wrote a series of articles on Microsoft’s Composite Application Block (CAB).  At that time I was running a team that was developing a user interface framework that used the CAB.

We’re now four years on and that framework is widely used throughout our department.  There are currently modules from eleven different development teams in production.  There are modules that do trade booking, trade management, risk management, including real-time risk management, curve marking, other market data management, and so on.  All of those were written by different teams, yet it appears to the user that this is one application.

This article will look back at the goals, design decisions, and implementation history of the project.  It will look at what we did right, what we did wrong, and some of the limitations of the CAB itself (which apply equally to its successor, Prism).

The framework is a success.  However, it’s only a qualified success.  Ironically, as we shall see, it is only a qualified success because it has been so successful.  To put that less cryptically: many of the problems with the framework have only arisen because it’s been so widely adopted.

Hopefully this article will be of interest.  It isn’t the kind of thing I usually write about and will of course be a personal view:  I’m not going to pretend I’m totally unbiased.

Design Goals

Original Overall Goals

The project had two very simple goals originally:

  1. A single client application that a user (in this case, a trader) would use for everything they need to do.
  2. Multiple development teams able to easily contribute to this application, working independently of each other.

I suspect these are the aims of most CAB or Prism projects.

Do You Actually Need a Single Client Application?

An obvious question arising from these goals is why you would need an application of this kind.

Historically there have tended to be two approaches to building big and complex trading applications:

  1. The IT department will create one huge monolithic application.  One large development team will build it all.
  2. The IT department breaks the problem up and assigns smaller development teams to develop separate applications to do each part.  This is a much more common approach than option 1.

Both of these approaches work, and both mean you don’t need a client application of the kind we are discussing.  However, neither of these approaches works very well:

  • Monolithic applications quickly become difficult to maintain and difficult to release without major regression testing.
  • Equally users don’t like having to log into many different applications.  This is particularly true if those applications are built by the same department but all behave in different ways.  It can also be difficult to make separate applications communicate with each other, or share data, in a sensible way.

So there definitely is a case for trying to create something that fulfils our original design goals above and avoids these problems.  Having said that, it's clearly more important to actually deliver the underlying functionality: delivering it in several separate applications matters less than failing to deliver it altogether.

More Detailed Goals

For our project we also had some more detailed goals:

  • Ease of use for the developer.  I have personally been compelled to use some very unpleasant user interface frameworks and was keen that this should not be another one of those.
  • A standardized look and feel.  The user should feel this was one application, not several applications glued together in one window.
  • Standard re-usable components, in particular a standard grid and standard user controls.  The user controls should include such things as typeahead counterparty lookups, book lookups, and security lookups based on the organization’s standard repositories for this data.  That is, they should include business functionality.
  • Simple security (authentication and authorization) based on corporate standards.
  • Simple configuration, including saving user settings and layouts.
  • Simple deployment.  This should include individual development teams being able to deploy independently of other teams.

As I’ll discuss, it was some of the things that we left off that list that came back to haunt us later on.

Goals re Serverside Communication

A further goal was use of our strategic architecture serverside, in particular for trade management.  For example, we wanted components that would construct and send messages to our servers in a standard way.  I won’t discuss the success or failure of this goal in detail here as it’s a long and chequered story, and not strictly relevant to the CAB and the user interface framework.

Technical Design

Technical Design: Technologies

The technologies we used to build this application were:

  • Microsoft C# and Windows Forms
  • Microsoft’s Patterns and Practices Group’s Composite Application Block (the CAB)
  • DevExpress’ component suite
  • Tibco EMS and Gemstone’s Gemfire for serverside communication and caching

As I’ve already discussed, this document is going to focus purely on the clientside development.

In 2007 these were logical choices for a project of this kind.  I’ll discuss some of the more detailed design decisions in the sections below.

Things We Did (Fairly) Well

As I said this is a personal view: I’m not sure all our developers would agree that all of this was done well.

Ease of Use

Designing for ease of use is, of course, quite difficult.  We have done a number of things to make the project easy to use, some of which I’ll expand on below.  These include:

  • Developers write vanilla user controls.  There’s no need to implement special interfaces, inherit from base classes or use any complex design pattern.
  • Almost all core functionality is accessed through simple services that the developer just gets hold of and calls.  So for example to show your user control you get an instance of the menu service and call a show method.  We used singleton service locators so the services could be accessed without resorting to CAB dependency injection.
  • Good documentation freely available on a wiki
  • A standard onboarding process for new teams, including setting up a template module.  This module has a ‘hello world’ screen that shows the use of the menus and other basic functionality.
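The singleton service locator mentioned above can be sketched as below.  The names are illustrative, not our actual API, and the real implementation wrapped the CAB's service collection rather than a plain dictionary:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of a singleton service locator.  Module code asks for
// an interface and gets the registered implementation back, with no CAB
// types in sight.
public static class ServiceLocator
{
    private static readonly Dictionary<Type, object> services =
        new Dictionary<Type, object>();

    // Called by the core framework at startup to register each service.
    public static void Register<TService>(TService implementation)
    {
        services[typeof(TService)] = implementation;
    }

    // Called by module code, e.g. var menus = ServiceLocator.Get<IMenuService>();
    public static TService Get<TService>()
    {
        object service;
        if (!services.TryGetValue(typeof(TService), out service))
            throw new InvalidOperationException(
                "No service registered for " + typeof(TService).Name);
        return (TService)service;
    }
}
```

The point of the design is that module developers only ever see the interface type and one static call, regardless of how the service is wired up underneath.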

Developers Not Forced to Learn the Composite Application Block (CAB)

As mentioned above, one of the key goals of the project was simplicity of use.  The CAB is far from simple to use: I wrote a 25-part introductory blog article on it and still hadn’t covered it all.

As a result we took the decision early on that developers would not be compelled to use the CAB actually within their modules.  We were keen that developers would not have to learn the intricacies of the CAB, and in particular would not have to use the CAB’s rather clunky dependency injection in their code.

However, obviously we were using the CAB in our core framework.  This made it difficult to isolate our developers from the CAB completely:

  • As mentioned above we exposed functionality to the developers through CAB services.  However we gave them a simple service locator so they didn’t have to know anything about the CAB to use these services.
  • We also used some CAB events that developers would need to sink.  However since this involves decorating a public method with an attribute we didn’t think this was too difficult.

As already mentioned, to facilitate this we wrote a ‘template’ module, and documentation on how to use it.  This was a very simple dummy module that showed how to do all the basics.  In particular it showed what code to write at startup (a couple of standard methods), how to get hold of a service, and how to set up a menu item and associated event.
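The 'decorate a public method with an attribute' pattern for sinking events can be illustrated with a toy stand-in for the CAB's EventBroker.  The real attribute lives in Microsoft.Practices.CompositeUI; everything below is a simplified sketch for illustration only:

```csharp
using System;
using System.Reflection;

// Toy version of the CAB's event subscription attribute.
[AttributeUsage(AttributeTargets.Method)]
public class EventSubscriptionAttribute : Attribute
{
    public string Topic { get; private set; }
    public EventSubscriptionAttribute(string topic) { Topic = topic; }
}

// Module code: to sink an event the developer just decorates a public method.
public class TradeBookedHandler
{
    public string LastMessage;

    [EventSubscription("topic://TradeBooked")]
    public void OnTradeBooked(string message)
    {
        LastMessage = message;
    }
}

// Toy broker: finds decorated methods by reflection and invokes any whose
// topic matches.  The real CAB broker does this wiring for you.
public static class TinyEventBroker
{
    public static void Fire(object subscriber, string topic, string message)
    {
        foreach (MethodInfo method in subscriber.GetType().GetMethods())
        {
            foreach (EventSubscriptionAttribute attr in
                     method.GetCustomAttributes(typeof(EventSubscriptionAttribute), false))
            {
                if (attr.Topic == topic)
                    method.Invoke(subscriber, new object[] { message });
            }
        }
    }
}
```

As you can see, from the module developer's side the entire cost of sinking an event is one attribute on one method, which is why we didn't think this was too difficult.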


Assembly Loading

We realized after a few iterations of the system that we needed a reasonably sophisticated approach to versioning and loading of components.  As a result we wrote an assembly loader.  This:

  • Allows each module to keep its own assemblies in its own folder
  • Allows different modules to use different versions of the same assembly
  • Also allows different modules to explicitly share the same version of an assembly

Our default behaviour is that when loading an assembly that’s not in the root folder, the system checks all module folders for an assembly of that name and loads the latest version found.  This means teams can release interface assemblies without worrying about old versions in other folders.
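That default resolution rule can be sketched as below.  This is not our production loader, just the idea, and it assumes a hypothetical folder layout with one subfolder per module:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Reflection;

// Sketch of the default resolution rule: when an assembly isn't found in
// the root folder, scan every module folder for a file of that name and
// load the highest version found.
public static class ModuleAssemblyLoader
{
    public static void Install(string modulesRoot)
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            string fileName = new AssemblyName(args.Name).Name + ".dll";
            string best = FindLatestVersion(modulesRoot, fileName);
            return best == null ? null : Assembly.LoadFrom(best);
        };
    }

    // Returns the path of the highest-versioned copy of fileName found in
    // any module folder, or null if no module folder contains it.
    public static string FindLatestVersion(string modulesRoot, string fileName)
    {
        return Directory.GetDirectories(modulesRoot)
            .Select(dir => Path.Combine(dir, fileName))
            .Where(File.Exists)
            .OrderByDescending(path => AssemblyName.GetAssemblyName(path).Version)
            .FirstOrDefault();
    }
}
```

Because the highest version always wins, a team releasing a new interface assembly doesn't have to care that stale copies are still sitting in other teams' folders.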

Versioning of Core Components

For core components clearly there’s some code that has to be used by everyone (e.g. the shell form itself, and menus).  This has to be backwards compatible at each release because we don’t want everyone to have to release simultaneously.  We achieve this through the standard CAB pattern of interface assemblies: module teams only access core code through interfaces that can be extended, but not changed.

However, as mentioned above, the core team also writes control assemblies that aren’t backwards compatible: teams include them in their own module, and can upgrade whenever they want without affecting anyone else.
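The interface pattern above can be sketched as follows.  The service names here are invented for illustration: a released interface is frozen, and new functionality arrives on a new interface that extends it, so modules compiled against the old contract keep working.

```csharp
// Version 1 contract: once released, this interface is never changed.
public interface IMenuService
{
    void ShowItem(string caption);
}

// Release 2 adds functionality on a new interface that extends the old
// one, so code compiled against IMenuService is unaffected.
public interface IMenuService2 : IMenuService
{
    void ShowItemWithIcon(string caption, string iconName);
}

// Trivial implementation used to demonstrate that both contracts resolve
// to the same service object.
public class RecordingMenuService : IMenuService2
{
    public string LastCaption;
    public void ShowItem(string caption) { LastCaption = caption; }
    public void ShowItemWithIcon(string caption, string iconName)
    {
        LastCaption = caption + "/" + iconName;
    }
}
```

Old modules call through IMenuService, new modules cast to (or request) IMenuService2, and the core team only ever ships one implementation.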

User Interface Design

For the user interface design, after a couple of iterations we settled on simple docking in the style of Visual Studio.  For this we used Weifen Luo’s excellent docking manager, and wrote a wrapper for it that turned it into a CAB workspace.  For menuing we used the ribbon bars in the DevExpress suite.

The use of docking again keeps things simple for our developers.  We have a menu service with a method to be called that just displays a vanilla user control in a docked (or floating) window.


Deployment

In large organizations it’s not uncommon for the standard client deployment mechanisms to involve complex processes and technology.  Our organization has this problem.  Early on in this project it was mandated that we would use the standard deployment mechanisms.

We tried hard to wrap our corporate process in a way that made deployment as simple as possible.  To some extent we have succeeded, although we are (inevitably) very far from a simple process.


Configuration

For configuration (eventually) we used another team’s code that wrapped our centralized configuration system to allow our developers to store configuration data.  This gives us hierarchies of data in a centralized database.  It means you can easily change a setting for all users, groups of users, or an individual user, and can do this without the need for a code release.

Module Interaction

Clientside component interaction is achieved by using the standard CAB mechanisms.  If one team wants to call another team’s code they simply have to get hold of a service in the same way as they do for the core code, and make a method call on an interface.  This works well, and is one advantage of using the CAB.  Of course the service interface has to be versioned and backwards compatible, but this isn’t difficult.


Security

For security we again wrapped our organization’s standard authentication and authorization systems so they could easily be used in our CAB application.  We extended the standard .Net Principal and Identity objects to allow authorization information to be directly accessed, and also allowed this information to be accessed via a security service.

One thing that we didn’t do so well here was the control of authorization permissions.  These have proliferated, and different teams have handled different aspects of this in different ways.  This was in spite of us setting up what we thought was a simple standard way of dealing with the issue.  The result of this is that it’s hard to understand the permissioning just by looking at our permissioning system.

Things We Didn’t Do So Well

As mentioned above, the things that didn’t go so well were largely the things we didn’t focus on in our original list of goals.

Most of these issues are about resource usage on the client.  This list is far from comprehensive: we do have other problems with what we’ve done, of course, but the issues highlighted here are the ones causing the most problems at the time of writing.

The problems included:

  • Threading
  • Start up
  • Memory usage
  • Use of other dependency injection frameworks

Threading

The Problem

We decided early on to allow each team to do threading in the way they thought was appropriate, and didn’t provide much guidance on threading.  This was a mistake, for a couple of reasons.

Threading and Exception Handling

The first problem we had with threading was the simple one of background threads throwing exceptions with no exception handler in place.  As I’m sure you know, this is pretty much guaranteed to crash the entire application messily (which in this case means bringing down 11 teams’ code).  Of course it’s easy to avoid if you follow some simple guidelines whenever you spawn a background thread.  We have an exception handler that can be hooked up with one line of code and that can deal with appropriate logging and thread marshalling.  We put how to do this, and dire warnings about the consequences of not doing so, in our documentation, but to no avail.  In the end we had highly-paid core developers going through other teams’ code looking for anywhere they spawned a thread, and then complaining to their managers if they hadn’t put handlers in.

Complex Threading Models

Several of our teams were used to writing serverside code with complex threading models.  They replicated these clientside, even though most of our traders don’t have anything better than a dual core machine, so any complex threading model in a workstation client is likely to be counterproductive.

Some of these models tend to throw occasional threading exceptions that are unreproducible and close to undebuggable.

What We Should Have Done

In retrospect we should have:

  • Provided some clear guidance for the use of threading in the client.
  • Written some simple threading wrappers and insisted the teams use them, horrible though that is.
  • Insisted that ANY use of threading be checked by the core team (i.e. a developer that knew about user interface threading).  The wrappers would have made it easy for us to check where threads were being spawned incorrectly (and without handlers).
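A minimal sketch of the kind of thread wrapper I mean is below.  The error callback stands in for our real logging and thread-marshalling code, and the names are invented:

```csharp
using System;
using System.Threading;

// Sketch of a mandated thread wrapper: every background thread gets a
// top-level catch, so an exception is reported rather than escaping and
// killing the whole process (and with it 11 teams' code).
public static class SafeThread
{
    public static Thread Start(string name, Action work, Action<Exception> onError)
    {
        var thread = new Thread(() =>
        {
            try
            {
                work();
            }
            catch (Exception ex)
            {
                // Never let an exception escape a background thread.
                onError(ex);
            }
        });
        thread.Name = name;
        thread.IsBackground = true;
        thread.Start();
        return thread;
    }
}
```

Had every spawned thread gone through one wrapper like this, finding unprotected threads would have been a simple text search rather than a manual audit of every team's code.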

Start Up

The Basic Problem

We have a problem with the startup of the system as well: it’s very slow.

Our standard startup code (in our template module) is very close to the standard SCSF code.  This allows teams to set up services and menu items when the entire application starts and the module is loaded.

This means the module teams have a hook that lets them run code at startup.  The intention here is that you instantiate a class or two, and it should take almost no time.  We didn’t think that teams would start using it to load their data, or start heartbeats, or worse, to fire off a bunch of background threads to load their data.  However, we have all of this in the system.
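The distinction between what the startup hook is for and deferring the real work can be sketched like this.  It's a toy stand-in for a real SCSF-style module, with invented names:

```csharp
using System;

// Sketch of the startup discipline we should have enforced: the startup
// hook only registers cheap things, and the expensive data load is
// deferred until the screen is actually requested.
public class RiskModule
{
    // Deferred: the data is only fetched the first time it is needed.
    private readonly Lazy<string[]> positions =
        new Lazy<string[]>(LoadPositionsFromServer);

    public bool MenuRegistered { get; private set; }

    // Startup hook: runs when the whole application starts.  Registering
    // a menu item is fine here; note that no data is touched.
    public void Load()
    {
        MenuRegistered = true;
    }

    // Runs only when the user clicks the menu item and opens the screen.
    public string[] ShowScreen()
    {
        return positions.Value;
    }

    private static string[] LoadPositionsFromServer()
    {
        // Stand-in for a slow server call.
        return new[] { "EURUSD 10M", "USDJPY 5M" };
    }
}
```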

Of course there’s a reason for this: the right place for this code is actually when a user first clicks the menu item that loads the team’s screen.  For heartbeats, it’s a little hard to control startup and closedown as a screen opens and closes: it’s much easier to just start your heartbeats when the application starts.  And if a screen’s data loading is slow, that slowness becomes very obvious when it happens at the point the user requests the screen.

However, the impact of this happening over 11 development teams’ code is that the system is incredibly slow to start, and very very fragile at startup.  It will often spend a couple of minutes showing the splash screen and then keel over with an incomprehensible error message (or none).  As a result most traders keep the system open all the time (including overnight).  But an obvious consequence is that they are very reluctant to restart, even if they have a problem that we know a restart will fix.  Also all machines are rebooted at the weekend in our organization, so they have to sit through the application startup on a Monday morning in any case.

One further problem is that no individual team has any incentive to improve their startup speed: it’s just a big pool of slowness and you can’t tell if module X is much slower than module Y as a user.  If any one team moves to proper service creation at startup it won’t have a huge overall effect.  We have 11 teams and probably no one team contributes more than a couple of minutes to the overall startup.  It’s the cumulative effect that’s the problem.

What We Should Have Done

This is one area where we should just have policed what was going on better, and been very firm about what is and is not allowed to be run at startup.  At one stage I proposed fixing the problem by banning ANY module team’s code from running at startup, and I think if I were to build an application of this kind again then that’s what I’d do.  However, clearly a module has to be able to set up its menu items at startup (or the user won’t be able to run anything).  So we’d have to develop a way of doing this via config for this to work, which would be ugly.

One other thing that would really help would be the ability to restart an individual module without restarting the entire system.

Memory Usage

The Problem

We effectively have 11 applications running in the same process.  So with memory usage we have similar problems to the startup problems: every team uses as much memory as they think they need, but when you add it all up we can end up with instances of the system using well over 1GB of memory.  On a heavily-loaded trader’s machine this is a disaster: we’ve even had to get another machine for some traders just to run our application.

To be honest, this would be a problem for any complex trading environment.  If we had 11 separate applications doing the same things as ours the problem would probably be worse.

However, as above there’s no incentive for any individual team to address the problem: it’s just a big pool that everyone uses and no-one can see that module X is using 600MB.

What We Should Have Done

Again here better policing would have helped: we should have carefully checked every module’s memory requirements and told teams caching large amounts of data not to.  However, in the end this is a problem that is very hard to avoid: I don’t think many teams are caching huge amounts of data, it’s just there’s a lot of functionality in the client.

One thing that will help here is the move to 64-bit, which is finally happening in our organization.  All our traders have a ceiling of 4GB of memory at present (of which, as you know, over 1GB is used by Windows), so a 1GB application is a real problem.

Use of Other Dependency Injection Frameworks (Spring.Net)

The Problem

One unexpected effect of the decision not to compel teams to use the CAB was that a number of teams decided to use Spring.Net for dependency injection within their modules, rather than using the CAB dependency injection.  I have some sympathy with this decision, and we didn’t stop them.  However, Spring.Net isn’t well-designed for use in a framework of this kind and it did cause a number of problems.

  • The biggest of these is that Spring uses a number of process-wide singletons.  We had difficulties getting them to play nicely with our assembly loading.  This has resulted in everyone currently having to use the same (old) version of Spring.Net, and upgrading being a major exercise.
  • Handling application context across several modules written by different teams proved challenging.
  • If you use XML configuration in Spring.Net (which everyone does) then types in other assemblies are usually referenced using the simple assembly name only.  This invalidated some of our more ambitious assembly loading strategies.
  • Spring.Net’s incomprehensible exception messages on initial configuration become even harder to interpret when multiple modules are configuring themselves at startup.

We also had some similar problems re singletons and versioning with the clientside components of our caching technology.  Some code isn’t really compatible with single-process composite applications.

What We Should Have Done

Again we should have policed this better: many of the problems described above are solvable, or could at least have been mitigated by laying down some guidelines early on.

What I’d Change If I Did This Again

The ‘what we should have done’ sections above indicate some of the things I’d change if I am ever responsible for building another framework of this kind.  However, there are two more fundamental (and very different) areas that I would change:

Code Reviews

In the ‘what we should have done’ sections above I’ve frequently mentioned that we should have monitored what was happening in the application more carefully.  The reasons we didn’t were partially due to resourcing, but also to some extent philosophical.  Most of our development teams are of high quality, so we didn’t feel we needed to be carefully monitoring them and telling them what to do.

As you can see from the problems we’ve had, this was a mistake.  We should have identified the issues above early, and then reviewed all code going into production to ensure that there weren’t threading, startup, memory or any other issues.

Multiple Processes

The second thing I’d change is technical.  I now think it’s essential in a project of this kind to have some way of running clientside code in separate processes.  As we’ve seen many of the problems we’ve had have arisen because everything is running in the same process:

  • Exceptions can bring the process down, or poorly-written code can hang it
  • It’s hard to identify how much each module is contributing to memory usage or startup time
  • There’s no way of shutting down and unloading a misbehaving module

I think I’d ideally design a framework that had multiple message loops and gave each team its own process in which they could display their own user interface.  This is tricky, but not impossible to do well.

Note that I’d still write the application as a framework.  I’d make sure the separate processes could communicate with each other easily, and that data could be cached and shared between the processes.

As an aside, a couple of alternatives to this are being explored in our organization at present.  The first is to simply break up the application into multiple simpler applications.  The problem with this is that it doesn’t really solve the memory usage or startup time problems, and in fact arguably makes them worse.  The second is to write a framework that has multiple processes but keeps the user interface for all development teams in the same process.  This is obviously easier to do technically than my suggestion above.  However, for many of our modules it would require quite a bit of refactoring: we would need to split out the user interface code cleanly and run it in a separate process from the rest of the module code.


Extending Classes: Extension Methods or Inheritance?


Extension methods were introduced in C# 3.0 as a way of extending a class without necessarily having access to the original source code of the class.

As discussed in the C# Programming Guide on MSDN, extension methods were primarily introduced to the language to allow LINQ to add standard query operators such as GroupBy and OrderBy to any class that implements IEnumerable<T>.  This is very neat syntactically.

This article will examine how extension methods can be created and used.  It will then show one example where inheritance is probably a better approach to class extension.

Basic Usage of Extension Methods

To illustrate the basic approach to using extension methods, as usual a code example is available.

This contains three projects:

  1. A ‘Core’ class library intended to represent code developed by a core group of developers and used as a library by other developers.
  2. A ‘DerivedExtended’ library that is intended to represent code extending the Core library, developed by a second group of developers.
  3. A ‘Client’ library that uses the DerivedExtended library.

Initially the Core library contains just one class with just one public property:

namespace Core
{
    public class MyCoreClass
    {
        public int CoreValue { get; set; }
    }
}

How Extension Methods Work: Writing an Extension Method

We can extend our core class in our DerivedExtended library using extension methods.  To do this DerivedExtended clearly has to reference Core.  The syntax to set up an extension method is simple:

using Core;

namespace DerivedExtended
{
    public static class Extended
    {
        public static void SetCoreValue(this MyCoreClass myCoreClass)
        {
            myCoreClass.CoreValue = 1;
        }
    }
}

To make this work both the class and the method in it are declared static, and we use the ‘this’ keyword on the myCoreClass parameter.  The name of the static class (‘Extended’) is not relevant for the purposes of the extension method (it’s not used to invoke it in any way).

Our extension method can only access the public interface of the class it is extending.  That is, in our case it can access the CoreValue property because it is public: if it were private the extension method would not be able to set it.

How Extension Methods Work: Calling an Extension Method

We now set up client code to use this:

using System;
using DerivedExtended;

namespace Client
{
    class Program
    {
        static void Main(string[] args)
        {
            // Instantiate the core class
            Core.MyCoreClass myCoreClass = new Core.MyCoreClass();

            // Call extension method
            myCoreClass.SetCoreValue();

            // Get property value from core class and display it
            Console.WriteLine(myCoreClass.CoreValue);
        }
    }
}

Note the syntax for calling the SetCoreValue extension method: to the client code it looks exactly like a normal instance method on the myCoreClass instance.  Note also that we have to include the ‘using DerivedExtended’ directive for the compiler to find the relevant extension method.

Clearly the code above prints ‘1’ to the console window: the extension method has set the property value as you would expect.

Why This is Fragile

Now imagine our core developers decide that they need a method to set the core value, and that the method should be setting the value to 100.  If they are library developers they may know nothing about the client code and DerivedExtended library written by the second development team.  So they make the simple change below:

namespace Core
{
    public class MyCoreClass
    {
        public int CoreValue { get; set; }

        public void SetCoreValue()
        {
            CoreValue = 100;
        }
    }
}

Now if we run our client code we see that the behaviour has been changed: it prints out 100.  However nothing has ostensibly been broken: our DerivedExtended developers could easily rebuild against the new Core library, not spot anything had changed, and ship the code with changed behaviour.  Of course in my simple example it doesn’t look like the effects of this are likely to be catastrophic.  However because of the extension method the DerivedExtended developers now need to be much more careful when the implementation of the Core library changes.

Extension Methods or Inheritance?

Of course, in object-oriented languages we already have a means of extending a class where we don’t necessarily have access to or want to change the original source code: it’s called inheritance.

In C# and other object-oriented languages there is extensive support for allowing a base class writer to control how their class will be extended by inheritance, such as declaring methods as virtual, declaring member variables as protected and so on.  A number of our core design patterns are based around designing classes that can be extended in this way (e.g. Template Method).

Why This Isn’t a Problem with Inheritance

Now consider the same extension as above being made with inheritance.  Firstly we revert our Core class to simply have the CoreValue property:

namespace Core
{
    public class MyCoreClass
    {
        public int CoreValue { get; set; }
    }
}

Then we construct a Derived class that contains a SetCoreValue method that again sets the value to 1.  Thus we extend our code in a new library in the traditional way:

namespace DerivedExtended
{
    public class Derived : Core.MyCoreClass
    {
        public void SetCoreValue()
        {
            base.CoreValue = 1;
        }
    }
}

We can now change our client code to use this extension:

    class Program
    {
        static void Main(string[] args)
        {
            // Now consider case where we extend by inheritance
            // Instantiate the Derived class
            Derived derived = new Derived();

            // Call the derived method
            derived.SetCoreValue();

            // Get property value from derived class and display it
            Console.WriteLine(derived.CoreValue);
        }
    }


And clearly here the code will print out ‘1’ again.

Now, as before, assume our Core library developers add a SetCoreValue method to the Core class, unaware that there already is one in the Derived class:

    public class MyCoreClass
    {
        public int CoreValue { get; set; }

        public void SetCoreValue()
        {
            CoreValue = 100;
        }
    }

This doesn’t change the behaviour of the existing client code, which still prints out ‘1’.  However, we get a compiler warning:

‘DerivedExtended.Derived.SetCoreValue()’ hides inherited member ‘Core.MyCoreClass.SetCoreValue()’. Use the new keyword if hiding was intended.

So not only has the code not been broken, but our DerivedExtended developers will get a warning that there is a problem when they come to recompile their code against the new library.

Note also that any code the Core developers write calling SetCoreValue on the Core class will correctly call the Core class version (they don’t know about the Derived class version, so can’t intend to call it).

This behaviour is deliberate: by default a method in a derived class hides rather than overrides a method with the same signature in a base class if there has been no explicit use of the ‘new’ or ‘virtual’/‘override’ keywords.  This is precisely because of the extension circumstances described here.


So in summary, in the circumstances described where a development team is extending classes in a library it seems better to do this by inheritance than by extension methods.  Obviously it isn’t always possible to extend a class by inheritance and we may need to use other techniques, but in general extension methods seem fragile to me.

To an extent the same argument applies to any class in our code, whether in a library or not.  Suppose we use extension methods and then later on we alter the implementation of the class we are extending.  We can break our extension methods without necessarily noticing we have done so.  Of course this is true of any code that uses the public interface of our class if its behaviour changes, so is less valid as an argument against extension methods.


Microsoft itself agrees with the central argument in this article.  In the page on extension methods in the C# programming guide it says:

General Guidelines

In general, we recommend that you implement extension methods sparingly and only when you have to. Whenever possible, client code that must extend an existing type should do so by creating a new type derived from the existing type. For more information, see Inheritance (C# Programming Guide).

When using an extension method to extend a type whose source code you cannot change, you run the risk that a change in the implementation of the type will cause your extension method to break.


Conclusion

Extension methods can be fragile.  My recommendation is that you don’t write your own extension methods, for the reasons explained above.

User Interface Design for Business Applications


This article is going to give a quick tour of the various high-level user interface designs for business applications that need to display multiple windows. It will discuss multiple document interface (MDI), single document interface (SDI) and other paradigms for handling multiple windows. The article will illustrate these concepts by looking at the user interfaces in Microsoft’s various desktop applications, both good and bad.

This article will be referred to by my series of articles on the CAB and SCSF but is not part of that series.

Business Applications

Before we start we need to consider what we mean by business applications. For the purposes of this discussion I mean applications that have some or all of the following characteristics:

  • display of data, often in grid form
  • some means of interrogating the data (querying, sorting, filtering, drilling down)
  • calculations based on the data, comparison of data (reconciliations), and display of results
  • some kind of data entry and persistence
  • data updates from other applications, maybe in real time
  • exports to Excel or paper or other formats
  • feeds to other systems

I work in banking, and would say that all the applications we build fall into this category, with the possible exception of internet-based applications for clients. Such applications may or may not be classic online transaction processing systems (OLTP).

In general these applications do not need to catch the user’s attention in competition with other applications, unlike internet-based consumer applications. As a result they largely will not use multimedia effects (video, animation, graphics). They are intended to be functional above all else.

Having said that, such applications often do need to have a rich and responsive user experience. It is still hard to achieve that with HTML-based interfaces, even with the advent of Ajax. Often these applications are solely for internal use within an organization, where there will be a consistent desktop computer base with appropriate security. As a result deployment of smart client applications becomes possible, and these applications are often built as smart clients.

High-level Design of User Interfaces for Business Applications

In this article I’m going to illustrate the various possible high-level designs for business applications using Microsoft’s own selection of applications. Whilst these are not really ‘business applications’ in the sense I have described above, they do illustrate the possible designs.

The difficulty that we are trying to address here is that our business applications will typically need to show many different screens with many different types of data in them. We may want to show more than one screen simultaneously, and may even allow data to be dragged between screens (although it’s arguable that this is one user interface paradigm that should not be used). There usually needs to be some central way of managing the various screens (a window manager), as well as some means of transitioning between them. Above all it’s critical that the user can easily navigate the system, and can use it in a flexible way that fits with their requirements.

Multiple Document Interface (MDI)

The ‘classic’ Multiple Document Interface (MDI) design has been with us for many years now, and at one stage was a very common way of handling the user interface problems described above.

A Multiple Document Interface application has a main ‘shell’ MDI window with just a menu bar and possibly a toolbar. The user can load individual screens from these bars, and the screens will just ‘float’ within the window.

Typically these applications have a ‘Window’ menu option, with ‘Tile’ and ‘Cascade’ options that rearrange all open windows. We also usually have a list of open windows on the ‘Window’ menu to allow us to find an individual screen.

There are several applications of this sort still around, even in the Microsoft stable:



One advantage of this sort of user interface design for enterprise applications is that it can be very easy to develop. Every screen is a form, there’s an easy way of creating one directly from a menu click, and window management is simple. If you cache your data on the client and use a simple model-view-controller pattern you can keep the data on the screens up-to-date relatively easily as well.

Of course the major disadvantage is that the user experience isn’t that good. It’s easy to ‘lose’ windows, and as you can see even in my simple screenshots above, windows overlap and what you want to see is often hidden.

It’s for the reasons outlined above that Microsoft has been phasing out this sort of interface from its own products.

Single Document Interface (SDI)

In a Single Document Interface (SDI) application there is only one window in each instance of the application. If you want a second window you start a second complete instance of the application. Switching between windows happens at the operating system level when you switch between applications. In Microsoft Windows this means that you use the Taskbar to select a different window, as all windows will have an icon in the taskbar.

Microsoft uses this paradigm for Microsoft Word. If you open two documents in Word you get two separate instances of the application:


This is the default behaviour. It is possible to use Word as a proper ‘classic’ MDI application by changing one of its many option settings: if you click the Office button (top left), click Word Options, go to the Advanced tab and scroll down to the Display section there is a checkbox labelled ‘Show all windows in the Taskbar’. If you clear this, Word will have an MDI interface:


Excel behaves strangely with regard to this. It has the same menu option, but its default behaviour with the ‘Show all windows in the Taskbar’ checkbox checked is different. Here if you open two Excel documents they are still MDI (i.e. they appear in the same window), although they do have two window icons in the Taskbar:

If you clear the checkbox all that happens is that you get one icon in the Taskbar instead of two (i.e. there’s almost no difference):


SDI does make some sense for Word and Excel where you will typically only have a few spreadsheets or word processing documents open at a time and may want a way to navigate quickly between them. The operating system Taskbar is ideal for this.

However, this sort of interface is usually not applicable to the complex business applications discussed earlier in the article. These applications will have multiple screens with different data being shown, and trying to manage all of those and their interaction through the Taskbar would be extremely difficult.

For this reason it is rare for business applications to be designed using SDI.

Basic Tabbed Document Interfaces (‘TDI’) – Browsers

Many modern desktop applications now use tabbed document interfaces of some kind. Rather than having multiple floating windows that are difficult to control (MDI), or each window in a separate instance of the application (SDI) we allow multiple windows but insist that they are arranged as tabs. This is dubbed ‘TDI’ for ‘Tabbed Document Interface’.

For example, almost all browsers now support different pages being shown on different tabs:


Even Internet Explorer now supports this in version 7, having held out as an SDI application in versions up to that point.

Disadvantages of Basic Tabbed Document Interfaces

There are two major disadvantages of pure tabbed interfaces for business applications:

  1. An obvious disadvantage of this design in its purest form is that it isn’t possible to display two windows alongside each other, and thus it is difficult to arrange for drag and drop between them. Browsers typically get around this problem by allowing a new document to be shown in either a new tab or a new window, thus creating a mixed TDI/SDI model. This is probably not an appropriate solution for business applications.
  2. Another disadvantage for the kind of business applications we are talking about here is that we may need to support a very large number of screens (windows) being open simultaneously. A simple tabbed arrangement can make it very difficult for a user to find the window they are after. On our current application we had to abandon all use of tabs for this reason (although we were also using a window manager, see below).

Montage Tabbed Interfaces – ‘IDE-style’ interfaces

Microsoft has recently moved some of its old ‘classic’ MDI applications into a tabbed format, but with surrounding property and window management panes:



Note that both of the screenshots above are of updated versions of the MDI applications we saw in the ‘classic’ MDI section above. They have the advantage over those older interfaces that we are less likely to ‘lose’ a window and usually won’t need to use a Window menu to navigate. The layout is tidier too.

Note also that SQL Server Management Studio solves problem 1 mentioned above in the section ‘Disadvantages of Basic Tabbed Document Interfaces’ by allowing the central tabbed area to be split horizontally or vertically into multiple ‘tab groups’:


Tree-Based Window Managers

These applications also show for the first time the now common approach for doing window management in modern user interfaces. We have a tree view in a pane to either the right or left of our main document window. This tree organizes our documents in a logical hierarchy, meaning we can find the document we are after fairly easily. Clicking or double-clicking in the tree will open the document or bring it to the front if it is already open. This is a big step forward from a Window menu that just listed open documents in a random order. However, the old MDI version of SQL Server Enterprise Manager also had a tree-based window manager, so it’s not a new idea.

These window managers mitigate problem 2 mentioned above in the section ‘Disadvantages of Basic Tabbed Document Interfaces’: we still have the problem that we may have a very large number of tabs open, but at least we have a tree that will let us go to the one we need.

Integrated Development Environment Interfaces

Of course both of the ‘Montage Tabbed Document Interface’ applications shown above resemble the older integrated development environment interfaces all developers are now familiar with from such applications as Visual Studio and Eclipse:


As we all know, this user interface is highly customizable. We can display tabbed documents alongside each other (as shown above) and drag between them. We can ‘tear off’ any of the surrounding panes and have them float free of the main window. We can easily then dock them to any part of the interface.

Again we have a tree-based window manager in the Solution Explorer pane that allows us to go directly to a tab if we have too many open to easily find the one we need (and we all know that can be a problem).

Disadvantages of the Integrated Development Environment User Interface

This kind of interface is great for an integrated development environment; it’s hard to find a developer who doesn’t like Eclipse or Visual Studio (at least now it doesn’t fall over every few minutes). However a user interface this complex may not be appropriate for a business application. The tear-off property windows in Visual Studio in particular are a nice idea, but lead to the old problem of you not necessarily being able to find your window, particularly if you have multiple screens. I doubt many developers actually use this feature: I certainly don’t.

Users of business applications usually don’t need to be able to highly customize the user interface. Indeed allowing them to do so can cause problems if a complex application is used in ways the developers are not expecting. In general it can be better to retain a little more control. It’s certainly harder to program (and hence more expensive to maintain) an application that allows multiple panes to be ‘torn off’ from the main application and kept synchronized with the application.

Montage Interfaces without Tabs – Outlook-style Interfaces

For some of the reasons cited above Outlook-style interfaces are currently very popular for business applications:


Even if you use this product every day it’s worth thinking about what’s good and bad about the interface.

We have a tree-based window manager, which in the corporate environment is often used to manage quite extensive trees of documents. Emails can be displayed in the main window, as shown, or in their own window with a separate icon in the Taskbar, which makes it easy to find an email you are writing whilst still checking new mail.

There are no tabs here either, other than the collapsible sections themselves, which clearly aren’t quite the same thing as allowing multiple documents to be opened in a tabbed view.

The pane to the left hand side has collapsible sections of course, allowing us to select completely different areas of functionality of the application. This is an attractive way of doing things if we are writing complex business applications. Of course it’s particularly attractive if we are writing a composite (CAB) application as each module can be on a different collapsible section.


What’s interesting about the screenshot above is that we don’t need a tree window manager for our notes, so Microsoft have used the space to put in a simple set of radio buttons to allow us to change the view. This really isn’t a great use of space, and this is one drawback of this design: for simple areas of functionality you may not need a window manager. You can find yourself trying to invent something useful to put in the Outlook pane when you don’t really need it.

However overall this isn’t a bad starting place for a design for a business application. It’s simple but quite powerful. Many of the tools vendors have realized this and ship components that support such an interface. In fact most of the major vendors seem to have fully mocked up versions of Outlook running using their tools as demonstrations.


This article has looked at some possible high level interface designs for business applications that need to display multiple windows of data, and considered some of the pros and cons of each. For a composite application an Outlook-style interface can be a good starting point.


Wikipedia on:
Multiple Document Interfaces
Single Document Interfaces
Tabbed Document Interfaces

Microsoft Design Specifications and Guidelines – Window Management

Table of Contents for ‘Introduction to CAB/SCSF’ Articles (2)

I’ve revised the table of contents to give some detail on what each of the articles is about:

Part 1 Modules and Shells

A guide to these two core concepts without the need to understand dependency injection or WorkItems. Explains what a composite application is and why we might want one, and shows a naive application that uses the CAB to run three separate projects simultaneously without them referencing each other. Also explains some of the mysteries of how CAB applications behave at start-up.

Part 2 WorkItems

A quick initial look at WorkItems, explaining their importance both as containers of code and as a hierarchy that allows us to control the scope of the code.

Part 3 Introduction to Dependency Injection

A discussion of dependency injection and why it’s useful in general, without reference to the Composite Application Block. A code example is given. The relationship to the strategy pattern is examined, as well as the various different types of dependency injection.

Part 4 An Aside on Inversion of Control, Dependency Inversion and Dependency Injection

A discussion of the concepts of inversion of control and dependency inversion, and how they relate to dependency injection. Again these concepts are discussed without direct reference to the Composite Application Block.

Part 5 Dependency Injection and the Composite Application Block

This article finally revisits the Composite Application Block, showing how we can use dependency injection to get hold of WorkItems in projects that are not conventionally referenced, and hence access the objects in their containers. It discusses the various ways of doing dependency injection in the CAB using the attributes ComponentDependency, ServiceDependency and CreateNew, and gives an example illustrating this. It further discusses the ObjectBuilder briefly, and explains how dependency injection works in the WorkItems hierarchy.

Part 6 Constructor Injection in the Composite Application Block

A brief article on how to use constructor injection with the CAB, and why we might not want to.

Part 7 Introduction to Services in the Composite Application Block

Discusses what services are in general, what they are in the Composite Application Block, and how the Services collection differs from the Items collection. Gives a basic example, and an example of splitting interface from implementation in a service.

Part 8 Creating and Using Services in the Composite Application Block

Dives into services in much more detail, including an in-depth examination of the various ways of creating and retrieving services.

Part 9 The Command Design Pattern

Another article looking at some theory without direct reference to the Composite Application Block: explains the command pattern, how it relates to .NET, and why it’s a good thing if you’re writing menus.

Part 10 Commands in the Composite Application Block

Shows how to use Commands in the Composite Application Block to hook up clicks on menus to their handlers. Explains why we might want to do it this way rather than with the more usual .NET approach using events. Looks at how to handle Status with Commands, the parameters passed to a CommandHandler, and discusses writing your own CommandAdapters to handle other invokers than menus. Gives a CommandAdapter example.

Part 11 Introduction to Events in the Composite Application Block

Recaps the usual events in .NET and explains why we might want something simpler. Gives a basic example of the Composite Application Block’s alternative approach.

Part 12 Events in the Composite Application Block

Goes into detail of what we can do with the Composite Application Block’s events: examines the handling of scope, how the EventTopics collection works, use of the ThreadOption enumeration to ensure that our event executes on the GUI thread, more flexible event handling with AddSubscription and RemoveSubscription, hooking up .NET events to CAB events with AddPublication, and how to disable CAB events.

Events in the CAB (Introduction to CAB/SCSF Part 12)


Part 11 of this series of articles gave a general introduction to events in the CAB. This article investigates what we can do with these events in a little more detail.

Parameters of the Fire Method

As shown in part 11, the Fire method has four parameters:

workItem.EventTopics["MyEvent"].Fire(this, EventArgs.Empty, null, PublicationScope.Global);

The first two are easily understood: they are the parameters that will be passed into the EventSubscription method. The first is an object, and is intended to contain the sender of the event. The second is an EventArgs parameter, and can be used to pass data into the EventSubscription method as with normal EventArgs classes.

The third and fourth parameters control the scope that the CAB will use for searching for appropriate EventSubscription methods to be called. The third parameter is a WorkItem, and the fourth an item from the CAB PublicationScope enum. It is expected that you will pass in the WorkItem that contains the code firing the event, although you don’t have to. For these purposes the EventSubscription is treated as being contained in the WorkItem that its parent subscriber object is in (remember it has to be in the Items collection of a WorkItem for the eventing to work).

The PublicationScope enum has the following possible values:

  • Global: the WorkItem parameter is ignored and ANY EventSubscription method with the correct name in the entire WorkItem hierarchy will be called.
  • WorkItem: ONLY EventSubscription methods with the correct name in the WorkItem passed in as the third parameter will be called. If no WorkItem is passed in no EventSubscription method will be called.
  • Descendants: EventSubscription methods with the correct name in the WorkItem passed in and any child WorkItems of that WorkItem will be called.
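The three scopes can be sketched as follows. This is an illustration only: it assumes the CAB assemblies (Microsoft.Practices.CompositeUI) are referenced, that ‘rootWorkItem’ has a child ‘childWorkItem’, and that an instance of the Subscriber class below has been added to the Items collection of each WorkItem (these names are assumptions, not part of the downloadable example).

```csharp
// A subscriber must be in the Items collection of a WorkItem
// for its EventSubscription to be found.
public class Subscriber
{
    [EventSubscription("MyEvent")]
    public void MyEventHandler(object sender, EventArgs e)
    {
        // Runs when "MyEvent" is fired with a scope that covers
        // the WorkItem this subscriber lives in.
    }
}

// Global: the WorkItem argument is ignored; every subscription
// in the entire hierarchy runs.
rootWorkItem.EventTopics["MyEvent"].Fire(this, EventArgs.Empty, null, PublicationScope.Global);

// WorkItem: only subscriptions in childWorkItem itself run.
rootWorkItem.EventTopics["MyEvent"].Fire(this, EventArgs.Empty, childWorkItem, PublicationScope.WorkItem);

// Descendants: subscriptions in childWorkItem and its children run.
rootWorkItem.EventTopics["MyEvent"].Fire(this, EventArgs.Empty, childWorkItem, PublicationScope.Descendants);
```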

Code Example

A code example that shows these possibilities is available. This defines a hierarchy of WorkItems. Each WorkItem has its own Subscriber class to an event (“MyEvent”). The EventSubscriptions in each Subscriber class display an appropriate message when the event is fired. We then have three buttons that fire the event. The calls to the Fire method pass in as parameters a WorkItem in the middle of the hierarchy, with different PublicationScopes depending on which button is clicked.

Thus clicking the buttons shows how the PublicationScope affects which EventSubscriptions get called.

The EventTopics Collection

There are actually two WorkItems in the call to the Fire method:

workItem.EventTopics["MyEvent"].Fire(this, EventArgs.Empty, childWorkItem1, PublicationScope.WorkItem);

‘childWorkItem1’ is used to control the scope of the Fire method as discussed above.

‘workItem’ is used to access the EventTopics collection and hence our specific ‘MyEvent’ EventTopic.

Note that ANY WorkItem in the hierarchy can be used here to access the EventTopics collection, and the same EventTopic will be returned. In fact, behind the scenes there is only one EventTopics collection, and this is stored on the RootWorkItem. Any other WorkItem using the syntax workItem.EventTopics gets the same collection returned.

Thus the first WorkItem in the call does not affect the scope of the EventSubscriptions called at all.

Invoking onto the user interface thread

If you are familiar with .NET events you will know that one problem with them is that they can be fired on threads other than the user interface thread but may need to access user interface components. Code that is not running on the user interface thread should not interact with Microsoft’s GUI components, as these are not thread-safe. As a result with .NET eventing it is quite common to ‘invoke’ back onto the user interface thread as the first thing you do in an event handler:

        private void RunIt()
        {
            if (((ISynchronizeInvoke)this).InvokeRequired)
            {
                this.Invoke(new MethodInvoker(RunIt));
                return;
            }
            this.label1.Text = "Hello";
        }

Don’t worry if you don’t recognize and understand this syntax: just accept that we may need to get code running back on the user interface thread in certain circumstances, and that this is the way we do it.

Clearly CAB events can suffer from the same problem. Once again we have a very neat solution to this, however. We simply add a parameter to our EventSubscription attribute as below:

        [EventSubscription("MyEvent", ThreadOption.UserInterface)]
        public void MyEventHandler(object sender, EventArgs e)
        {
            MessageBox.Show("Hello from the CAB event handler");
        }

This has the effect of invoking the code onto the user interface thread when the event is fired and the code is called. It’s somewhat easier than the .NET code in the previous example.

The ThreadOption Enumeration

There are two other values in the ThreadOption enumeration that we can use here (other than ThreadOption.UserInterface as above):

  • ThreadOption.Publisher: forces the code to run on the same thread as the one the EventTopic was fired on. This is the default if we don’t specify a ThreadOption on our EventSubscription.
  • ThreadOption.Background: forces the code to run asynchronously on a background thread. This means the code in the EventSubscription does not block the thread that calls ‘Fire’. With normal .NET events we would have to explicitly start a second thread to get this behaviour, so again the syntax is much simpler.


The syntax shown above for setting up a subscription to an EventTopic using the EventSubscription attribute is very clean. However, there will be times when we want to dynamically add or remove subscriptions in code rather than using attributes. This is analogous to the use of the ‘+=’ and ‘-=’ syntax for hooking up our usual .NET events to event handlers.

To support this EventTopic has AddSubscription and RemoveSubscription methods. Obviously we use AddSubscription to add an EventSubscription to an EventTopic, as below:

RootWorkItem.EventTopics["MyEvent"].AddSubscription(subscriber, "MyEventHandler", workItem1, ThreadOption.UserInterface);

This should be fairly self-explanatory: we are setting up an EventSubscription for the method MyEventHandler in our subscription object. We are setting up this subscription in workItem1, and when the event handler is called it will run on the user interface thread.

Similarly we use RemoveSubscription to remove an EventSubscription from an EventTopic:

eventTopic.RemoveSubscription(subscriber, "MyEventHandler");

Here we simply need to identify the object and the event handler name that we are trying to remove.

We are only permitted to have one subscription to a given event handler on a given object. This is why RemoveSubscription only needs the two parameters to uniquely identify the subscription to be removed. If we try to add a subscription that already exists then the CAB won’t throw an exception, nor will it add a second subscription. Similarly we can try to remove a subscription that doesn’t exist and the CAB won’t throw an exception (but won’t actually do anything of course).
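Putting this together, the calls are safe to repeat. This is a sketch only: it assumes the CAB assemblies and the ‘subscriber’ and ‘workItem1’ objects from the snippets above.

```csharp
EventTopic topic = RootWorkItem.EventTopics["MyEvent"];

topic.AddSubscription(subscriber, "MyEventHandler", workItem1, ThreadOption.UserInterface);
// Adding the same subscription again is a no-op: no exception,
// and no second subscription is created.
topic.AddSubscription(subscriber, "MyEventHandler", workItem1, ThreadOption.UserInterface);

topic.RemoveSubscription(subscriber, "MyEventHandler");
// Removing a subscription that no longer exists also does not throw.
topic.RemoveSubscription(subscriber, "MyEventHandler");
```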

An example that demonstrates AddSubscription and RemoveSubscription is available.

Note that in the CAB if we want to prevent our EventSubscriptions from running when an event is fired we don’t have to remove them entirely. The EventTopic has an Enabled property that can be used. There are more details on this later in this article.


The AddPublication method of an EventTopic is used to add .NET events as ‘Publications’ into a CAB EventTopic. What this means is that we can fire a .NET event and have CAB EventSubscriptions run without the need to set up .NET event handlers directly ourselves, or to explicitly call the Fire method of the EventTopic. Similarly we have a RemovePublication method to disable this behaviour.

AddPublication: what is a ‘Publication’?

The ‘Publication’ nomenclature is a little confusing. As we have seen the CAB eventing mechanism uses ‘Subscriptions’ to an EventTopic, which are methods that run when the associated EventTopic is fired.

However, in general the CAB eventing mechanism doesn’t use ‘Publications’. The Subscriptions will run without an explicit ‘Publication’ being set up at all: we can just Fire the EventTopic when we need to.

The method containing the call to ‘Fire’ can be thought of as a ‘Publication’. However, if we look at the PublicationCount of an EventTopic after it has been fired directly with the ‘Fire’ method we see that it is zero: normally we don’t need a Publication for a CAB event to work.

A code example that shows this is available. It also shows how to use the ContainsSubscription method of an EventTopic (which is straightforward).

With the AddPublication method we ARE explicitly creating a Publication, and in the example below the PublicationCount will be one when an EventTopic is fired. But this is only for the special case where we want to hook .NET events up to CAB event subscriptions.

AddPublication: code example

A code example of how to use AddPublication is available. In this example we have a button on the Shell form that fires our CAB event, but there is NO .NET event handler set up for the click event of that button. Instead, when the application starts up, we hook up the EventTopic to the button directly:

RootWorkItem.EventTopics["MyEvent"].AddPublication(Shell.cabEventFirerButton, "Click", RootWorkItem, PublicationScope.Global);

Here Shell.cabEventFirerButton is the button on the Shell form, and obviously ‘Click’ is the name of the .NET event that we want to be a Publication in our MyEvent EventTopic. Once this code has been run, clicking the button will fire the EventTopic and any associated EventSubscriptions will run. We don’t need to call the Fire method of the EventTopic explicitly.

In the section ‘Parameters of the Fire Method’ we saw that when we call the ‘Fire’ method we can specify the scope that the CAB will use to search for EventSubscriptions. If we use AddPublication as shown here we are not calling the ‘Fire’ method directly. Instead we can specify the scope parameters in the AddPublication call: they are the final two parameters to the call as shown above. These are a WorkItem and a member of the PublicationScope enum as before, and work in the same way.

Issues with AddPublication

The AddPublication syntax is a very powerful way of hooking up .NET events to CAB EventSubscriptions. However, it needs to be used with care. Developers expect there to be a .NET event handler for a .NET event, and it can be very confusing if code is running as a result of an AddPublication call.

For example, in the code above, if you were trying to work out what happens when you click the button you could easily conclude that no code is going to run at all. There’s no easy way to find out that the click event is a CAB publication, nor what the associated EventTopic is.

As a result my feeling is that direct use of AddPublication as shown in the example above should be used sparingly. It’s clearer to hook up the .NET event handler and then call the ‘Fire’ method of your EventTopic directly in the handler.

EventTopic Enabled Property

The EventTopic class has an ‘Enabled’ property. By default this is set to true, meaning that when the EventTopic is fired all the associated EventSubscriptions will run. However, we can simply set this property to false to disable all the EventSubscriptions of the EventTopic.

Once again this can be useful and there’s no easy way of doing it with traditional .NET eventing.

An example showing this is available. This modifies the example above used to demonstrate changing PublicationScope. The example is set up to have multiple EventSubscriptions to one EventTopic. All of these normally get called when a button is clicked and the EventTopic is fired.

The example uses a checkbox. When the checkbox is checked the EventTopic is enabled and firing the EventTopic runs all the EventSubscriptions; when it is cleared the EventTopic is disabled and the EventSubscriptions do not run. This is achieved with the code below in the CheckedChanged event of the checkbox:

        private void eventsEnabledCheckbox_CheckedChanged(object sender, EventArgs e)
        {
            rootWorkItem.EventTopics["MyEvent"].Enabled = eventsEnabledCheckbox.Checked;
        }


Events in the CAB are syntactically cleaner and are easier to use than normal .NET events, and can give us greater control over the scope of what runs when they are fired.

Commands in the CAB (Introduction to CAB/SCSF Part 10)


Part 9 of this series of articles discussed the Command design pattern. Commands in the CAB are a neat way of implementing this pattern. This article will examine them in some detail.

Commands and Events

As already discussed in part 9, commands in the CAB are closely related to events. In fact one of the ways that commands are intended to be used in the CAB is to hook up menus and toolbars in your application to the underlying code. This, of course, is something that is normally done in .NET using events. To make matters somewhat confusing, the CAB also has its own way of handling events. Part 11 of this series of articles will discuss CAB events, and how they relate to commands. In a later article I will discuss the Action Catalog (which is part of the SCSF code) and how that relates to both commands and events.

Basic Example

The easiest way to understand what we can do with CAB commands is to look at a simple example. The code for this is available.

This is a normal CAB application as described earlier in this series of articles. The main Shell Form has a standard ToolStrip with a button labelled ‘Call Command’ on it. When we click this button the application will display a message box saying ‘Hello World using CAB commands’.

Normally to do this in .NET we’d set up a standard event handler in the code behind the Shell Form, probably by just double-clicking the button in the designer, and then put our message box code in the handler. An example of this is also in the simple example code. We have a second button labelled ‘Call Event’. When it’s clicked it uses .NET events to display a message box saying ‘Hello World using .NET events’.

To use CAB commands we assign a named command to the button’s click event. We do this by referring to a named command in a WorkItem’s Commands collection:

Command helloCommand = RootWorkItem.Commands["HelloCommand"];

When the CAB sees this code it lazy-initializes the Command. That is, if a Command called ‘HelloCommand’ already exists in the collection it returns it; if it doesn’t exist it creates it and returns it.

(In the sample application the Command object actually gets created before this code, because the CAB scans for the CommandHandler attribute and creates commands based on that.)
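The lazy-initialization behaviour of the Commands indexer can be sketched in plain C#. This is not the CAB’s actual source code, just a minimal stand-in (the Command and CommandCollection classes below are simplified for illustration) showing the find-or-create pattern the indexer follows:

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-in for the CAB's Command class
public class Command
{
    public string Name { get; private set; }
    public Command(string name) { Name = name; }
}

// Simplified stand-in for the WorkItem's Commands collection: the indexer
// returns an existing command, or lazily creates one if it isn't there yet
public class CommandCollection
{
    private readonly Dictionary<string, Command> commands =
        new Dictionary<string, Command>();

    public Command this[string name]
    {
        get
        {
            Command command;
            if (!commands.TryGetValue(name, out command))
            {
                // First request for this name creates the command
                command = new Command(name);
                commands[name] = command;
            }
            return command;
        }
    }
}
```

The upshot is that an expression like `Commands["HelloCommand"]` never fails: it either finds the command or creates it, and repeated lookups return the same instance.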

Next we obtain a reference to our button (a ToolStripItem), and assign an ‘Invoker’ to the Command object:

            ToolStripItem toolStripItem = this.Shell.mainToolStrip.Items["toolStripButton1"];
            helloCommand.AddInvoker(toolStripItem, "Click");

In our Command design pattern discussion above we saw that an invoker is any piece of code that calls the Execute method on a Command object. Here we are saying that the toolStripItem’s Click event is an invoker for the helloCommand object. That is, when the event fires the Execute method on the Command will get called.

In simple terms we are just hooking up the Click event to our Command object.

Now we want to set up a receiver for the Command. Remember that a Receiver, if we have one, is code that actually does the work for the command. In the CAB we always have a Receiver: we never put the underlying functionality into the Command object. Note that this means the Command class itself is just plumbing code, and as developers we don’t need to change it.

We do this by simply applying the CommandHandler attribute to a method with an appropriate signature as below:

        [CommandHandler("HelloCommand")]
        public void HelloHandler(object sender, EventArgs e)
        {
            MessageBox.Show("Hello world using CAB commands");
        }

For this to work the object the code is in must be in a WorkItem collection of some kind. Of course, this is true of most of the CAB functionality as we’ve seen before. Here we really are using WorkItems as Inversion of Control containers. We set up the code as above and the framework calls us back as appropriate.

In the example code the command handler is in the code behind the Shell Form, which as we’ve seen before is in the WorkItem’s Items collection.

That’s all there is to it. Now when we click our toolstrip button the handler gets called and ‘Hello world’ gets displayed.

Points to Note

  1. The command handler’s method signature has to be as shown. It has to be public, and it has to have object and EventArgs parameters. I’ll discuss the parameters further below.
  2. There’s no ICommand interface in the CAB’s implementation. The interface to a Command object is just the default public interface on the Command class.
  3. As we’d expect, the Command object also has a RemoveInvoker method that lets us detach a command from its invoker.

Why Are We Doing This?

An obvious question at this point is ‘why would we want to do this?’ After all, we already have a perfectly good way of handling menu and toolstrip click events in .NET. Using a Gang of Four design pattern is nice, but is it giving us any real advantages?

We discussed one advantage of the Command pattern approach above. We can easily swap one command for another. This is particularly useful, for example, if we set up a standard toolbar that is going to be used with slightly different functionality for multiple screens in an application.

Another advantage is that our command handlers don’t have to be in the same class as the toolbar or menu they are supporting. This can make the code a lot cleaner. For example, suppose you have a Help/About menu item that shows a dialog, and you want the same functionality on all your menus. With .NET events you’d have to mess around setting up all the handlers to work correctly, probably with some singleton class to actually accept the calls. With the CAB command approach you can just set up a class with command handlers in it, add it to a WorkItem collection (Items) and then just call AddInvoker for every menu you want hooked up.

In fact the Command pattern lets us write incredibly simple and powerful menu systems for enterprise applications. We can do this in a way that is difficult to do with the standard event approach. I will write specifically about this at a later date, as there seems to be a lot of confusion about how you’re meant to set up menus using the CAB/SCSF.

Command Status

One other useful thing that you can do with CAB commands is to enable, disable or hide the ToolStrip buttons or menu items associated with a command, simply by setting the Status property on the associated command (the one that is invoked when you click the button).

An example of this is available. This has two buttons on a ToolStrip on its Shell Form. If you click the ‘Enable/Disable CAB Command’ button the command handler below will run:

        public void EnableDisableHelloCommandHandler(object sender, EventArgs e)
        {
            Command helloCommand = _workItem.Commands["HelloCommand"];
            if (helloCommand.Status == CommandStatus.Enabled)
            {
                helloCommand.Status = CommandStatus.Disabled;
                // Change this to the line below if you want to hide the cabCommandToolStripButton
                //helloCommand.Status = CommandStatus.Unavailable;
            }
            else
            {
                helloCommand.Status = CommandStatus.Enabled;
            }
        }
As you can see this gets a reference to the helloCommand command in the Commands collection. This is the command associated with another button on the ToolStrip. The handler just flips the Status associated with the command from CommandStatus.Enabled to CommandStatus.Disabled or vice-versa. If you run this code you will see that this disables or re-enables the associated button.

Command Handler Parameters

Note that our command handler has the usual .NET event parameters. These are sender (an object) and e (EventArgs). However, if you put a breakpoint in the command handler you’ll see that sender is the Command object, and e is set to EventArgs.Empty. It isn’t possible to pass other values to these parameters if you are using commands.

In the discussion on the Command design pattern above we saw that we don’t pass any parameters to our Execute method. This is true of the Execute method in the CAB framework as well. However, behind the scenes the CAB uses normal .NET events to implement the Command pattern, and this allows it to pass more normal parameters to the command handler.

Note also that this is not significantly different from the .NET events we would normally use with a ToolStripItem. If we set up a normal event handler this has the same parameters, but again the EventArgs parameter is always set to EventArgs.Empty, and the object parameter is set to the ToolStripItem. Again, there’s no easy way of passing other values to these parameters.

When you are using .NET events in this way it can be useful to have access to the ToolStripItem that raised the event, which you can do via the object parameter. This is more difficult with commands. We can access a list of invokers for the command, and get the associated ToolStripItems from the list (although even this is difficult). However, we don’t necessarily know which of the invokers called the command handler.

Where’s the Execute Method?

In the examples here we haven’t seen any reference to an Execute method, even though this seems to be a key part of the Command design pattern as described above.

Rest assured that behind the scenes in the CAB code the Command object DOES have an Execute method that gets called by the invoker. However, all we needed to do in our example was to set up the invoker with some simple syntax to get it to call the Execute method. We didn’t need to call it ourselves.

We can call the Execute method on our Command object directly from code, as this example shows. This lets us use commands in other ways than the standard invoker example seen above.
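To make the invoker/Execute/handler chain concrete, here is a minimal self-contained sketch. It is not the CAB’s implementation (the Command and Button classes below are simplified stand-ins), but it mirrors the moving parts: AddInvoker subscribes to a named .NET event via reflection, the event firing calls Execute, and Execute calls the handlers, which see the command itself as sender and EventArgs.Empty, as discussed above:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Simplified stand-in for a CAB-style command
public class Command
{
    private readonly List<EventHandler> handlers = new List<EventHandler>();

    // Register a receiver: the code that does the actual work
    public void AddHandler(EventHandler handler) { handlers.Add(handler); }

    // Hook a named .NET event on any object up as an invoker: when the
    // event fires, Execute is called
    public void AddInvoker(object invoker, string eventName)
    {
        EventInfo eventInfo = invoker.GetType().GetEvent(eventName);
        EventHandler onFired = (sender, e) => Execute();
        eventInfo.AddEventHandler(invoker, onFired);
    }

    // Invokers call Execute; Execute calls the registered handlers.
    // Note the handlers see the Command as sender and EventArgs.Empty.
    public void Execute()
    {
        foreach (EventHandler handler in handlers)
            handler(this, EventArgs.Empty);
    }
}

// Simplified stand-in for a button with a Click event
public class Button
{
    public event EventHandler Click = delegate { };
    public void PerformClick() { Click(this, EventArgs.Empty); }
}
```

Here `button.PerformClick()` stands in for the user clicking a ToolStrip button. Calling `command.Execute()` directly from code also runs the handlers, which is all the ‘direct call’ usage described above amounts to.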

Which Events can we use as Invokers?

We’ve seen in our example that a ToolStrip button ‘Click’ event can be an invoker for a command. As mentioned above, the key use of commands is intended to be for menu systems, where they are very powerful.

However, we can only use the syntax in the examples above for standard .NET events on ToolStripItems and anything that derives from the Control class. If we want to hook anything else up as an invoker we need to do a little more work. Behind the scenes the CAB is using something called a CommandAdapter to hook up these events as invokers to our commands. The only CommandAdapters that are registered by default are those for ToolStripItem and Control.

We can use standard .NET events on one of our own classes as an invoker. However, to do this we need to create a CommandAdapter and tell the CommandAdapterMapService that it relates to our own class. Fortunately there is a generic EventCommandAdapter<> class that we can use (with our class type as the generic), rather than having to write our own CommandAdapter class.

A full code example of how to do this is available.

We would have to write our own CommandAdapter class if we wanted to use something other than a .NET event as an invoker (and still wanted to use the CommandHandler etc pattern). To do this we inherit the abstract base class CommandAdapter and override its abstract methods (which, predictably, include AddInvoker and RemoveInvoker).


CAB commands provide a powerful way of setting up flexible menus. However, using the Command pattern in a more general way can be a little confusing. Writing your own CommandAdapter class, or indeed using your own events as invokers as illustrated above, isn’t necessarily straightforward. Also, as we shall see in part 11, CAB events are probably better suited for this sort of thing. It may be better to think of CAB commands as primarily something you use to get powerful menu systems, and to move on.

Creating and Using Services in the CAB (Introduction to the CAB/SCSF Part 8)


Part 7 of this series of articles gave us a general introduction to services in the CAB. This article will go into more detail on the various ways we can create and use such services.

Ways of Creating a Service

We start with the various ways services can be created. This can be done with the various ‘Add’ methods, with XML configuration files or by using the ‘Service’ attribute.

Ways of Creating a Service (1) – Add Methods

In the basic example in part 7 we used the AddNew method to create a service:

            RootWorkItem.Services.AddNew<MyService, IMyService>();
We have seen this before: it both instantiates the object and adds it to the collection. As before, we can also add objects that already exist to the Services collection with the Add method.

The Services collection also has an ‘AddOnDemand’ method. If we use this in place of AddNew in the example in part 7 the service does not immediately get created (the MyService class is not instantiated). Instead a placeholder is added to the Services collection until such time as some client code retrieves the service (using the same syntax as before). When this happens the service object will get instantiated so that it can be used. This example shows this:

            // Use AddOnDemand to set up the service: the MyService constructor
            // is not called
            RootWorkItem.Services.AddOnDemand<MyService, IMyService>();
            // When we display the Services collection we can see there's a placeholder
            // for MyService in there
            // Only when we use .Get to retrieve the service is MyService actually
            // instantiated (note we have code in MyService to show when the constructor
            // is called by writing to the Output window)
            IMyService service = RootWorkItem.Services.Get<IMyService>();
            // Now our Services collection has a fully fledged MyService service available

There are also Contains and Remove methods on the Services collection. Remember we can only have one service of a given type: if a service already exists and we want to replace it these methods can be useful.
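The placeholder behaviour of AddOnDemand can be sketched in plain C# using Lazy<T>. The ServiceCollection and MyService classes below are simplified, hypothetical stand-ins, not the CAB’s actual implementation; the point is only that the factory is stored immediately but the constructor runs on first retrieval:

```csharp
using System;
using System.Collections.Generic;

// Stand-in service with a counter so we can observe when it is constructed
public class MyService
{
    public static int ConstructorCalls = 0;
    public MyService() { ConstructorCalls++; }
    public string GetHello() { return "Hello World"; }
}

// Simplified stand-in for the CAB's Services collection
public class ServiceCollection
{
    private readonly Dictionary<Type, Lazy<object>> services =
        new Dictionary<Type, Lazy<object>>();

    public void AddOnDemand<TService>() where TService : new()
    {
        // Only a placeholder (the factory) is stored here:
        // TService's constructor does not run yet
        services[typeof(TService)] = new Lazy<object>(() => new TService());
    }

    public TService Get<TService>()
    {
        // First access triggers construction; later accesses reuse the instance
        return (TService)services[typeof(TService)].Value;
    }
}
```

With this sketch, `AddOnDemand<MyService>()` leaves `MyService.ConstructorCalls` at zero; only the first `Get<MyService>()` bumps it to one, and further Get calls reuse the same instance.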

Ways of Creating a Service (2) – XML Configuration File

It is also possible to create services using the app.config file. To do this in our simple example we just take out the line:

            RootWorkItem.Services.AddNew<MyService, IMyService>();
Then in an App.Config file we add a ‘services’ section with an ‘add’ element to a CompositeUI config section as below:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="CompositeUI" type="Microsoft.Practices.CompositeUI.Configuration.SettingsSection, Microsoft.Practices.CompositeUI" allowExeDefinition="MachineToLocalUser" />
  </configSections>
  <CompositeUI>
    <services>
      <add serviceType="Shell.MyService, Shell" instanceType="Shell.MyService, Shell" />
    </services>
  </CompositeUI>
</configuration>

The code for this is available.

In general I’m not a fan of writing code in XML if there are proper C# alternatives. Here the XML is certainly less transparent than the one-line C# equivalent, and as usual debugging becomes more difficult with XML. However, one potential advantage of using the configuration file is that we could in theory change our service at runtime without having to recompile the code.

Ways of Creating a Service (3) – the Service Attribute

As mentioned previously, we can create a service simply by decorating our concrete class with the ‘Service’ attribute. We can register the service with a separate interface (as in the section ‘Splitting the Interface from the Implementation’ in part 7) by providing a positional type parameter. We can also make our service one that gets added on demand to the Services collection by adding a named boolean parameter called ‘AddOnDemand’. These attributes are illustrated below:

    [Service(typeof(IMyService), AddOnDemand=true)]
    public class MyService : IMyService
    {
        public string GetHello()
        {
            return "Hello World";
        }
    }

If we declare our service class in this way we have no need to explicitly add it to the Services collection before using it. There’s also no need to explicitly instantiate the class. Just adding the attribute ensures that when the code runs the service will get set up. The code showing this working is available.

Why the Service Attribute is Unusual

The ‘Service’ attribute is in some ways quite different from other attributes we’ve seen used with the CAB. Most CAB attributes only work for objects that are already in a collection associated with a WorkItem. For examples see the discussion about ComponentDependency, ServiceDependency and CreateNew in part 5 of this series of articles. In particular CreateNew will only work on a setter if that setter is in an object that is already in a WorkItem collection. We can’t just put CreateNew in any old class and expect it to work.

In contrast the Service attribute will work with ‘any old class’, provided it’s in a module (see part 1 for a discussion of modules). The Service attribute really couldn’t work any other way. The attribute when applied to a class is telling the CAB to add an object of that type to the Services collection of the WorkItem. It wouldn’t make much sense if it only worked if the object was already in a collection of the WorkItem.

Where the CAB is Looking for the Service Attribute

So how does the CAB find these Service objects and use them? The answer is that when a module loads the CAB uses reflection to find all public classes in the assembly which have the ‘Service’ attribute applied. All of these classes get instantiated and added in to the Services collection of the root WorkItem of the CAB application.

Note that the CAB only scans assemblies that are explicitly listed as modules (in ProfileCatalog.xml usually). An assembly won’t get scanned if it’s just referenced from a module project.

Drawbacks of the Service Attribute

One problem with this is that we don’t have a lot of control over where the service gets created. Our new service always gets added to the root WorkItem, meaning we can’t create services at a lower level in the WorkItem hierarchy. Another problem is that we have no control over when our service is created: in particular we have no way of ensuring that our services are created in a specific order.

My personal opinion is that setting up services using the Service attribute can be a little confusing. The services appear magically as if from nowhere. If we explicitly create the service and add it to the appropriate WorkItem we have more control and what we are doing is more transparent.

Ways of Retrieving a Service

There are two main ways of retrieving a service. We have already seen examples of these, but a recap is given below.

Ways of Retrieving a Service (1) – Get Method

In the basic example in part 7 we used the Get method of the Services collection to retrieve MyService. For example, the code below is taken from the final example (‘Splitting the Interface from the Implementation’):

        private void UseMyService()
        {
            IMyService service = RootWorkItem.Services.Get<IMyService>();
        }

Ways of Retrieving a Service (2) – Dependency Injection

We can also retrieve a service via dependency injection by using the ServiceDependency attribute. We saw some examples of this in part 5.

To set up a service in a class we decorate a setter of the appropriate type with the ServiceDependency attribute. The class can then use the service:

    public class ServiceClient
    {
        private IMyService service;

        [ServiceDependency]
        public IMyService Service
        {
            set { service = value; }
        }

        internal string UseMyService()
        {
            return service.GetHello();
        }
    }

As discussed previously, the CAB looks for the ServiceDependency attribute when an object of type ServiceClient is added to one of the WorkItem collections. When that happens the CAB looks for a service of type IMyService in the Services collection of the WorkItem. When it finds one it retrieves it and sets it on the ServiceClient object by calling the setter.

So to set up this class we need to ensure that an IMyService service has been created, and then we can just create a ServiceClient object in our WorkItem:

            // Create the service
            RootWorkItem.Services.AddNew<MyService, IMyService>();
            // Add a ServiceClient object to our Items collection:
            // this causes the CAB to inject our service into the ServiceClient
            // because it has a setter decorated with ServiceDependency
            ServiceClient serviceClient = RootWorkItem.Items.AddNew<ServiceClient>();

Now we can call the service on the ServiceClient object:

            string hello = serviceClient.UseMyService();
The code for this example is available.
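Conceptually, what the CAB does when the ServiceClient is added to the Items collection can be sketched in plain C# with reflection. The attribute, interface and classes below are simplified stand-ins for the CAB’s types, not the real implementation; the sketch just shows the idea of scanning for decorated setters and assigning a matching service:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Stand-in for the CAB's ServiceDependency attribute
[AttributeUsage(AttributeTargets.Property)]
public class ServiceDependencyAttribute : Attribute { }

public interface IMyService { string GetHello(); }

public class MyService : IMyService
{
    public string GetHello() { return "Hello World"; }
}

public class ServiceClient
{
    private IMyService service;

    [ServiceDependency]
    public IMyService Service { set { service = value; } }

    public string UseMyService() { return service.GetHello(); }
}

public static class Injector
{
    // Conceptually what the CAB does when an object is added to a WorkItem
    // collection: find public setters decorated with the attribute and
    // assign a service of the matching type from the container
    public static void Inject(object target, IDictionary<Type, object> services)
    {
        foreach (PropertyInfo prop in target.GetType().GetProperties())
        {
            if (prop.CanWrite &&
                prop.IsDefined(typeof(ServiceDependencyAttribute), true))
            {
                prop.SetValue(target, services[prop.PropertyType], null);
            }
        }
    }
}
```

The CAB performs this scan for you when the object enters a WorkItem collection, which is why in the real example we never call anything like `Injector.Inject` ourselves.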

We can also use the ServiceDependency attribute with constructor injection as discussed in part 6. This is a simple change to the ServiceClient class in the example above:

    public class ServiceClient
    {
        private IMyService service;

        public ServiceClient([ServiceDependency]IMyService service)
        {
            this.service = service;
        }

        internal string UseMyService()
        {
            return service.GetHello();
        }
    }

The code for this example is also available.

Finding Services Higher Up the Hierarchy

As already discussed, if the CAB can’t find a service in the Services collection of the current WorkItem it will look in the Services collections of parent WorkItems. We can illustrate this by adding a new WorkItem called ‘testWorkItem’ to our basic example from part 7. We still add our service to the RootWorkItem:

        WorkItem testWorkItem = null;

        protected override void AfterShellCreated()
        {
            testWorkItem = RootWorkItem.WorkItems.AddNew<WorkItem>();
            RootWorkItem.Services.AddNew<MyService, IMyService>();
        }

        private void UseMyService()
        {
            IMyService service = testWorkItem.Services.Get<IMyService>();
        }

When we come to use the service in UseMyService (immediately above) we try to retrieve it from the testWorkItem. The code still works even though the service isn’t in testWorkItem’s Services collection: the CAB retrieves it from the parent RootWorkItem. Once again the code for this example is available.
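The walk up the hierarchy can be sketched in plain C# with a simplified, hypothetical WorkItem that only does service lookup (again, this is an illustration of the behaviour, not the CAB’s code):

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-in WorkItem: GetService checks the local services
// first, then walks up to the parent, returning null if nothing is
// found anywhere in the chain
public class WorkItem
{
    private readonly WorkItem parent;
    private readonly Dictionary<Type, object> services =
        new Dictionary<Type, object>();

    public WorkItem() : this(null) { }
    public WorkItem(WorkItem parent) { this.parent = parent; }

    public WorkItem AddChild() { return new WorkItem(this); }

    public void AddService<T>(T service) { services[typeof(T)] = service; }

    public T GetService<T>() where T : class
    {
        object service;
        if (services.TryGetValue(typeof(T), out service))
            return (T)service;
        // Not found locally: delegate to the parent, or give up at the root
        return parent == null ? null : parent.GetService<T>();
    }
}
```

So a child WorkItem with an empty Services collection can still resolve a service registered on the root, which is exactly what the testWorkItem example above relies on.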

Services Not Found

If the CAB attempts to retrieve a service and can’t find it at all it usually does not throw an exception. It simply returns null. Consider the changes to our basic example from part 7 below:

        protected override void AfterShellCreated()
        {
            //RootWorkItem.Services.AddNew<MyService, IMyService>();
        }

        private void UseMyService()
        {
            // There's no IMyService available, so the CAB sets service = null below
            IMyService service = RootWorkItem.Services.Get<IMyService>();
            // We get a NullReferenceException when we try to use the service
            string hello = service.GetHello();
        }

Here we have commented out the line that creates the service so it never gets created. As a result the call to ‘Get’ the service returns null, and we get a NullReferenceException when we try to call GetHello.

This may not be the behaviour we want. It may be better to throw an exception as soon as we know the service does not exist, before we attempt to use it. Fortunately the Get method is overloaded to allow us to do this. It can take a boolean argument, EnsureExists, which if set to true causes a ServiceMissingException to be thrown immediately if the service cannot be retrieved:

IMyService service = RootWorkItem.Services.Get<IMyService>(true);

The code for this example is available.


This article has shown us how to use services in the CAB in some detail. The next two articles will examine commands in the CAB: part 9 will recap the Command design pattern, and part 10 will explain how this is implemented for menus using commands in the CAB.