Book Review: ‘Programming Microsoft Composite UI Application Block and Smart Client Software Factory’ by David S Platt (Microsoft Press)

I was looking forward to this book being published as there really isn’t very much documentation available for either the Composite UI Application Block or the Smart Client Software Factory. Microsoft’s own documentation is quite weak, and to use these technologies you find yourself repeatedly referring to the code itself or community blogs and websites.

However I have to say Platt’s book isn’t the answer to these problems. Firstly it’s very short. It’s true that there are nearly 200 pages, but there’s a lot of white space, big diagrams and padding throughout the book. The CAB/SCSF is now quite a large and complex piece of software and inevitably Platt can only skim the surface of the technology in such a short book.

Secondly, Platt makes no real attempt to explain some of the core concepts behind the CAB/SCSF. For example, the CAB uses dependency injection and DI containers heavily, and many Microsoft developers will not have met these concepts before; Platt never really explains what they are or why we might want to use them in a smart client application. This is a criticism that can also be levelled at the Microsoft documentation. Platt is better on why we might use the CAB to achieve loose coupling between the parts (‘modules’) of a smart client application. But in general developers coming to the CAB struggle with the concepts more than the code, and Platt has focused heavily on the code.

Thirdly, Platt is quite selective about which parts of the framework he covers. For example, there is a chapter on the Action Catalog, which is quite an esoteric part of the SCSF technology. However, there’s no real discussion of WorkItem State, which is a much more central concept and causes a lot of confusion. The Action Catalog is just one of several new services in the latest SCSF, and Platt doesn’t discuss the others (e.g. WorkspaceLocator, EntityTranslator) in the same detail.

On the plus side the book is an easy read, and it does have a strong introduction where in 30-odd pages Platt gives a good initial overview of the subject. On the subjects he does cover Platt is factually accurate and informative. Having read the book I do feel I have a better understanding of how the technology works.

In the absence of any real alternative, and given that it is quite cheap, this book is worth purchasing for a quick read to give you a selective overview of the subject. But it is too short and unfortunately it’s far from being the definitive guide I was hoping for.


A Beginner’s Guide to calling a .NET Library from Access


In an earlier blog article I described how to call a .NET Library from Excel. I have subsequently received several requests for a similar article dealing with calling .NET from Microsoft Access. This article addresses those requests.

In fact the process and issues are almost identical, which means the two articles overlap heavily. Rather than continually referring to the earlier article, however, I have here included sections from that article verbatim. If you’ve worked through the earlier article you really don’t need to work through this one as well. However, if you are interested in Access and not Excel, this is the place to start.

As with Excel, it’s actually very easy to call a .NET library directly from Access, particularly if you are using Visual Studio 2005, and you don’t need Visual Studio Tools for Office. This article explains how to do this.

A Basic Walk Through

We’ll start by walking through a very basic example. We’ll get Access to call a .NET method that takes a string as input (for example “ World”) and returns “Hello” concatenated with that input string (so, for example, “Hello World”).

1. Create a C# Windows class library project in Visual Studio 2005 called ‘DotNetLibrary’. It doesn’t matter which folder this is in for the purposes of this example.

2. To call a method in a class in our library from Access we simply need a class containing any methods we want to call. For this walk through just copy and paste the following code into our default class file:

using System;
using System.Collections.Generic;
using System.Text;

namespace DotNetLibrary
{
    public class DotNetClass
    {
        public string DotNetMethod(string input)
        {
            return "Hello " + input;
        }
    }
}

That’s it: if you look at existing articles on the web, or read the MSDN help, you might think you need to use interfaces, or to decorate your class with attributes and GUIDs. However, for a basic interop scenario you don’t need to do this.

3. Access is going to communicate with our library using COM. For Access to use a COM library there need to be appropriate entries in the registry. Visual Studio can generate those entries for us.

To do this bring up the project properties (double-click ‘Properties’ in Solution Explorer). Then:
i) On the ‘Application’ tab click the ‘Assembly Information…’ button. In the resulting dialog check the ‘Make assembly COM-visible’ checkbox. Click ‘OK’.
ii) On the ‘Build’ tab check the ‘Register for COM interop’ checkbox (towards the bottom: you may need to scroll down).

4. Build the library.

5. Start Access and create a new blank Access database. Call it anything you like. Open the VBA code editor. To do this in Access 2007 go to the Database Tools tab on the ribbon, and then click ‘Visual Basic’ at the left end. In earlier versions of Access go to Tools/Macro/Visual Basic Editor.

6. We now need to include a reference to our new library. Select ‘References’ on the Visual Basic Editor’s ‘Tools’ menu. If you scroll down in the resulting dialog you should find that ‘DotNetLibrary’ is in the list. Check the checkbox alongside it and click ‘OK’.

7. Now add a new code module. You can do this with the Insert/Module command on the menu. Paste the VBA code below into the code window for the module:

Private Sub TestDotNetCall()
    Dim testClass As New DotNetClass
    MsgBox testClass.DotNetMethod("World")
End Sub

8. Click anywhere in the code you’ve just pasted in and hit ‘F5’ to run the code. You should get a ‘Hello World’ message box.

Getting Intellisense Working in Access

Whilst the VBA code above compiles and executes, you will discover that intellisense is not working in the code editor. This is because by default our library is built with a late binding (run-time binding) interface only. The code editor therefore doesn’t know about the types in the library at design time.

There are good reasons for exposing only a late-bound interface by default: versioning COM libraries with early-bound interfaces can be difficult. In particular, if you change an early-bound interface, for example by adding a method between two existing methods, you are likely to break existing clients, since they bind based on the order of the methods in the interface.

For similar reasons you are heavily encouraged to code your interface separately as a C# interface and then implement it on your class, rather than using the default public interface of the class as here. You then should not change that interface: you would implement a new one if it needed to change.


However, we can build our library to use early bound interfaces, which means intellisense will be available. To do this we need to add an attribute from the System.Runtime.InteropServices namespace as below:

using System;
using System.Collections.Generic;
using System.Text;
using System.Runtime.InteropServices;

namespace DotNetLibrary
{
    [ClassInterface(ClassInterfaceType.AutoDual)]
    public class DotNetClass
    {
        public DotNetClass() { }

        public string DotNetMethod(string input)
        {
            return "Hello " + input;
        }
    }
}

If you change your code as above it will expose an ‘AutoDual’ class interface to COM. The late-bound interface is still exposed as before, but an early-bound interface is now exposed as well, and as a result intellisense will work.

To get this working:

1. Close Microsoft Access, remembering to save your new code module first. Access locks the DotNetLibrary DLL and will prevent Visual Studio from rebuilding it while it is open.

2. Go back into Visual Studio, change the DotNetClass as shown above, and rebuild the library.

3. Re-open your Access database. Once again if you are using Access 2007 there is an extra step: you need to explicitly enable macros. A warning bar will appear beneath the ribbon saying ‘Certain content in the database has been disabled’. Click the ‘Options’ button next to this, select ‘Enable this content’, and click OK.

4. Access can get confused about the interface changes unless you re-reference the library. To do this go to Tools/References. The DotNetLibrary reference should be near the top of the list now. Uncheck it and close the window. Now open the window again, find the library in the list, and re-check it (trust me, you need to do this).

5. Now run the code and it should still work (put a breakpoint in the routine and hit F5).

6. Enter a new line in the routine after the ‘MsgBox’ line, and type ‘testClass.’. When you hit the ‘.’ you should get an intellisense dropdown which shows that DotNetMethod is available. See below.

Intellisense in Access

Let me re-iterate that this works and is fine for development, but for release code you are better off using the default late binding interfaces unless you understand the full versioning implications. That is, you should remove the ClassInterface attribute from your code when you do a release.


Registering the Library for Deployment

In the example here we are using Visual Studio to register our .NET assembly on the workstation so that Access can find it via COM interop. However, if we try to deploy this application to client machines we’re not going to want to use Visual Studio.

Microsoft have provided a command-line tool, regasm.exe, which can be used to register .NET assemblies for COM interop on client workstations. It can also be used to generate a COM type library (.tlb) separate from the main library (.dll), which is considered good practice in general.

As usual with .NET assemblies you have the choice of strong-naming your assembly and installing it in the GAC, or of not strong-naming it and including it in a local path. If you have strong-named your assembly and installed it in the GAC all you need to do is bring up a Visual Studio 2005 command prompt and run:

regasm DotNetLibrary.dll

If you have not strong-named your assembly you need to tell regasm.exe where it is so that it can find it to register it. To do this you need to run the command below, where c:\AccessDotNet is the path where DotNetLibrary.dll can be found. This works fine, although it will warn you that you should really strong-name your assembly:

regasm /codebase c:\AccessDotNet\DotNetLibrary.dll

Note that you can unregister an assembly with the /u option of regasm.


Debugging into .NET from Access
You may want to debug from Access into your class library. To do this:

1. Using Visual Studio 2005 bring up the Properties window for the class library.

2. Go to the Debug tab and select the ‘Start external program’ option under ‘Start Action’. In the textbox alongside enter the full path including file name to MSAccess.exe for the version of Access you are using (usually in Program Files/Microsoft Office/Office or similar).

3. On the same Debug tab under ‘Command line arguments’ enter the full path including file name to your test database (the .mdb file, or .accdb if you are using Access 2007). Once you’re done it should look something like below:

Project Properties for Access

4. Now put a breakpoint in the code (in our example the sensible place is in method DotNetMethod) and hit F5 in the .NET project. The .NET code should compile and Access should start with your database opened. If you now run the VBA code to call the .NET library again, as above, you should find that the code will break at the breakpoint you set in the .NET code.


The original Excel article is currently the most popular article on this blog. If anyone has any feedback on further COM interop topics they would like to see covered please post a comment. Possible topics include marshalling, interface types, or going the other way (calling Excel or Access from .NET).


Index page from MSDN

More on COM Interop from COM clients into .NET:

A COM Class Wizard for C#

Guidelines for COM Interoperability from .NET

In Defense of regasm /codebase

Excel/.NET versioning problems

Dependency Injection and the Composite Application Block (Introduction to CAB/SCSF Part 5)


In part 1 of this series of articles I described a simple CAB application. This had three Windows Application C# projects with no references to each other. In spite of this, with some very simple code we could get all three to launch their individual screens. That very simple application didn’t have the projects interacting in any other way, however.

Part 2 of the series described WorkItems, which can be thought of as containers for code, and how we could add a WorkItem to each of our projects in a hierarchy.

Part 3 introduced dependency injection as a way of structuring our code so that our class structure was loosely coupled and behaviour could be easily changed by changing which class was ‘injected’ into another.

In this article I will bring all of these ideas together and explain how dependency injection works in the CAB.

The Problem

We want to get our three projects from part 1 (Red, Blue and Shell) to interact with each other without having them reference each other. As discussed in part 2, WorkItems are designed to allow us to do this: we can put a WorkItem in each project, put code into their various collections, share the WorkItems and thus share the code.

But how does one project know about the WorkItem from another project? Bear in mind that there are no direct references between the projects. This could be done manually in code using reflection, of course. But the CAB framework gives us a much cleaner way to do this.

Dependency Injection and the CAB

The answer is we can use dependency injection to inject a WorkItem from one project or ‘module’ into another.

This is clearly an appropriate thing to do here: we want loose coupling between our modules and flexibility to change how they interact. As I’ve discussed in another article, in extreme cases we might have different development teams responsible for different modules, with different release cycles. Using dependency injection one team could change a class that’s injected and thus change the behaviour of another module without that module needing to be re-released.

However, unlike in my example in part 3, dependency injection in the CAB doesn’t use configuration files to specify which class should be used. Instead attributes are used to tell the code that a dependency needs to be injected.


This is most easily seen with an example. We have already seen that a root WorkItem is created in our CAB application at start up. We have also seen that all modules listed in the ProfileCatalog.xml file will get loaded at the start up of a CAB application, and that a Load() method in a ModuleInit class gets called in each module.

We want a reference to the root WorkItem in a module that is not the shell. We can achieve this by putting a setter for a WorkItem in our ModuleInit class for the module, along with an attribute:

        private WorkItem parentWorkItem;

        [ServiceDependency]
        public WorkItem ParentWorkItem
        {
            set { parentWorkItem = value; }
        }

As you can see we decorate the setter with the attribute ‘ServiceDependency’. This tells the CAB framework that when it is loading this module it should look for an appropriate WorkItem to ‘inject’ into this setter.

If we put this code into the RedModuleInit class in our example, and put a breakpoint in the setter we can see that the root WorkItem is being passed into here at start up and stored in the parentWorkItem variable.

How is this Working (1)?

You may wonder how the CAB knows what to inject and where to inject it here. After all there may be multiple WorkItems in our project: which one should it choose? Furthermore we can inject different types (i.e. not WorkItems) in a similar way. If we have several instantiated classes of the same type how do we inject a specific one? And how does the CAB find the ServiceDependency attribute? Does it scan all classes in all modules?

I’m going to leave these issues for now: just accept that the root WorkItem gets injected in this case. I’ll return to this later in this article.

Red and Blue Forms Application

So we can get a reference to the root WorkItem as above. In our naïve CAB application from part 1 we’d quite like to tell the red and blue forms in the modules to load as MDI children into the shell form.

We can do this by firstly adding the shell form to the Items collection of the root WorkItem. Then if the root WorkItem is available in our Red and Blue projects we can access the shell form through the Items collection.

There’s an AfterShellCreated event of the FormShellApplication class that we can override in our program class to add the shell form to the Items collection:

    public class Program : FormShellApplication<WorkItem, Form1>
    {
        static void Main()
        {
            new Program().Run();
        }

        protected override void AfterShellCreated()
        {
            this.Shell.IsMdiContainer = true;
            RootWorkItem.Items.Add(this.Shell, "Shell");
        }
    }

Note that the shell gets a name in the Items collection (“Shell”). Note also that we’re making the shell form into an MDIContainer here, accessing it via the Shell property of the FormShellApplication class.

In the Load method of our modules we can now retrieve the shell form and set it to be the MDIParent of our red and blue forms. So our ModuleInit class looks as below:

    public class RedModuleInit : ModuleInit
    {
        private WorkItem parentWorkItem;

        [ServiceDependency]
        public WorkItem ParentWorkItem
        {
            set { parentWorkItem = value; }
        }

        public override void Load()
        {
            Form shell = (Form)parentWorkItem.Items["Shell"];
            Form1 form = new Form1();
            form.MdiParent = shell;
            form.Show();
        }
    }

If we now run the application our red and blue forms will appear as MDI children of the main shell.

The code for this is available. By the way you should know that there are better ways of setting up an MDI application in the CAB: this example is intended to just show the basic concepts of dependency injection.

How is this Working (2)?

Earlier in this article I posed several questions about how all this could be working. I’ll attempt to answer those questions now.

As discussed earlier, WorkItems are generic containers for code to be passed between modules, and are capable of being arranged in a hierarchy. But in addition they are actually ‘Inversion of Control containers’ or ‘Dependency Injection containers’. I mentioned these in part 4 of this series of articles. However, I’ve rather glossed over them up until now. Note that both Spring and PicoContainer use containers to control their dependency injection.

WorkItems as Dependency Injection Containers

These containers work in the CAB as follows. Suppose we want to inject object A into object B. The dependency injection only happens when object B is added into an appropriate collection on a WorkItem. This can be on creation of the object if we create object B with the AddNew method, or it can happen with an existing object if we use the Add method to add it to a WorkItem collection.

Furthermore normally the injection can only work if object A is already in an appropriate collection of the same WorkItem. The exception is if we are using the ‘CreateNew’ attribute (see below). In this case object A will be created and added to the Items collection of the WorkItem before being injected.

As you can see, in a way dependency injection in the CAB is ‘scoped’ to a WorkItem.
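The add-time, container-scoped injection rule can be sketched in language-neutral pseudocode. This is a toy illustration only (Python rather than the CAB’s C#, and all the names here are invented, not the real CAB API): an object’s declared dependencies are resolved and injected at the moment it is added to a container, and only from that container’s own collection.

```python
class DependencyMissingError(Exception):
    """Raised when a declared dependency is not in the container."""
    pass

class WorkItem:
    """Toy DI container: injection happens when an object is added."""
    def __init__(self):
        self.items = {}

    def add(self, obj, obj_id):
        # Inject declared dependencies before the object joins the container.
        # Dependencies must already be present in THIS WorkItem's items.
        for attr, dep_id in getattr(type(obj), "dependencies", {}).items():
            if dep_id not in self.items:
                raise DependencyMissingError(dep_id)
            setattr(obj, attr, self.items[dep_id])
        self.items[obj_id] = obj
        return obj

class Component1:
    pass

class Component2:
    # Declares: inject the item with ID "FirstComponent1" into component11
    # (the analogue of a [ComponentDependency("FirstComponent1")] setter).
    dependencies = {"component11": "FirstComponent1"}

work_item = WorkItem()
work_item.add(Component1(), "FirstComponent1")
c2 = work_item.add(Component2(), "SecondComponent")
# c2.component11 is now the Component1 instance added above
```

Adding a Component2 to a container that does not already hold “FirstComponent1” fails, mirroring the CAB’s DependencyMissingException.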

Types of Dependency Injection in the CAB

There are three attributes that can be attached to setters and used for dependency injection in the CAB:

  1. ComponentDependency(string Id)
    This attribute can be used to inject any object that already exists in a WorkItem’s Items collection. However, because we can have multiple objects of the same type in this collection we have to know the ID of the item we want to inject (which is a string). We can specify an ID when we add our object into the collection. If we don’t specify an ID the CAB assigns a random GUID to the item as an ID. Note that if the object does not exist in the appropriate Items collection when we try to inject it then the CAB will throw a DependencyMissingException.
  2. ServiceDependency
    We’ve seen this attribute already. An object must be in the WorkItem’s Services collection to be injected using this attribute. The Services collection can only contain one object of any given type, which means that the type of the setter specifies the object uniquely without the need for an ID. I will discuss Services further in part 6 of this series of articles.
  3. CreateNew
    A new object of the appropriate type will be created and injected if this attribute is attached to a setter. The new object will be added to the WorkItem’s Items collection.

As usual this is best seen with an example.


We set up a CAB project with two component classes. Component1 is just an empty class, whilst Component2 has two private Component1 member variables that will be injected. One will be injected by name (and so needs to be created and added to the WorkItem’s Items collection prior to injection). One will be injected by being created:

    public class Component2
    {
        private Component1 component11;

        [ComponentDependency("FirstComponent1")]
        public Component1 Component11
        {
            set { component11 = value; }
        }

        private Component1 component12;

        [CreateNew]
        public Component1 Component12
        {
            set { component12 = value; }
        }
    }

To use this we put the following code in the AfterShellCreated method of our FormShellApplication class:

        protected override void AfterShellCreated()
        {
            RootWorkItem.Items.AddNew<Component1>("FirstComponent1");
            Component2 component2 = new Component2();
            RootWorkItem.Items.Add(component2);
            DisplayRootItemsCollection();
        }

Notice the syntax of the AddNew command for the Items collection. It’s a generic method: we parameterise the method with a type (here the class Component1), and the method can then do whatever it likes with that type. In this case AddNew instantiates the type and adds the new instance to the Items collection.

As you can see, we create a Component1 object with ID “FirstComponent1” and add it to the Items collection. We then create a Component2 object using the ‘new’ keyword. We would usually do this using AddNew, but I want to demonstrate that we don’t have to do this. Next we add the Component2 object to the Items collection.

At this point the “FirstComponent1” object will be injected into component2 in the setter marked with the “ComponentDependency” attribute. Also another Component1 object will be created and injected into component2 in the setter marked with the “CreateNew” attribute.

Finally in this code we call a routine called DisplayRootItemsCollection:

        private void DisplayRootItemsCollection()
        {
            Microsoft.Practices.CompositeUI.Collections.ManagedObjectCollection<object> coll = RootWorkItem.Items;
            foreach (System.Collections.Generic.KeyValuePair<string, object> o in coll)
            {
                System.Diagnostics.Debug.WriteLine("[" + o.Key + ", " + o.Value + "]");
            }
        }

This just dumps out all the objects in the Items collection to the debug window. The results are as below:

[4e0f206b-b27e-4017-a1b2-862f952686da, Microsoft.Practices.CompositeUI.State]
[14a0b6a2-12a4-4904-8148-c65802af763d, Shell.Form1, Text: Form1]
[FirstComponent1, Shell.Component1]
[4c7e0a20-90b7-42c6-8912-44ecba40523f, Shell.Component2]
[c40a4626-47e7-4324-876a-6bf0bf99c754, Shell.Component1]

As you can see we’ve got two Component1 items as expected, one with ID “FirstComponent1” and one with ID a GUID. And we have one Component2 item as expected. We can also see that the shell form is added to the Items collection, as well as a State object.

The code for this is available, and if you single-step through it you can see the two Component1 objects being injected into component2.

Where Was All This in the Original Example?

Note that in the original example in this article the root WorkItem was injected into a ModuleInit class apparently without the ModuleInit class being added to any WorkItem. This seems to contradict the paragraphs above that say that we can only inject into objects that are put into WorkItems. However, the CAB framework automatically adds ModuleInit classes into the root WorkItem when it creates a module, so we don’t need to explicitly add them ourselves for the dependency injection to work.

Furthermore, the root WorkItem was injected as a ServiceDependency even though it had not been explicitly added to any Services collection. Again this seems to contradict the statements above that any object being injected must be in an appropriate collection. But the code works because any WorkItem is automatically a member of its own Services collection.

You can see this if you download and run this example. It is an extension of the original example that allows you to output both the Items collection and the Services collection to the output window via a menu option. If you do this after the application has loaded you get the output below:

[336ad842-e365-47dd-8a52-215b951ff2d1, Microsoft.Practices.CompositeUI.State]
[185a6eb5-3685-4fa7-a6ee-fc350c7e75c4, Shell.Form1, Text: Form1]
[10d63e89-4af8-4b0d-919f-565a8a952aa9, Shell.MyComponent]
[Shell, Shell.Form1, Text: Form1]
[21ac50d7-3f22-4560-a433-610da21c23ab, Blue.BlueModuleInit]
[e66dee6e-48fb-47f0-b48e-b0eebbf4e31b, Red.RedModuleInit]
[Microsoft.Practices.CompositeUI.WorkItem, Microsoft.Practices.CompositeUI.WorkItem]
…(Complete list truncated to save space)

You can see that both the BlueModuleInit and RedModuleInit objects are in the Items collection in spite of not being explicitly added by user code, and the WorkItem is in the Services collection.


The ObjectBuilder

To understand and use the Composite Application Block you don’t need to understand in detail its underlying code. It’s intended to be used as a framework after all. However, it’s useful to know that the dependency injection here is all done by the ObjectBuilder component.

When we call AddNew or Add on a collection of a WorkItem it’s the ObjectBuilder that looks at the dependency attributes on the class we’re adding and injects the appropriate objects.

The ObjectBuilder is a ‘builder’ in the classic design patterns sense. The builder pattern ‘separates the construction of a complex object from its representation so that the same construction process can create different representations’.

Note that this pattern is often called a ‘factory pattern’, although the factories in the Gang of Four ‘Design Patterns’ book are slightly different things: we’re not creating families of objects (Abstract Factory), or letting ‘subclasses decide which class to instantiate’ (Factory Method).

WorkItems in a Hierarchy and Dependency Injection of Items

As discussed previously, one of the strengths of WorkItems is that multiple instances can be instantiated in different modules, and they can all be arranged in a hierarchy. This is because each WorkItem has a WorkItems collection. However, you should be aware that dependency injection only works for items in the current WorkItem. If you attempt to inject an object in a different WorkItem in the hierarchy into an object in your WorkItem you will get a DependencyMissingException.

We can see this by modifying the AfterShellCreated event of our FormShellApplication in the example using Component1 and Component2 above:

        WorkItem testWorkItem = null;

        protected override void AfterShellCreated()
        {
            testWorkItem = RootWorkItem.WorkItems.AddNew<WorkItem>();
            RootWorkItem.Items.AddNew<Component1>("FirstComponent1");
            // The next line throws an exception as the testWorkItem
            // container doesn't know about FirstComponent1, and Component2
            // is asking for it to be injected.
            testWorkItem.Items.AddNew<Component2>();
        }

Here we add a new WorkItem to our RootWorkItem. We add an instance of Component1 with ID “FirstComponent1” to our RootWorkItem as before. Then we add an instance of Component2 to our testWorkItem.

Remember that Component2 asks for a Component1 object with ID “FirstComponent1” to be injected when it is created. Because the testWorkItem knows nothing about such an object we get an exception thrown.

We can fix the code by adding our Component1 into the testWorkItem instead of the RootWorkItem:

            testWorkItem.Items.AddNew<Component1>("FirstComponent1");

The code for this example is available.

WorkItems in a Hierarchy and Dependency Injection of Services

Services behave differently to the example given above.

We can make Component1 a service by adding it to the Services collection of the RootWorkItem instead of the Items collection, and telling Component2 it’s a ServiceDependency and not a ComponentDependency. Then the code will work. This is because the CAB includes a service locator that looks in all parent WorkItems of the current WorkItem to see if a given service is available. I will discuss this in more detail in part 6.
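The parent-walking service lookup can be sketched in the same toy style as earlier. Again this is Python with invented names, not the real CAB API: services are keyed by type (one instance per type), and an unsatisfied lookup falls back to the parent WorkItem, whereas items are never inherited.

```python
class ServiceMissingError(Exception):
    """Raised when no WorkItem in the chain provides the service."""
    pass

class WorkItem:
    """Toy container: services are found by type, searching up the hierarchy."""
    def __init__(self, parent=None):
        self.parent = parent
        self.services = {}   # type -> instance (at most one per type)
        self.items = {}      # id -> instance (items are NOT inherited)

    def add_service(self, obj):
        self.services[type(obj)] = obj

    def get_service(self, service_type):
        if service_type in self.services:
            return self.services[service_type]
        if self.parent is not None:
            # Not found locally: delegate to the parent WorkItem.
            return self.parent.get_service(service_type)
        raise ServiceMissingError(service_type)

class Component1:
    pass

root = WorkItem()
child = WorkItem(parent=root)
root.add_service(Component1())
# The child WorkItem finds the service registered on its parent:
service = child.get_service(Component1)
```

This is why making Component1 a service of the RootWorkItem satisfies a ServiceDependency in a child WorkItem, while the same object in the Items collection would not be found.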


Conclusion

Dependency injection in the CAB is a powerful tool. It enables us to share code between modules in a loosely-coupled way.

In part 6 of this series of articles I discuss how we can use the CAB to do constructor injection. Part 7 of the series will investigate the Services collection of a WorkItem in some detail.

Random Numbers Problem

Another problem doing the rounds relates to random numbers. Unlike my previous post, this one might make a (difficult) interview question:

Given a function that generates an integer random number between 1 and 5, write another function that generates an integer random number between 1 and 7.

Obviously you are intended to use the first function for the random element in your solution, not the Random class.

I have been unable to come up with an exact solution that is guaranteed to complete in a bounded number of steps. I have written some C# code that either gives an exact solution but can in theory run forever, or gives an inexact solution (although to any level of accuracy you like).
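The ‘exact but potentially unbounded’ approach is classic rejection sampling. A sketch of the idea (in Python rather than the C# of my solutions, with rand5 as a stand-in for the given generator):

```python
import random

def rand5():
    # Stand-in for the given uniform 1..5 generator.
    return random.randint(1, 5)

def rand7():
    # Two rand5() calls give 25 equally likely pairs. Keep the first 21
    # outcomes (21 = 3 * 7) and map them uniformly onto 1..7; reject the
    # other 4 and retry. Each attempt succeeds with probability 21/25,
    # so the loop terminates quickly in practice but has no fixed bound.
    while True:
        n = 5 * (rand5() - 1) + rand5()  # uniform on 1..25
        if n <= 21:
            return (n - 1) % 7 + 1
```

The rejection step is exactly why the running time is unbounded: there is always a (rapidly shrinking) chance of drawing one of the four rejected outcomes again.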

You can download my attempted solutions.

An Aside on Inversion of Control, Dependency Inversion and Dependency Injection (Introduction to CAB/SCSF Part 4)


In part 3 of this series of articles I discussed dependency injection in general terms. To understand what the CAB is doing for you it’s important to have an understanding of dependency injection, and I will be talking more about it in part 5.

This short article is something of an aside however, and is not critical for an understanding of the CAB. Here I will discuss two concepts that sound similar: inversion of control (‘IoC’) and dependency inversion. I will also discuss how both relate to dependency injection.

‘Inversion of Control’

‘Inversion of Control’ is currently something that everyone agrees is a good thing, even though no-one seems to be able to agree exactly what it is. For example, on Wikipedia there’s no definition of Inversion of Control, only an admission that we can’t define it.

Inversion of control is closely related to dependency injection, as I will describe below, and is often used synonymously with it. However, it has a wider meaning, and is arguably not strictly accurate when applied to dependency injection as an abstract concept. Martin Fowler discusses inversion of control at length, but in his article on dependency injection decides to avoid the term.

Inversion of Control in Relation to Frameworks

The conventional definition of inversion of control relates to frameworks and code re-use. Normally to re-use someone else’s code you would call into a library. You do this all the time in the .NET framework. For example, if you call Math.Tan() you are using someone else’s code, but you make the call and you have control.

However, there are times using .NET when the framework calls you back. An example is when you write a custom array sort algorithm using the IComparable or IComparer interfaces. Another is when you implement a custom enumerator by implementing IEnumerable on a collection class. In these cases the usual direction of control is inverted: something else is calling your code, rather than you calling something else.


If we implement IComparable on a class we have to write a method called CompareTo(). This defines when one object of the class’ type is bigger than another. Then when we call Array.Sort on an array of objects of this class the framework itself calls our routine to sort the objects according to our definition.

I’ve written a simple example to illustrate this.

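A minimal version of such an example might look like the following (the Employee class and its data are my invention, standing in for the downloadable sample):

```csharp
using System;

// A simple class that implements IComparable so that Array.Sort
// can call back into our comparison logic.
class Employee : IComparable
{
    public string Name { get; set; }
    public int Age { get; set; }

    // The framework calls this method; we only supply the implementation.
    public int CompareTo(object obj)
    {
        Employee other = (Employee)obj;
        return this.Age.CompareTo(other.Age);
    }
}

class Program
{
    static void Main()
    {
        Employee[] staff =
        {
            new Employee { Name = "Carol", Age = 52 },
            new Employee { Name = "Alan",  Age = 33 },
            new Employee { Name = "Beth",  Age = 41 }
        };

        // We call Array.Sort, but the framework calls our CompareTo:
        // control is inverted.
        Array.Sort(staff);

        foreach (Employee e in staff)
            Console.WriteLine("{0} ({1})", e.Name, e.Age);
        // Prints Alan (33), Beth (41), Carol (52)
    }
}
```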
Here the .NET framework is calling my code, and as a result I have to write it with a specific method signature – int CompareTo(object obj). I don’t have direct control over when this call is made. We can think of this as an ‘inversion’ of ‘control’ from the Math.Tan example.

For obvious reasons, the inversion of control concepts described above are often called the ‘Hollywood Principle’: ‘don’t call us, we’ll call you’.


Inversion of control is discussed in relation to frameworks in ‘Design Patterns: Elements of Reusable Object-Oriented Software’ by Gamma, Helm, Johnson and Vlissides (also known as the ‘Gang of Four’ book). They summarize quite nicely:

“Frameworks emphasize ‘design reuse’… Reuse on this level leads to an inversion of control between the application and the software on which it’s based. When you use a toolkit (or a conventional subroutine library for that matter), you write the main body of the application and call the code you want to reuse. When you use a framework, you reuse the main body and write the code it calls. You’ll have to write operations with particular names and calling conventions, but that reduces the design decisions you have to make.”

Inversion of Control and Dependency Injection

So how does inversion of control relate to dependency injection? At first glance the concepts above and my examples in the previous article have little in common. Yet the two terms are often used synonymously. Indeed I have some course notes from a major training company that actually say ‘IoC and dependency injection are terms that mean the same thing’.

The answer is that dependency injection is usually done via a framework of some kind. I will discuss this more in part 5, but typically you define your classes and then tell the framework to ‘inject’ them into other classes in some way. The framework is then calling back your code to do the injection, and we have inversion of control as described above.
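As a sketch of the idea, here is a toy container (not the CAB’s real ObjectBuilder machinery; the class names echo my example from part 3). We tell the container about our classes, and the container, not our code, decides what to instantiate and when:

```csharp
using System;
using System.Collections.Generic;

// The interface both sides depend on.
interface IDependentClass
{
    string DoSomething();
}

class DependentClassA : IDependentClass
{
    public string DoSomething() { return "A did the work"; }
}

// The 'framework': a toy container that maps interfaces to
// implementations and hands back instances on request.
class TinyContainer
{
    private readonly Dictionary<Type, Type> map = new Dictionary<Type, Type>();

    public void Register<TInterface, TImplementation>()
        where TImplementation : TInterface, new()
    {
        map[typeof(TInterface)] = typeof(TImplementation);
    }

    public T Resolve<T>()
    {
        // The container, not the calling code, chooses and creates
        // the concrete class: inversion of control.
        return (T)Activator.CreateInstance(map[typeof(T)]);
    }
}

class Program
{
    static void Main()
    {
        TinyContainer container = new TinyContainer();
        container.Register<IDependentClass, DependentClassA>();

        // Client code asks the container for the dependency.
        IDependentClass dependency = container.Resolve<IDependentClass>();
        Console.WriteLine(dependency.DoSomething());
    }
}
```

A real dependency injection framework does considerably more (lifetime management, injection into existing objects, configuration), but the control flow is the same: our classes get called by the framework.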

Inversion of Control = Dependency Injection?

As I’ve discussed, dependency injection is just one very specific example of how inversion of control can be used. As a result it is probably wrong to treat IoC and dependency injection as terms that mean the same thing. IoC is a wider concept.

However, in spite of it not strictly being accurate, when people talk about ‘IoC Containers’ and ‘IoC Frameworks’ what they usually mean are containers or frameworks that do dependency injection.

Inversion of Control and the CAB

The Composite UI Application Block really is an inversion of control framework in both senses. It allows us to do dependency injection, as I’ll describe in later articles. It is also often calling us rather than us calling it. An example of this is the call to the Load() method of a ModuleInit class at start-up, which we saw in Part 1 of this series of articles. We just have to know that the method will be called when the module starts, and code to its signature.

Dependency Inversion

A related concept that causes further confusion is dependency inversion. Once again, dependency inversion is a wider concept that dependency injection makes use of. The aim of dependency inversion is to prevent high-level classes depending directly on lower-level classes, and thus to avoid tight coupling between them. Instead we make both sets of classes depend on interfaces.

Dependency Inversion in the Example from Part 3

Consider my example of a main class and dependent classes from part 3. If you were writing this example in a ‘traditional’ way you might have the client code (class Program) create the MainClass which in turn would decide which dependent class it needed and instantiate it. The dependencies between classes would look something like this:

Direct Dependency Class Diagram

As we know, direct dependencies between classes are in general a bad thing as they make it harder to change the code in the dependent classes. So in our dependency injection pattern we introduce a specific interface that all our classes depend on in some sense:

Dependency Injection Class Diagram

Now, as discussed previously, the code is less brittle as we can change the dependent classes without worrying too much about breaking MainClass, as long as we don’t change the interface.
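In code the inverted version looks something like this (the names follow the diagrams above; note that MainClass mentions only the interface, never a concrete dependent class):

```csharp
using System;

// After the inversion: nothing refers to the dependent classes
// directly. Both MainClass and the dependent classes 'point at'
// the interface.
interface IDependentClass
{
    string DoSomething();
}

class DependentClassA : IDependentClass
{
    public string DoSomething() { return "Dependent class A called"; }
}

class DependentClassB : IDependentClass
{
    public string DoSomething() { return "Dependent class B called"; }
}

class MainClass
{
    private readonly IDependentClass dependentClass;

    // Before the inversion MainClass would have done
    // 'new DependentClassA()' itself; now the concrete class is
    // chosen outside and handed in.
    public MainClass(IDependentClass dependentClass)
    {
        this.dependentClass = dependentClass;
    }

    public string Run() { return dependentClass.DoSomething(); }
}

class Program
{
    static void Main()
    {
        // Swapping in DependentClassB is a one-line change here,
        // and MainClass itself is untouched.
        Console.WriteLine(new MainClass(new DependentClassA()).Run());
    }
}
```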

Dependency Inversion and Inversion of Control

We have inverted the dependencies for the dependent classes here. Previously they were being referred to directly by MainClass. Now nothing refers to them directly, instead they refer to the interface. The arrows from the dependent classes now point up instead of down.

Note that this is NOT inversion of control. We are inverting dependencies between classes. This is not an example of a framework calling the code rather than the code calling the framework. In both cases here ultimately MainClass calls code that runs in a dependent class.


After the fairly lengthy discussion of dependency injection and inversion of control in the last two articles I will return to the CAB itself in part 5.


Many code design books make a clear distinction between the dependency inversion I have described in this section and full ‘inversion of control’ (the Hollywood principle). For an example see the excellent ‘Head First Design Patterns’ book.

As mentioned, Wikipedia is as confused as everyone else about IoC.

Martin Fowler is of course excellent on the subject.

The Gang of Four book discusses Inversion of Control in a section on frameworks.

Some Thoughts on SOA and Application Design using the CAB/SCSF


Working with Microsoft’s Composite UI Application Block (CAB) has made me think about application design in a SOA (Service Oriented Architecture) environment. This article is a few thoughts on how we might use the CAB to solve some of the problems associated with creating composite user interfaces in such a service-oriented environment.

Current Design

I am responsible for a trading and risk management application at an investment bank. The application deals with credit derivatives. Our application is currently a pretty typical layered monolith, designed as below:

Current System Architecture

As you can see we have:

  • A database management component that handles connection management for our database
  • A data access layer that contains the SQL for access to our database
  • A model layer that contains all the business logic
  • A presentation layer that is intended to be a lightweight set of screens
  • Some vertical components that give us common error handling and utilities etc.
  • A set of ‘business entities’ that are data classes that are used to pass data around between the tiers

We have service interface layers (sets of interfaces) separating the layers. You can’t use the data access or model components without going through these service interface layers.

Limitations of the Current Design: Monolithic and not Service-Oriented

This design isn’t very service-oriented. This is largely because when we wrote it we didn’t have many services available that we could plug in to. Most of our interaction with other systems is via flat files FTP’d to or from us overnight. This isn’t ideal, and we’re going to change it.

The application is also a monolith because each of these components covers the whole gamut of system functionality. So we have written, for example, trade entry screens for our products, curve management logic for our products, and pricing code for our products. This is inefficient because it duplicates what other development teams are doing.

Business Functionality in the Examples

To simplify these examples I’ve assumed the system only has functionality in three categories: static/market data management, trade management, and pricing/risk. In the real world things are a little more complex than that.

For those of you that don’t work in banking:

  • ‘Static data’ in credit derivatives is things like details of companies, details of the bonds they have issued, and other underlying data to the business, such as lists of currencies, industry sectors, credit ratings, countries etc.
  • ‘Market data management’ covers interest rates and credit spreads (‘curves’), correlations, and the associated mathematics
  • ‘Trade management’ deals with details of our trades executed in the market, and handling of booking and associated workflow
  • ‘Pricing/risk’ takes the trades and market data and works out values associated with the trades

Approach to Changing the Design

We are currently considering the possibility of turning the above design into a vertically layered application with each component the responsibility of a separate team of developers. Each team will be responsible for the entire vertical stream. So for example the static/market data team would provide the data access, the business logic AND the GUI components for all static and market data management.

The user interface components will then sit in a composite application smart client alongside the other teams’ user interface components. Functionality that is needed by other teams will be exposed as CAB services on the client via interface components. We will use the basic SCSF design, with interface components for each vertical stream being the only thing that needs to be referenced if you want to access the functionality.

New Design

The end result would look something like this:

Future System Architecture

This can be further clarified with an example. There will be some curve screens in the GUI. These will be the responsibility of the static and market data management team. This team will also be responsible for the entire infrastructure in getting these curves to display (the business and data access logic). Similarly there will be some pricing screens in the same GUI. These will be the responsibility of the pricing and risk team in the same way.

Because of the way composite applications are structured in the composite application block it should be possible, for example, to release curve enhancements separately from pricing enhancements (and still have the user interface work).

Interfaces in the CAB/SCSF

I should make it clear what I mean by ‘interfaces’ here. I do mean C# interfaces as a starting point: that is, lists of method signatures that can be called. But the CAB also expects you to declare events and commands in the interface components. These allow for looser coupling between components. One advantage of this design is that you can couple components as tightly or loosely as you want. You can even tightly couple some interactions and loosely couple others within a component. For example, if you are requesting and then manipulating a set of complex business objects through the interface you’d probably want a full C# interface. If you’re just telling another component to perform a simple action (say price a trade with current market data) you might do that with a command.
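To make the distinction concrete, here is a sketch in plain C# (in the real CAB you would use the EventBroker or command infrastructure rather than a bare .NET event, and the service and member names here are invented):

```csharp
using System;

// Tightly coupled style: a full C# interface for rich,
// request/response interactions with complex business objects.
interface IMarketDataService
{
    decimal GetCreditSpread(string curveName);
}

// Loosely coupled style: a 'command' the caller can fire without
// holding a reference to whichever component handles it.
// A plain .NET event stands in for a CAB command/event topic here.
class PricingCommands
{
    public event EventHandler PriceTradeRequested;

    public void FirePriceTrade()
    {
        EventHandler handler = PriceTradeRequested;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}
```

The caller of FirePriceTrade never knows which component, if any, responds; the caller of GetCreditSpread gets a typed, compile-time-checked conversation.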

Interaction Between Vertical Layers: Integrated Front-End

In our example above, the pricing component will need to use the curves when it actually does some pricing. It will clearly do this through the interfaces defined by the static and market data team, usually server-side. However, one advantage of this design could be to have the pricing component actually get the data the user is looking at on the client and submit it with a pricing request.

The example I’m going to use here is what happens if a user has changed a curve locally to some extreme values and wants to reprice their book using it. However, the user doesn’t want to actually save that curve so other people can see it.

Clearly what we ideally want is for our user to be able to change the data and then just hit a ‘reprice’ button. A composite smart client application of the kind described here gives us the chance to do that, since we can handle the interaction client-side. The pricing component could have a ‘price using local data’ button that would request the data from the client-side market data component via the service interface and then submit it with its pricing request.

Non-Integrated Front Ends

Most n-tier SOA designs really struggle with these ‘what-if’ scenarios, and consequently with the problem of giving our users an integrated experience. More often than not you simply can’t change a curve and reprice your book. Sometimes you can, but you have to go to another application, change the curve and save it in some temporary state, and then go to the risk application and tell it to use that curve. Sometimes you can do it, but only via a spreadsheet download that you’ve had to write yourself.

The reason for this is that it’s traditionally been quite difficult to build integrated front-ends to a series of services, and often we haven’t tried too hard. It’s far easier for the curve team to write their own standalone curve GUI and let the pricing team worry about the pricing problems.

Client-side SOA??

We could almost go as far as having ALL interaction for the smart client components happening on the client via these interfaces. Suppose, for example, the trade management component needs to use a list of currencies. It could make a request to the static data service via the CAB interface on the client. The static data service can then go to its store via a web service or EMS or any other means. It can also cache the results for future calls. The trade management component no longer has to worry about connectivity to the static data component’s web service or servers; it’s all handled for it.

This would mean our SOA would be about client-side interfaces and services, rather than web service interfaces (or some server-side equivalent). The programming model in the client would be hugely simplified. To get and use static or market data all the pricing team have to do is reference the appropriate interface and call it using standard C#. There’s no need for web service plumbing and handling the return types.

Interactivity on the client would be improved too: for example the static data service could maintain a client-side cache of static data, responding to requests from the other components as necessary. We’d reduce server trips for our client code substantially. Our interfaces can in theory be ‘chattier’ without too much penalty.
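A sketch of such a caching static data service might look like this (the names are mine, and the fetch delegate stands in for whatever web service or EMS call actually retrieves the data):

```csharp
using System;
using System.Collections.Generic;

// A client-side static data service that caches results, so
// repeated requests from other components avoid server trips.
class StaticDataService
{
    private readonly Dictionary<string, IList<string>> cache =
        new Dictionary<string, IList<string>>();
    private readonly Func<string, IList<string>> fetchFromServer;

    public int ServerTrips { get; private set; }

    public StaticDataService(Func<string, IList<string>> fetchFromServer)
    {
        this.fetchFromServer = fetchFromServer;
    }

    public IList<string> GetList(string name)
    {
        IList<string> result;
        if (!cache.TryGetValue(name, out result))
        {
            // Only the first request for a given list hits the server.
            ServerTrips++;
            result = fetchFromServer(name);
            cache[name] = result;
        }
        return result;
    }
}
```

The trade management component can then ask for, say, the currency list as often as it likes; only the first call costs a round trip, and the caller never deals with the connectivity plumbing.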


There are a couple of issues I can think of with this:

  1. Clearly the more interaction we have between our client-side components the more tightly coupled they become. Versioning of the interfaces will become a problem, though possibly less of a problem than web service versioning, since these interfaces are only being used directly by a limited number of clients.
  2. We’re writing an interface that can only be consumed in one specific way (in the smart client framework), and may well want web service interfaces on our components anyway to allow server-side calls, calls from other platforms etc. So we may be making support of our components more difficult rather than easier.

However, in many ways where I work we already have this model in place: many of our analytics components run client-side but will also handle connecting to back-end services and getting curve and static data for you.


At the moment these are just ideas that we are considering. Obviously smart clients are deeply unfashionable in a world where Ajax is the current GUI silver bullet. But we’ve repeatedly seen teams struggle to create even half-decent trading applications in browsers, and we don’t see that changing in the immediate future. Possibly something like this may be a more practical way to proceed.