Rich Newman

February 7, 2012

Delegate Syntax in C# for Beginners

Filed under: .net, beginners guide, c#, code syntax, delegate — richnewman @ 3:48 am


I have been programming with C# since it came out but I still find the delegate syntax confusing.  This is at least partially because Microsoft have changed the recommended syntax regularly over the years.  This article is a quick recap of the various syntaxes.  It also looks at some of the issues with using them in practice.  It’s worth knowing about all the various syntaxes as you will almost certainly see all of them used.

This article is just a recap: it assumes that you know what a delegate is and why you’d want to use one.

.Net and Visual Studio Versions

The first thing to note is that you can use any of these syntaxes as long as you are using Visual Studio 2008 or later and targeting .Net 2.0 or later.

Named methods were available in .Net 1.0, anonymous methods were introduced in .Net 2.0 (C# 2.0), and lambda expressions were introduced in .Net 3.5 (C# 3.0).  However, like much of .Net 3.5, which runs on the .Net 2.0 runtime, lambda expressions will compile for .Net 2.0 targets assuming you have the appropriate version of Visual Studio.

Note also that lambda expressions can do (almost) everything anonymous methods can do, and effectively supersede them as the preferred way of writing inline delegate code.


A listing of the code for this article is available.  The complete working program is also available.

The Delegate

For all of these examples we need a delegate definition.  We’ll use the one below initially.

        private delegate void TestDel(string s);

Named Methods

Named methods are perhaps the easiest delegate syntax to understand intuitively.  A delegate is a typesafe method pointer.  So we define a method:

        private void Test(string s)
        {
            Console.WriteLine(s);
        }

Now we create an instance of our method pointer (the delegate above) and point it at our method.  Then we can call our method by invoking the delegate.  The code below prints out ‘Hello World 1’.  This is easy enough, but all a little cumbersome.

            TestDel td = new TestDel(Test);
            td("Hello World 1");

There’s one slight simplification we can use.  Instead of having to explicitly instantiate our delegate with the new keyword we can simply point the delegate directly at the method, as shown below.  This syntax does exactly the same thing as the syntax above, only (maybe) it’s slightly clearer.

            TestDel td2 = Test;
            td2("Hello World 2");

There is an MSDN page on named methods.

Anonymous Methods

The anonymous method syntax was introduced to avoid the need to create a separate method.  We just create the method in the same place we create the delegate.  We use the ‘delegate’ keyword as below.

            TestDel td3 = 
                delegate(string s)
                {
                    Console.WriteLine(s);
                };
            td3("Hello World 3");

Now when we invoke td3 (in the last line) the code between the curly braces executes.

One advantage of this syntax is that we can capture a local variable in the calling method without explicitly passing it into our new method.  We can form a closure.  Since in this example we don’t need to pass our string in as a parameter we use a different delegate:

        private delegate void TestDelNoParams();

We can use this as below.  Note that the message variable is not explicitly passed into our new method, but can nevertheless be used.

            string message = "Hello World 4";
            TestDelNoParams td4 = 
                delegate()
                {
                    Console.WriteLine(message);
                };
            td4();

There is an MSDN page on anonymous methods.

Lambda Expressions

Lambda expressions were primarily introduced to support Linq, but they can be used with delegates in a very similar way to anonymous methods.

There are two basic sorts of lambda expressions.  The first type is an expression lambda.  Its method body is a single expression.  The syntax is below.

            TestDel td5 = s => Console.WriteLine(s);
            td5("Hello World 5");

The second type is a statement lambda: this can have multiple statements in its method as below.

            string message2 = "Hello World 8";
            TestDel td6 =
                s =>
                {
                    Console.WriteLine(s);
                    Console.WriteLine("Hello World 7");
                    Console.WriteLine(message2);
                };
            td6("Hello World 6");

Note that this example also shows a local variable being captured (a closure being created).  We can also capture variables with expression lambdas.

There is an MSDN page on lambda expressions.

Return Values

Nearly all of the examples above can be extended in a simple way to return a value.  (An expression lambda needs no change at all: the value of its single expression is its return value.)  Doing this is usually an obvious change: we change our delegate signature so that the method it points to returns a value, and then we simply change the method definition to return a value as usual.  For example, the statement lambda example above becomes the code below, using a delegate TestDelReturn that takes a string and returns a string.  The invocation of tdr6 now returns “Hello ” + message2, which we write to the console after the invocation returns:

        private delegate string TestDelReturn(string s);

            string message2 = "World 8";
            TestDelReturn tdr6 =
                s =>
                {
                    Console.WriteLine("Hello World 7");
                    return "Hello " + message2;
                };
            Console.WriteLine(tdr6("Hello World 6"));

The full list of all the examples above modified to return a value can be seen in the code listing in the method ExamplesWithReturnValues.


Events

All of these syntaxes can be used to set up a method to be called when an event fires.  To add a delegate instance to an event we use the usual ‘+=’ syntax.  Suppose we define an event of type TestDel:

        private event TestDel TestDelEventHandler;

We can add a delegate instance to this event using any of the syntaxes in an obvious way.  For example, to use a statement lambda the syntax is below.  This looks a little odd, but certainly makes it easier to set up and understand event handling code.

            TestDelEventHandler += s => { Console.WriteLine(s); };
            TestDelEventHandler("Hello World 24");

Examples of setting up events using any of the syntaxes above can be found in the code listing.

Passing Delegates into Methods as Parameters: Basic Case

Similarly all of the syntaxes can be used to pass a delegate into a method, which again gives some odd-looking syntax.  Suppose we have a method as below that takes a delegate as a parameter.

        private void CallTestDel(TestDel testDel)
        {
            testDel("Hello World 30");
        }

Then all of the syntaxes below are valid:

            CallTestDel(new TestDel(Test));  // Named method
            CallTestDel(Test);               // Simplified named method
            CallTestDel(delegate(string s) { Console.WriteLine(s); });  // Anonymous method
            CallTestDel(s => Console.WriteLine(s));  // Expression lambda
            CallTestDel(s => { Console.WriteLine(s); Console.WriteLine("Hello World 32"); });  // Statement lambda

Passing Delegates into Methods as Parameters: When You Actually Need a Type of ‘Delegate’

Now suppose we have a method as below that expects a parameter of type Delegate.

        private void CallDelegate(Delegate del)
        {
            del.DynamicInvoke(new object[] { "Hello World 31" });
        }

The Delegate class is the base class for all delegates, so we can pass any delegate into CallDelegate.  However, because the base Delegate class doesn’t know the method signature of the delegate we can’t call Invoke with the correct parameters on the Delegate instance.  Instead we call DynamicInvoke with an object[] array of parameters as shown.

Note that there are some methods that take Delegate as a parameter in the framework (e.g. BeginInvoke on a WPF Dispatcher object).

There’s a slightly unobvious change to the ‘Basic Case’ syntax above if we want to call this method using the anonymous method or lambda expression syntax.  The code below for calling CallDelegate with an expression lambda does NOT work.

            CallDelegate(s => Console.WriteLine(s));  // Expression lambda

The reason is that the compiler needs to create a delegate of an appropriate type, cast it to the base Delegate type, and pass it into the method.  However, it has no idea what type of delegate to create.

To fix this we need to tell the compiler what type of delegate to create (TestDel in this example).  We can do this with the usual casting syntax (and a few more parentheses) as shown below.

            CallDelegate((TestDel)(s => Console.WriteLine(s)));  // Expression lambda

This looks a little strange as we don’t normally need a cast when assigning a derived type to a base type, and in any case we’re apparently casting to a different type to the type the method call needs.  However, this syntax is simply to tell the compiler what type of delegate to create in the first place: the cast to the base type is still implicit.

We need to do this for any of the syntaxes apart from the very basic named method syntax (where we’re explicitly creating the correct delegate):

            CallDelegate(new TestDel(Test));  // Named method
            CallDelegate((TestDel)Test);      // Simplified named method
            CallDelegate((TestDel)delegate(string s) { Console.WriteLine(s); });  // Anonymous method
            CallDelegate((TestDel)(s => Console.WriteLine(s)));  // Expression lambda
            CallDelegate((TestDel)(s => { Console.WriteLine(s); Console.WriteLine("Hello World 32"); }));  // Statement lambda


Action and Func

There is one further simplification that we can use in the examples in this article.  Instead of defining our own delegates (TestDel etc.) we can use the more generic Action and Func delegates provided in the framework.  So, for example, everywhere we use TestDel, which takes a string and returns void, we could use Action<string> instead, since it has the same signature.

February 5, 2012

Why Some Password Security is a Waste of Time

Filed under: password, security — richnewman @ 11:53 pm


This is very off-topic, but a recent MSDN article and a paper it referenced got me thinking about password security in our organization.  If my maths is right, the costs of the way we do this are huge.

Changing Passwords Monthly

I work for a very large bank (it has about 300,000 employees, or did have before the banking crisis).  Until recently this bank forced us to change our passwords monthly.  We have two passwords: Windows and ‘single sign on’.  These are the internal passwords we use to do our jobs.  The Windows one is used to log on to Windows obviously.  The single sign on password is to access almost any other internal resource: the timesheet system, the project management system, the issue tracking system, the performance management system etc. etc.

So I had to change both these passwords every month.  Let’s say that on average I can invent a new password, commit it to memory, and enter the old one and the new one twice in 30 seconds, allowing for getting it wrong occasionally.  If all 300,000 employees spend that long changing their two passwords monthly I reckon we spent roughly 35 working years per annum on this (2 x 0.5 x 12 x 300000 / (60 x 7 x 240)).
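That figure can be sanity-checked with a short script (same numbers as in the text):

```python
# Organization-wide cost of changing two passwords monthly.
passwords = 2
minutes_per_change = 0.5       # 30 seconds per password change
changes_per_year = 12          # monthly
employees = 300_000

total_minutes = passwords * minutes_per_change * changes_per_year * employees

# A working year here is 60 minutes x 7 hours x 240 days.
minutes_per_working_year = 60 * 7 * 240

working_years = total_minutes / minutes_per_working_year
print(round(working_years, 1))  # roughly 35.7 working years per annum
```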

Internet Companies Don’t Make You Change Your Password

Now, I have a number of online bank accounts and none of them expect me to change my password regularly.  Nor do any of the shopping sites that have my credit card details.  The reason for this is that if someone gets hold of my password it really doesn’t matter if I’m forced to change it a week later.  The thief is going to use it straightaway if they are going to use it at all.  The security controls need to prevent them getting hold of the password in the first place.

So why do it for passwords in a big organization?  There are some reasons I can think of, but are they worth the cost?

To be fair, the bank has realized this and reduced the frequency with which passwords have to be changed to 90 days.  This obviously cuts the cost by a factor of three so we now only spend about 12 working years per annum on this.  However, my personal opinion is that this is a control that could be removed completely.

Passwords for Every Application with Timeouts

Another bugbear is that our ‘single sign on’ is far from ‘single’.  Every application we use forces us to enter it separately, and they are all set to time out after a short period of inactivity, not exceeding 30 minutes.  This is mandatory as part of our security policy.  Because this password is used for all our internal systems we all log into them frequently.  I estimate I enter this password about 10 times a day, and I expect that isn’t far from the average for the organization as a whole.

The estimated cost if everyone is doing this, assuming it takes me 15 seconds to enter my password (including periodically mistyping the mandatory capital letter) is about 1800 working years per annum (10 x 15 x 240 x 300000 / (60 x 60 x 7 x 240)).  Ouch.

So our organization spends 1800 working years per annum just logging in to systems.  This is a global organization, so it’s hard to know what rate to use to work out the cost of that.  However, even at the federal minimum wage of $7.25 per hour that’s $22 million.  I suspect an accurate fully-loaded cost would be several times that.
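Again the arithmetic is easy to check (figures as stated above; the dollar cost uses the federal minimum wage of $7.25 per hour):

```python
# Organization-wide time spent re-entering the single sign on password.
logins_per_day = 10
seconds_per_login = 15
working_days = 240
employees = 300_000

total_seconds = logins_per_day * seconds_per_login * working_days * employees

# A working year here is 60 x 60 seconds x 7 hours x 240 days.
seconds_per_working_year = 60 * 60 * 7 * 240

working_years = total_seconds / seconds_per_working_year
print(round(working_years))  # roughly 1786, i.e. about 1800 working years

# Cost at the US federal minimum wage of $7.25 per hour (7-hour days).
cost_dollars = working_years * 7 * working_days * 7.25
print(round(cost_dollars / 1_000_000, 1))  # roughly $22 million
```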

There is some momentum for changing this, at least in our group, since the benefits of kicking everyone out of an internal system after a few minutes of inactivity are even less clear than for password changing.


Conclusions

An organization with 300,000 employees changing two passwords monthly spends about 35 working years per annum on this activity.

The same organization with a security policy that compels every internal application to use a password-based login, and then logging everyone out after a short period of inactivity, spends about 1800 working years per annum on this activity.

These are large numbers, and it’s not entirely clear that the benefit in terms of more secure systems justifies the cost.

I’ll write about C# and derivatives again soon.

December 20, 2011

Blurry Text with Small Fonts in WPF

Filed under: .net, fonts, wpf — richnewman @ 2:58 am


The difficulties with text rendering in WPF have been well documented elsewhere.  However, every time I get a blurry button I can’t remember how to fix it.  This brief post shows the usual solutions.


The Problem

With small font sizes the default WPF text rendering can lead to text looking very blurry.  This is particularly a problem on controls (e.g. text on buttons), where we often use small fonts.

In .Net 4 Microsoft finally gave us a solution to this problem, which is to set TextOptions.TextFormattingMode = “Display” (instead of the default “Ideal”).  This snaps everything to the pixel grid.  However, this mode doesn’t look good at large text sizes, so there’s no single setting that works well at every size.
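As a sketch (the control and font size here are just examples), the setting is an attached property applied in XAML:

```xml
<!-- Snap small text to the pixel grid (WPF 4 and later) -->
<TextBlock Text="OK"
           FontSize="11"
           TextOptions.TextFormattingMode="Display" />
```

The property inherits down the visual tree, so it can also be set once on a parent element such as a Window to fix all the small text on a dialog at once.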

Other TextOptions

Other TextOptions are

1.  TextHintingMode (Animated/Auto/Fixed)

This just tells the renderer to use smoother but less clear rendering for animated text.  It won’t fix the blurry text problem for static text (which should have Auto or Fixed set).

2.  TextRenderingMode (Aliased/Auto/ClearType/GrayScale).

Tells the renderer how to draw.  This DOES affect blurriness.  In particular using Aliased rather than the default Auto can be good at small sizes.

Effect of These Options

Below, with font size 12, the first line is the default (Ideal/Auto).  This is quite blurry.  The second line is Display/Auto (less blurry) and the third line is Display/Aliased (not at all blurry, but a bit jagged).  The last two combinations are the ones usually used to fix the problem.

This is the same comparison at font size 24, which highlights the trade-off, since at that size Ideal/Auto (the first line) probably is ideal.

September 13, 2011

Review of a Composite Application Block Project

Filed under: .net, CAB, Composite Application Block, dotnet, Prism, Spring.Net — richnewman @ 4:59 am


In late 2007 I wrote a series of articles on Microsoft’s Composite Application Block (CAB).  At that time I was running a team that was developing a user interface framework that used the CAB.

We’re now four years on and that framework is widely used throughout our department.  There are currently modules from eleven different development teams in production.  There are modules that do trade booking, trade management, risk management, including real-time risk management, curve marking, other market data management, and so on.  All of those were written by different teams, yet it appears to the user that this is one application.

This article will look back at the goals, design decisions, and implementation history of the project.  It will look at what we did right, what we did wrong, and some of the limitations of the CAB itself (which apply equally to its successor, Prism).

The framework is a success, but only a qualified one.  Ironically, as we shall see, that is largely because it has been so successful.  To put that less cryptically: many of the problems with the framework have only arisen because it’s been so widely adopted.

Hopefully this article will be of interest.  It isn’t the kind of thing I usually write about and will of course be a personal view:  I’m not going to pretend I’m totally unbiased.

Design Goals

Original Overall Goals

The project had two very simple goals originally:

  1. A single client application that a user (in this case, a trader) would use for everything they need to do.
  2. Multiple development teams able to easily contribute to this application, working independently of each other.

I suspect these are the aims of most CAB or Prism projects.

Do You Actually Need a Single Client Application?

An obvious question arising from these goals is why you would need an application of this kind.

Historically there have tended to be two approaches to building big and complex trading applications:

  1. The IT department will create one huge monolithic application.  One large development team will build it all.
  2. The IT department breaks the problem up and assigns smaller development teams to develop separate applications to do each part.  This is a much more common approach than approach 1.

Both of these approaches work, and both mean you don’t need a client application of the kind we are discussing.  However, neither of these approaches works very well:

  • Monolithic applications quickly become difficult to maintain and difficult to release without major regression testing.
  • Equally users don’t like having to log into many different applications.  This is particularly true if those applications are built by the same department but all behave in different ways.  It can also be difficult to make separate applications communicate with each other, or share data, in a sensible way.

So there definitely is a case for trying to create something that fulfils our original design goals above and avoids these problems.  Having said that, it’s clearly more important to actually deliver the underlying functionality: delivering it in several separate applications matters less than failing to deliver it altogether.

More Detailed Goals

For our project we also had some more detailed goals:

  • Ease of use for the developer.  I have personally been compelled to use some very unpleasant user interface frameworks and was keen that this should not be another one of those.
  • A standardized look and feel.  The user should feel this was one application, not several applications glued together in one window.
  • Standard re-usable components, in particular a standard grid and standard user controls.  The user controls should include such things as typeahead counterparty lookups, book lookups, and security lookups based on the organization’s standard repositories for this data.  That is, they should include business functionality.
  • Simple security (authentication and authorization) based on corporate standards.
  • Simple configuration, including saving user settings and layouts.
  • Simple deployment.  This should include individual development teams being able to deploy independently of other teams.

As I’ll discuss, it was some of the things that we left off that list that came back to haunt us later on.

Goals re Serverside Communication

A further goal was use of our strategic architecture serverside, in particular for trade management.  For example, we wanted components that would construct and send messages to our servers in a standard way.  I won’t discuss the success or failure of this goal in detail here as it’s a long and chequered story, and not strictly relevant to the CAB and the user interface framework.

Technical Design

Technical Design: Technologies

The technologies we used to build this application were:

  • Microsoft C# and Windows Forms
  • Microsoft’s Patterns and Practices Group’s Composite Application Block (the CAB)
  • DevExpress’ component suite
  • Tibco EMS and Gemstone’s Gemfire for serverside communication and caching

As I’ve already discussed, this document is going to focus purely on the clientside development.

In 2007 these were logical choices for a project of this kind.  I’ll discuss some of the more detailed design decisions in the sections below.

Things We Did (Fairly) Well

As I said this is a personal view: I’m not sure all our developers would agree that all of this was done well.

Ease of Use

Designing for ease of use is, of course, quite difficult.  We have done a number of things to make the project easy to use, some of which I’ll expand on below.  These include:

  • Developers write vanilla user controls.  There’s no need to implement special interfaces, inherit from base classes or use any complex design pattern.
  • Almost all core functionality is accessed through simple services that the developer just gets hold of and calls.  So for example to show your user control you get an instance of the menu service and call a show method.  We used singleton service locators so the services could be accessed without resorting to CAB dependency injection.
  • Good documentation freely available on a wiki
  • A standard onboarding process for new teams, including setting up a template module.  This module has a ‘hello world’ screen that shows the use of the menus and other basic functionality.

Developers Not Forced to Learn the Composite Application Block (CAB)

As mentioned above, one of the key goals of the project was simplicity of use.  The CAB is far from simple to use: I wrote a 25 part introductory blog article on it and still hadn’t covered it all.

As a result we took the decision early on that developers would not be compelled to use the CAB actually within their modules.  We were keen that developers would not have to learn the intricacies of the CAB, and in particular would not have to use the CAB’s rather clunky dependency injection in their code.

However, obviously we were using the CAB in our core framework.  This made it difficult to isolate our developers from the CAB completely:

  • As mentioned above we exposed functionality to the developers through CAB services.  However we gave them a simple service locator so they didn’t have to know anything about the CAB to use these services.
  • We also used some CAB events that developers would need to sink.  However since this involves decorating a public method with an attribute we didn’t think this was too difficult.

As already mentioned, to facilitate this we wrote a ‘template’ module, and documentation on how to use it.  This was a very simple dummy module that showed how to do all the basics.  In particular it showed what code to write at startup (a couple of standard methods), how to get hold of a service, and how to set up a menu item and associated event.


Versioning and Assembly Loading

We realized after a few iterations of the system that we needed a reasonably sophisticated approach to versioning and loading of components.  As a result we wrote an assembly loader.  This:

  • Allows each module to keep its own assemblies in its own folder
  • Allows different modules to use different versions of the same assembly
  • Also allows different modules to explicitly share the same version of an assembly

Our default behaviour is that when loading an assembly that’s not in the root folder, the system checks all module folders for an assembly of that name and loads the latest version found.  This means teams can release interface assemblies without worrying about old versions in other folders.

Versioning of Core Components

For core components clearly there’s some code that has to be used by everyone (e.g. the shell form itself, and menus).  This has to be backwards compatible at each release because we don’t want everyone to have to release simultaneously.  We achieve this through the standard CAB pattern of interface assemblies: module teams only access core code through interfaces that can be extended, but not changed.

However, as mentioned above, the core team also writes control assemblies that aren’t backwards compatible: teams include them in their own module, and can upgrade whenever they want without affecting anyone else.

User Interface Design

For the user interface design, after a couple of iterations we settled on simple docking in the style of Visual Studio.  For this we used Weifen Luo’s excellent docking manager, and wrote a wrapper for it that turned it into a CAB workspace.  For menuing we used the ribbon bars in the DevExpress suite.

The use of docking again keeps things simple for our developers.  We have a menu service with a method to be called that just displays a vanilla user control in a docked (or floating) window.


Deployment

In large organizations it’s not uncommon for the standard client deployment mechanisms to involve complex processes and technology.  Our organization has this problem.  Early on in this project it was mandated that we would use the standard deployment mechanisms.

We tried hard to wrap our corporate process in a way that made deployment as simple as possible.  To some extent we have succeeded, although we are (inevitably) very far from a simple process.


Configuration

For configuration (eventually) we used another team’s code that wrapped our centralized configuration system to allow our developers to store configuration data.  This gives us hierarchies of data in a centralized database.  It means you can easily change a setting for all users, groups of users, or an individual user, and can do this without the need for a code release.

Module Interaction

Clientside component interaction is achieved by using the standard CAB mechanisms.  If one team wants to call another team’s code they simply have to get hold of a service in the same way as they do for the core code, and make a method call on an interface.  This works well, and is one advantage of using the CAB.  Of course the service interface has to be versioned and backwards compatible, but this isn’t difficult.


Security

For security we again wrapped our organization’s standard authentication and authorization systems so they could easily be used in our CAB application.  We extended the standard .Net Principal and Identity objects to allow authorization information to be directly accessed, and also allowed this information to be accessed via a security service.

One thing that we didn’t do so well here was the control of authorization permissions.  These have proliferated, and different teams have handled different aspects of this in different ways.  This was in spite of us setting up what we thought was a simple standard way of dealing with the issue.  The result of this is that it’s hard to understand the permissioning just by looking at our permissioning system.

Things We Didn’t Do So Well

As mentioned above, the things that didn’t go so well were largely the things we didn’t focus on in our original list of goals.

Most of these issues are about resource usage on the client.  This list is far from comprehensive: we do have other problems with what we’ve done, of course, but the issues highlighted here are the ones causing the most problems at the time of writing.

The problems included:


Threading

The Problem

We decided early on to allow each team to do threading in the way they thought was appropriate, and didn’t provide much guidance on threading.  This was a mistake, for a couple of reasons.

Threading and Exception Handling

The first problem we had with threading was the simple one of background threads throwing exceptions with no exception handler in place.  As I’m sure you know, this is pretty much guaranteed to crash the entire application messily (which in this case means bringing down 11 teams’ code).  It’s easy to avoid if you follow some simple guidelines whenever you spawn a background thread: we have an exception handler that can be hooked up with one line of code and that deals with appropriate logging and thread marshalling.  We put how to do this, and dire warnings about the consequences of not doing so, in our documentation, but to no avail.  In the end we had highly-paid core developers going through other teams’ code looking for anywhere they spawned a thread, and then complaining to their managers if they hadn’t put handlers in.

Complex Threading Models

Several of our teams were used to writing serverside code with complex threading models.  They replicated these clientside, even though most of our traders don’t have anything better than a dual core machine, so any complex threading model in a workstation client is likely to be counterproductive.

Some of these models tend to throw occasional threading exceptions that are unreproducible and close to undebuggable.

What We Should Have Done

In retrospect we should have:

  • Provided some clear guidance for the use of threading in the client.
  • Written some simple threading wrappers and insisted the teams use them, horrible though that is.
  • Insisted that ANY use of threading be checked by the core team (i.e. a developer that knew about user interface threading).  The wrappers would have made it easy for us to check where threads were being spawned incorrectly (and without handlers).

Start Up

The Basic Problem

We have a problem with the startup of the system as well: it’s very slow.

Our standard startup code (in our template module) is very close to the standard SCSF code.  This allows teams to set up services and menu items when the entire application starts and the module is loaded.

This means the module teams have a hook that lets them run code at startup.  The intention here is that you instantiate a class or two, and it should take almost no time.  We didn’t think that teams would start using it to load their data, or start heartbeats, or worse, to fire off a bunch of background threads to load their data.  However, we have all of this in the system.

Of course, the place this code should actually run is when a user first clicks a menu item to load the team’s screen.  For heartbeats, it’s a little hard to start and stop them as a screen opens and closes: it’s much easier to just start your heartbeats when the application starts.  For data loading, doing it at the point a user requests a screen makes any slowness very obvious.

However, the impact of this happening over 11 development teams’ code is that the system is incredibly slow to start, and very fragile at startup.  It will often spend a couple of minutes showing the splash screen and then keel over with an incomprehensible error message (or none).  As a result most traders keep the system open all the time (including overnight), and are very reluctant to restart, even if they have a problem that we know a restart will fix.  In any case, all machines are rebooted at the weekend in our organization, so they have to sit through the application startup on a Monday morning regardless.

One further problem is that no individual team has any incentive to improve their startup speed: it’s just a big pool of slowness and you can’t tell if module X is much slower than module Y as a user.  If any one team moves to proper service creation at startup it won’t have a huge overall effect.  We have 11 teams and probably no one team contributes more than a couple of minutes to the overall startup.  It’s the cumulative effect that’s the problem.

What We Should Have Done

This is one area where we should just have policed what was going on better, and been very firm about what is and is not allowed to be run at startup.  At one stage I proposed fixing the problem by banning ANY module team’s code from running at startup, and I think if I were to build an application of this kind again then that’s what I’d do.  However, clearly a module has to be able to set up its menu items at startup (or the user won’t be able to run anything).  So we’d have to develop a way of doing this via config for this to work, which would be ugly.

One other thing that would really help would be the ability to restart an individual module without restarting the entire system.

Memory Usage

The Problem

We effectively have 11 applications running in the same process.  So with memory usage we have similar problems to the startup problems: every team uses as much memory as they think they need, but when you add it all up we can end up with instances of the system using well over 1GB of memory.  On a heavily-loaded trader’s machine this is a disaster: we’ve even had to get another machine for some traders just to run our application.

To be honest, this would be a problem for any complex trading environment.  If we had 11 separate applications doing the same things as ours the problem would probably be worse.

However, as above there’s no incentive for any individual team to address the problem: it’s just a big pool that everyone uses and no-one can see that module X is using 600MB.

What We Should Have Done

Again here better policing would have helped: we should have carefully checked every module’s memory requirements and told teams caching large amounts of data to stop doing so.  However, in the end this is a problem that is very hard to avoid: I don’t think many teams are caching huge amounts of data, it’s just that there’s a lot of functionality in the client.

One thing that will help here is the move to 64-bit, which is finally happening in our organization.  All our traders have a ceiling of 4GB of memory at present (of which, as you know, over 1GB is used by Windows), so a 1GB application is a real problem.

Use of Other Dependency Injection Frameworks (Spring.Net)

The Problem

One unexpected effect of the decision not to compel teams to use the CAB was that a number of teams decided to use Spring.Net for dependency injection within their modules, rather than using the CAB dependency injection.  I have some sympathy with this decision, and we didn’t stop them.  However, Spring.Net isn’t well-designed for use in a framework of this kind and it did cause a number of problems.

  • The biggest of these is that Spring uses a number of process-wide singletons.  We had difficulties getting them to play nicely with our assembly loading.  This has resulted in everyone currently having to use the same (old) version of Spring.Net, and upgrading being a major exercise.
  • Handling application context across several modules written by different teams proved challenging.
  • If you use XML configuration in Spring.Net (which everyone does) then types in other assemblies are usually referenced using the simple assembly name only.  This invalidated some of our more ambitious assembly loading strategies.
  • Spring.Net’s exception messages on initial configuration are hard enough to decipher at the best of times; with multiple modules configuring themselves at startup they become even more incomprehensible.

We also had some similar problems re singletons and versioning with the clientside components of our caching technology.  Some code isn’t really compatible with single-process composite applications.

What We Should Have Done

Again we should have policed this better: many of the problems described above are solvable, or could at least have been mitigated by laying down some guidelines early on.

What I’d Change If I Did This Again

The ‘what we should have done’ sections above indicate some of the things I’d change if I am ever responsible for building another framework of this kind.  However, there are two more fundamental (and very different) areas that I would change:

Code Reviews

In the ‘what we should have done’ sections above I’ve frequently mentioned that we should have monitored what was happening in the application more carefully.  The reasons we didn’t were partially due to resourcing, but also to some extent philosophical.  Most of our development teams are of high quality, so we didn’t feel we needed to be carefully monitoring them and telling them what to do.

As you can see from the problems we’ve had, this was a mistake.  We should have identified the issues above early, and then reviewed all code going into production to ensure that there weren’t threading, startup, memory or any other issues.

Multiple Processes

The second thing I’d change is technical.  I now think it’s essential in a project of this kind to have some way of running clientside code in separate processes.  As we’ve seen many of the problems we’ve had have arisen because everything is running in the same process:

  • Exceptions can bring the process down, or poorly-written code can hang it
  • It’s hard to identify how much each module is contributing to memory usage or startup time
  • There’s no way of shutting down and unloading a misbehaving module

I think I’d ideally design a framework that had multiple message loops and gave each team its own process in which they could display their own user interface.  This is tricky, but not impossible to do well.

Note that I’d still write the application as a framework.  I’d make sure the separate processes could communicate with each other easily, and that data could be cached and shared between the processes.

As an aside a couple of alternatives to this are being explored in our organization at present.  The first is to simply break up the application into multiple simpler applications.  The problem with this is that it doesn’t really solve the memory usage or startup time problems, and in fact arguably makes them worse.  The second is to write a framework that has multiple processes but keeps the user interface for all development teams in the same process.  This is obviously easier to do technically than my suggestion above.  However for many of our modules it would require quite a bit of refactoring: we need to split out the user interface code cleanly and run it in a separate process to the rest of the module code.

August 8, 2011

A Beginner’s Guide To Credit Default Swaps (Part 4)


This post continues the discussion of changes in the credit default swap (CDS) since 2007.  Part 2 and part 3 of this series of articles discussed changes in the mechanics of CDS trading.  This part will discuss changes around how credit events are handled, and future changes in the market.

Changes in the CDS Market re Credit Events Since 2007

  • Determination committees (DCs) have been set up to work out if a credit event has occurred, and to oversee various aspects of dealing with a credit event for the market.  A ‘determination committee’ is simply a group of CDS traders of various kinds, although overseen by ISDA (the standards body). The parties to one of the new standard contracts agree to be bound by the committee’s decisions.
  • Auctions are now conducted to determine the price to cash-settle credit default swaps when there is a credit event.  For this we need to determine the current price of the bonds in default.  To do this we get a group of dealers to quote prices at which they are prepared to trade the bonds (and may have to), and then calculate the price via an averaging process.  This can get quite complicated.  The determination committees oversee these auctions.
  • Classes of events that lead to credit events have been simplified.  In particular whether ‘restructuring’ is a credit event has been standardized (although the standards are different in North America, Asia and Europe).  ‘Restructuring’ means such things as changing the maturity of a bond, or changing its currency.
  • There is now a ‘lookback period’ for credit events regardless of when a CDS is traded.  What this means is that credit events that have happened in the past 60 days (only) can trigger a contract payout.  This simplifies things because the same CDS traded on different days is now treated identically in this regard.
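As a very rough illustration of the auction averaging idea mentioned above: the real ISDA auction has two stages, rules for discarding off-market and crossing quotes, and an open-interest phase, all of which are ignored here.  This hypothetical Python sketch (the function name and the discard rule are my own) simply averages the dealers’ bid/offer midpoints after dropping the most extreme ones:

```python
def initial_auction_midpoint(quotes, discard_frac=0.25):
    """Very rough sketch: average the dealers' bid/offer midpoints after
    discarding the most extreme quotes.  The real ISDA auction rules are
    considerably more involved."""
    mids = sorted((bid + offer) / 2 for bid, offer in quotes)
    k = int(len(mids) * discard_frac)        # drop the k lowest and k highest mids
    kept = mids[k:len(mids) - k] if k else mids
    return sum(kept) / len(kept)

# Five hypothetical dealer (bid, offer) quotes for the defaulted bonds,
# as a percentage of face value; one dealer is well off-market:
quotes = [(39.0, 41.0), (38.5, 40.5), (40.0, 42.0), (30.0, 32.0), (39.5, 41.5)]
print(initial_auction_midpoint(quotes))  # 40.0
```

The off-market quote of 30/32 is discarded rather than dragging the settlement price down, which is the broad intent (if not the detail) of the real auction rules.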

Terminology and a Little History

The changes described so far in this article were introduced in 2009.  For North America, which went first, this was known as the ‘CDS Big Bang’.  The standard contract terms thus introduced were known as the ‘Standard North American CDS Contract’ or ‘SNAC’ (pronounced ‘snack’).  The later changes in Europe were known as the ‘CDS Small Bang’.  The final standardization of Asian contracts occurred later still.

Much more detail on all of this can be found on the links to the excellent MarkIt papers above.

Future Changes

Further standardization in the credit default swap market will occur as a result of the Dodd-Frank Act in the USA. This mandates that standard swaps (such as standard CDS) be traded through a ‘swap execution facility’ (SEF). It further mandates that any such trades be cleared through a central clearing house.  Europe is likely to impose a similar regulatory regime, but is behind the United States.  More detail on SEFs and clearing houses is below.

The primary aims of these changes are:

1/ Greater transparency of trading. Currently many swaps are traded over-the-counter with no disclosure other than between the two counterparties. This makes it difficult to assess the size of the market, or the effects of a default.

2/ Reduced risk in the market overall from the bankruptcy of one participant.

The exact details of these changes are still being worked on by the regulators.

Swap Execution Facilities (SEFs)

At the time of writing it’s not even clear exactly what a ‘SEF’ is.  The Act defines a SEF as a “facility, trading system or platform in which multiple participants have the ability to execute or trade Swaps by accepting bids and offers made by other participants that are open to multiple participants”. That is, a SEF is a place where any participant can see and trade on current prices. There are some additional requirements of SEFs relating to providing public data relating to price and volume, and preventing market abuses.

In many ways a SEF will be very similar to an existing exchange. As mentioned the exact details are still being worked on.

A number of the existing electronic platforms for the trading of CDS are likely to become SEFs.

Clearing Houses

Central clearing houses are another mechanism for reducing risk in a market.

When a trade is done both parties to the trade can agree that it will be cleared through a clearing house.  This means that the clearing house becomes the counterparty to both sides of the trade: rather than bank A buying from bank B, bank A buys from the clearing house, and bank B sells to the clearing house.

Obviously the clearing house has no risk from the trades themselves.  The clearing house is exposed to the risk that either bank A or bank B goes bankrupt and thus can’t pay its obligations from the trade.  To mitigate this the clearing house will demand cash or other assets from both banks A and B.  This is known as ‘margin’.

The advantage of this arrangement is that the clearing house can guarantee that bank A will be unaffected even if bank B goes bankrupt.  The only counterparty risk for bank A is that the clearing house itself goes bankrupt.  This is unlikely since the clearing house will have no market risk, be well capitalized, and demands margin for all transactions.

Clearing houses and exchanges are often linked (and may be the same entity), but they are distinct concepts: the exchange is the place where you go to get prices and trade, the clearing house deals with the settlement of the trade. Usually clearing houses only have a restricted number of ‘members’ who are allowed to clear trades. Anyone else wanting clearing services has to get them indirectly through one of these members.

At the time of writing there are already a few central clearing houses for credit default swaps in operation, and more are on the way.


Since 2007 contracts for credit default swaps have been standardized.  This has simplified the way in which the market works overall: it’s reduced the scope for difficulties when a credit event happens, simplified the processing of premium payments, and allowed similar CDS contracts to be netted together more easily.  At the same time it has made understanding the mechanics of the market more difficult.

Further changes are in the pipeline for the CDS market to use ‘swap execution facilities’ and clearing houses.

August 6, 2011

Closures in C#

Filed under: .net, c#, closure — Tags: , , — richnewman @ 3:08 am


There seems to be some confusion about closures in C#: people are mystified as to what they are and there’s even an implication that they don’t work the way you’d expect.

As this short article will explain they are actually quite simple, and do work the way you’d expect if you’re an object oriented programmer.  The article will also briefly look at why there is confusion surrounding them, and discuss whether they are an appropriate tool in an object oriented program.


Wikipedia defines a closure as ‘a first-class function with free variables that can be bound in the lexical environment’.  What that means is that a ‘closure’ is a function that can access variables from the environment where it is declared without them being explicitly passed in as parameters.  In object-oriented programming a method in a class is a closure of sorts: it can access fields of the class directly without needing them to be passed in.  However, more usually in C# ‘closure’ refers to a function declared as an anonymous delegate that uses variables that are not explicitly passed into it, but are available at the point it is created.

Basic Example

Consider the code below:

        internal void MyFunction()
        {
            int x = 1;
            Action action = () => { x++; Console.WriteLine(x); };
            action();              // Outputs '2'
        }

Here we define an anonymous delegate (a function) ‘action’, and call it.  This increments x and outputs it to the console, even though x isn’t passed into it.  x is simply declared in the ‘lexical environment’ (the calling method).

‘action’ is a closure, and we can say it is ‘closed over’ x, and that x is an ‘upvalue’.  Note that it’s only a closure because of the way it uses x.  Not all anonymous delegates are closures.

By the way, in the Java community anonymous functions frequently are referred to as ‘closures’.  (The Java community has been debating whether ‘closures’ should be added to the language for some time.)

Where’s the Confusion?

The example above is pretty clear and simple.  So how has it caused confusion?

The answer is that the value of x in MyFunction after the ‘action’ call is now 2.  Furthermore, x is completely shared between MyFunction and the action delegate: any code that changes the value in one changes the value in the other.

Consider the code below:

        internal void MyFunction()
        {
            int x = 1;
            Action action = () => { x++; Console.WriteLine(x); };
            action();              // Outputs '2'
            x++;
            action();              // Outputs '4'
        }

Here we call ‘action’, increment x in the calling method (MyFunction), and then call ‘action’ again.  Overall we started with x at 1, and incremented it 3 times, twice in our ‘action’ delegate and once in the calling method.  So it’s no surprise that the shared variable ends up with a value of 4.

This shows that we can change our ‘upvalue’ in the calling method and it is then reflected in our next call to the function: x is genuinely shared in this example.

Whilst this is a little odd (see below) it’s perfectly logical: x is shared between the calling method and any calls to our ‘action’ function.  There’s only one version of x.

This isn’t the way closures work in most functional programming languages (not that you could easily implement the example above, since all variables are immutable in functional languages).  The concept of closures has come from functional languages, so many people are surprised to find them working this way in C#.  There is more on this later.

Closures and Scope

This becomes even odder if we allow the local variable x to go out of scope (which would usually lead to it being destroyed), but retain a reference to the delegate that uses it:

        internal void Run()
        {
            Action action = MyFunction();
            action();              // Outputs '5'
        }

        internal Action MyFunction()
        {
            int x = 1;
            Action action = () => { x++; Console.WriteLine(x); };
            action();              // Outputs '2'
            x++;
            action();              // Outputs '4'
            return action;
        }

Here our Run method retrieves the action delegate from MyFunction and calls it.  When it calls it the value of x is 4 (from the activity in MyFunction), so it increments that and outputs 5.  At this point MyFunction is out of scope so the local variable x would normally have been destroyed.

Again, logically this is what we’d expect, but it looks strange.

This also gives an indication that closures are not trivial to implement: the compiler has to ensure the captured local variable outlives the method call that declared it.  (In C# it does this by hoisting x into a compiler-generated class on the heap behind the scenes.)

A Little Odd?

For an object-oriented programmer this is a little odd.  This is because we’re not used to being able to share local variables with a separate method in this way.

Normally if we want a method to act on a local variable we have to pass it in as a parameter.  If the local variable is a value type as above it gets copied, and changing it in the method will not affect its value in the calling method.  So if we wanted to use it in the calling method we’d have to pass it back explicitly as a return value or an output parameter.

Of course, there are good reasons why we don’t usually allow a method to access any variable in a calling method (quite apart from the practicalities of actually being able to do it with methods other than anonymous functions).  These are to do with encapsulation and ensuring we can maintain state in a way that’s easy to deal with.  We only really allow data to be shared at a class level, or globally if we’re using static variables, although in general we try to keep them to a minimum.

So in some ways closures of this kind break our usual object-oriented encapsulation.  My feeling is that they should be used sparingly in regular object-oriented code as a result.

Other writers have gone further than this, because if you don’t understand that an upvalue is fully shared between the anonymous function and the calling code you can get unexpected behaviour.  See, for example, this article ‘Closures in C# Can Be Evil’.


The concepts behind closures in C# are actually fairly straightforward.  However, if we use them it’s important we understand them and the effects on scope, or we may get behaviour we don’t expect.

August 4, 2011

A Beginner’s Guide to Credit Default Swaps (Part 3)


Part 1 of this series of articles described the basic mechanics of a credit default swap.

Part 2 started to describe some of the changes in the market since part 1 was written.  This part will continue that description by describing the upfront fee that is now paid on a standard CDS contract, and the impact of the changes on how CDS are quoted in the market.

Standard Premiums mean there is a Fee

Part 1 discussed how CDS contracts have been standardized.  One of the ways in which they have been standardized is that there are now standard premiums.

Now consider the case where I buy protection on a five-year CDS.  I enter into a standard contract with a premium of 500 basis points (5%).  It may be that the premium I would have paid under the old nonstandard contract for the same dates and terms would have been 450 basis points.  However, now I’m paying 500 basis points.

Clearly I need to be compensated for the 50 bps difference or I won’t want to enter into the trade under the new terms.

As a result an upfront fee is paid to me when the contract is started.  This represents the 50 basis points difference over the life of the trade, so that I am paying the same amount overall as under the old contract.

Note that in this case I (the protection buyer) am receiving the payment, but it could easily be that I pay this upfront fee (if, for example, the nonstandard contract would have traded at 550 bps).

Upfront Fee Calculation

The calculation of the fee from the ‘old’ premium (spread) is not trivial.  It takes into account discounting, and also the possibility that the reference entity will default, which would mean the premium would not be paid for the full life of the trade.  However, this calculation too has been standardized by the contracts body (ISDA).  There is a standard model that does it for us.
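To make the shape of this calculation concrete, here is a deliberately toy Python sketch.  It is emphatically not the ISDA standard model: it assumes a flat hazard rate implied by the old par spread and a fixed recovery rate, quarterly premiums, and continuous discounting, purely for illustration, and the function name and parameters are my own:

```python
import math

def upfront_fee(par_spread, std_coupon, years, rate=0.02, recovery=0.4):
    """Toy sketch of the upfront fee per unit notional.  Positive means the
    protection buyer receives the fee (standard coupon above the old par
    spread); negative means the buyer pays it.  The real ISDA standard
    model is more sophisticated."""
    hazard = par_spread / (1.0 - recovery)   # flat default intensity
    annuity = 0.0                            # risky PV of 1 per annum, paid quarterly
    dt = 0.25
    t = dt
    while t <= years + 1e-9:
        survival = math.exp(-hazard * t)     # chance the premium is still being paid
        discount = math.exp(-rate * t)
        annuity += dt * survival * discount
        t += dt
    return (std_coupon - par_spread) * annuity

# The example above: 5y CDS, standard coupon 500bp, old-style par spread 450bp
fee = upfront_fee(par_spread=0.045, std_coupon=0.05, years=5)
print(fee)  # positive: the protection buyer receives the fee
```

Note how the 50 bps difference is scaled by a ‘risky annuity’ rather than simply multiplied by five years: the premium stream may terminate early on default, and later payments are discounted.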

The Full First Coupon means there is a Fee

In the example in part 1 I discussed how I might pay for a full three months protection at the first premium payment date for a CDS trade, even though I hadn’t had protection for three months.

Once again I need compensation for this or I will prefer to enter into the old contract.  So once again there is a fee paid to me when I enter into the trade.

This is known as an ‘accrual payment’ because of the similarity to accrued interest payment for bonds.  Here the calculation is simple: it’s the premium rate applied to the face value of the trade for the period from the last premium payment date to the trade date.

That is, it’s the amount I’ll be paying for protection that I haven’t received as part of the first premium payment.  Note no discounting is applied to this.
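Because the accrual calculation is simple, it can be sketched in a couple of lines (hypothetical function name, assuming an ACT/360 day count):

```python
def accrual_payment(premium_rate, notional, days_since_last_coupon, day_count=360):
    """Sketch of the accrual payment made to the protection buyer at trade
    time: premium accrued from the last standard payment date to the trade
    date, with no discounting applied."""
    return notional * premium_rate * days_since_last_coupon / day_count

# Hypothetical numbers: 500bp standard premium, $100m notional, traded
# 15 days after the last standard premium payment date:
print(accrual_payment(0.05, 100_000_000, 15))
```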

Upfront Fee/Accrual Payment

So in summary the new contract standardization means that a payment is now always made when a standard CDS contract is traded.

Part of the payment is the upfront fee that compensates for the difference between the standard premium (100 or 500 bps in North America) and the actual premium for the trade.  This can be in either direction (payment from protection buyer to seller or vice versa).  Part of the payment is the accrual payment made to the protection buyer to compensate them for the fact that they have to make a full first coupon payment.

How CDS are Quoted in the Market

Prior to these changes CDS were traded by simply quoting the premium that would be paid throughout the life of the trade.

With the contract standardization the premium paid through the life of the trade clearly will not vary with market conditions (it will always be 100 or 500 bps in North America, for example), so quoting it makes little sense.

Instead the dealers will quote one of:

a) Points Upfront
‘Points upfront’ or just ‘points’ refer to the upfront fee as a percentage of the notional.  For example, a CDS might be quoted as 3 ‘points upfront’ to buy protection.  This means the upfront fee (excluding the accrual payment) is 3% of the notional.  ‘Points upfront’ have a sign: if the points are quoted as a negative then the protection buyer is paid the upfront fee by the protection seller.  If the points are positive it’s the other way around.

b)  Price
With price we quote ‘like a bond’. We take price away from 100 to get points:
That is, points = 100 – price.  So in the example above where a CDS is quoted as 3 points to buy protection, the price will be 97.   The protection buyer still pays the 3% as an upfront fee of course.

c)  Spread
Dealers are so used to quoting spread that they have carried on doing so in some markets, even for standard contracts that pay a standard premium.  That is they still quote the periodic premium amount you would have been paying if you had bought prior to the standardization.  As already mentioned, there is a standard model for turning this number into the upfront fee that actually needs to be paid.
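The relationships between points upfront, price and the actual fee paid are mechanical, and can be captured in a short Python sketch (the helper names are my own):

```python
def points_from_price(price):
    """Convert a bond-style 'price' quote to points upfront: points = 100 - price."""
    return 100.0 - price

def upfront_fee_from_points(points, notional):
    """Upfront fee implied by a points-upfront quote.  Positive points:
    the protection buyer pays the fee; negative: the buyer receives it."""
    return points / 100.0 * notional

# The example above: a CDS quoted at a price of 97, i.e. 3 points upfront
points = points_from_price(97.0)
print(points)  # 3.0
print(upfront_fee_from_points(points, 10_000_000))  # fee on a $10m notional
```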


This part concludes the discussion of the changes in the mechanics of CDS trading since 2007.  As you can see, in many ways the standardization of the CDS market has actually made it more complicated.  The things to remember are that premiums, premium and maturity dates, and the amounts paid at premium dates have all been standardized in a standard contract.  This has meant there is an upfront fee for all standard CDS, and that they are quoted differently in the market from before.  It has also meant that CDS positions can be more easily netted against each other, and that the mechanics of calculating and settling premiums have been simplified.

Part 4 of this series will examine some of the other changes since 2007, and changes that are coming.

July 19, 2011

A Beginner’s Guide to Credit Default Swaps (Part 2)


Part 1 of the ‘Beginner’s Guide to Credit Default Swaps’ was written in 2007. Since that time we have seen what many are calling the greatest financial crisis since the Great Depression, and a global recession.

Rightly or wrongly, some of the blame for the crisis has been attributed to credit derivatives and speculation in them.  This has led to calls for a more transparent and better regulated credit default swap (CDS) market. Furthermore the CDS market has grown very quickly, and by 2009 it had become clear that some simple changes to operational procedures would benefit everyone.

As a result many changes in the market have already been implemented, and more are on the way. This article will discuss these changes.  It will focus primarily on how the mechanics of trading a credit default swap have changed, rather than the history of how we got here or why these changes have been made. I’ll also briefly discuss the further changes that are on the way.

Overview of the Changes

The first thing to note is that nothing has fundamentally changed from the description of a credit default swap in part 1. A credit default swap is still a contract that provides a kind of insurance against a company defaulting on its bonds. If you have read and understood part one then you should understand how a credit default swap works.

The main change that has happened is that credit default swap contracts have been standardized. This standardization falls into three broad categories:

  1. Changes to the premium amounts, the premium and maturity dates, and the premium payments that simplify the mechanics of CDS trading.
  2. Changes to the processes around identifying whether a credit event has occurred.
  3. Changes to the processes around what happens when a credit event has occurred.

Items 2 and 3 are extremely important, and have removed many of the problems that were discussed in part 1 relating to credit events. However, they don’t affect the way credit default swaps are traded as fundamentally as item 1, and are arguably more boring, so we’ll start with item 1.

The Non-Standard Nature of Credit Default Swaps Previously

If I buy 100 IBM shares and then buy 100 more I know that I have a position of 200 IBM shares.  I can go to a broker and sell 200 IBM shares to get rid of (close out) this position.

One of the problems with credit default swaps (CDS) as described in part 1 of this series of articles is that you couldn’t do this.  Every CDS trade was different, and it was consequently difficult to close out positions.

Using the description in part 1, consider the case where I have some senior IBM bonds.  I have bought protection against IBM default using a five year CDS.  Now I decide to sell the bonds and want to close out my CDS.  It’s difficult to do this by selling a five year CDS as described previously.  Even if I can get the bonds being covered, the definition of default, the maturity date and all the premium payment dates to match exactly it’s likely that the premiums to be paid will be different from those on the original CDS.  This means a calculation has to be done for both trades separately at each premium payment date.


To address this issue a standard contract has been introduced that has:

1.  Standard Maturity Dates

There are four dates per year, the ‘IMM dates’ that can be the maturity date of a standard contract: 20th March, 20th June, 20th September, and 20th December.  This means that if today is 5th July 2011 and I want to trade a standard five-year CDS I will normally enter into a contract that ends 20th September 2016.  It won’t be a standard CDS if I insist my maturity date has to be 5th July 2016.
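The roll-forward convention in this example can be sketched as follows.  This is a Python illustration with hypothetical function names; it encodes the 2011 rule described above (the market’s roll convention has been adjusted since), and ignores edge cases such as a 29th February trade date:

```python
import datetime

IMM_MONTHS = (3, 6, 9, 12)  # the 20th of March, June, September, December

def next_standard_date(d):
    """First standard CDS date (20 Mar/Jun/Sep/Dec) on or after d."""
    for year in (d.year, d.year + 1):
        for month in IMM_MONTHS:
            candidate = datetime.date(year, month, 20)
            if candidate >= d:
                return candidate

def standard_maturity(trade_date, tenor_years):
    """Roll trade_date + tenor forward to the next standard quarterly date."""
    unadjusted = trade_date.replace(year=trade_date.year + tenor_years)
    return next_standard_date(unadjusted)

# The example above: a 'five-year' CDS traded on 5th July 2011
print(standard_maturity(datetime.date(2011, 7, 5), 5))  # 2016-09-20
```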

2.  Standard Premium Payment Dates

The same four dates per year are the dates on which premiums are paid (and none other).  As a result three months of premium are paid at every premium payment date.

Note that the use of IMM dates for CDS maturity and premium payment dates was already common when I wrote part 1 of the article.

3.  Standard Premiums

In North America, standard contracts ONLY have premiums of 100 or 500 basis points per annum (1% or 5%).  In Europe, Asia and elsewhere a wider range of premiums is traded on standard contracts, although this is still restricted.  How this works in practice will be explained in part 3.

4.  Payment of Full First Coupon

Standard contracts pay a ‘full first coupon’.  What this means is that if I buy a CDS midway between the standard premium payment dates I still have to pay a full three months’ worth of premium at the next premium date.  Note that ‘coupon’ here means ‘premium payment’.

For example, if I enter into a CDS with face value $100m on 5th July 2011 with a premium of 5% I will have to pay 3 months x 5% x 100m on the 20th September.  This is in spite of the fact that I have not been protected against default for the full three months.
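Checking the arithmetic of this example with a trivial Python sketch (approximating the coupon period as exactly a quarter of a year; the function name is my own):

```python
def full_first_coupon(premium_rate, notional, quarter_fraction=0.25):
    """The full quarterly premium paid on the first standard payment date,
    regardless of when within the quarter the trade was actually done."""
    return notional * premium_rate * quarter_fraction

# The example above: $100m face value at a 5% premium pays roughly $1.25m
# on the first standard premium date:
print(full_first_coupon(0.05, 100_000_000))
```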

Note that for the standard premiums and the payment of full first coupon to work we now have upfront fees for CDS.  Again this will be explained in more detail in part 3.

Impact of these Changes

What all this means is that we have fewer contract variations in the market.  The last item in particular means that a position in any given contract always pays the same amount at every premium date: we don’t need to make any adjustments for when the contract was traded.

In fact, in terms of the amount paid EVERY contract with the same premium (e.g. 500 bps) pays the same percentage of face value at a premium date, regardless of reference entity.  This clearly simplifies coupon processing.  It also allows us to more easily net positions in credit default swaps in our systems.


One of the major changes in the CDS market since part 1 was written is that contracts have been largely standardized.  More detail on this and other changes will be given in part 3.

May 28, 2010

A Comparison of Some Dependency Injection Frameworks: Part 7 Spring JavaConfig


This series of articles is examining a number of dependency injection frameworks.  Part 3 of the series looked at the Spring framework configured with XML.  Part 5 and part 6 looked at Guice.  This article will apply the same tests to the Spring framework but configured in code using the JavaConfig download.


The code for this article is available.

Note that JavaConfig used to be a separate download but is now being folded into core Spring (in Spring 3.0 and later versions).  However for this article I used the separate download version and Spring 2.2. 


Spring JavaConfig is very similar to Spring XML except that all configuration is done in code rather than using XML.  The syntax for retrieving objects from a configured container is identical to Spring XML.

Specifically configuration is done by writing a class and annotating it with the @Configuration annotation.   We then configure individual objects by writing methods within the class.  We make the methods return the object we require, usually by just instantiating it with the ‘new’ keyword.  Then we annotate the method with the @Bean annotation:

@Configuration
public class ApplicationConfig {
      @Bean
      public Movie AndreiRublevMovie() {
            return new Movie("Andrei Rublev", "Andrei Tarkovsky");
      }
}

The effect of this is identical to configuring a bean in XML in Spring XML.  The method name in JavaConfig (AndreiRublevMovie) corresponds to the ID of the object in Spring XML.  As we shall see, the various techniques for configuring beans in Spring XML are also available in JavaConfig.

We configure our container with a different application context and a constructor that takes the name of the configuration class (or classes):

            JavaConfigApplicationContext context = new JavaConfigApplicationContext(ApplicationConfig.class);

We can now retrieve objects from the container using the getBean method and the method name as a (string) identifier as usual:

            Movie andreiRublev = (Movie) context.getBean("AndreiRublevMovie");

Testing Spring JavaConfig

JavaConfig uses exactly the same syntax for retrieving and using objects as does Spring XML.  As a result our StartUp class for JavaConfig is almost identical to the StartUp class for Spring XML.  The only difference is the application context creation described above.

So the only code differences between the tests for Spring XML and JavaConfig are in the way the container is configured.  As you can see this is very straightforward.

As a result I’m not going to go through the configuration for all of the tests.

Note that:

  • For tests 2 and 3 it’s very easy to define different objects to be injected into the same class in different circumstances: you just instantiate two instances of the class in different methods and give them the dependent objects, as shown below, where simpleMovieFinder and colonDelimitedMovieFinder are configuration methods that return the appropriate MovieFinder objects.
  • For tests 4 and 5, to specify singleton or prototype scope you just add the ‘scope’ annotation (the default is singleton).

      @Bean
      public MovieLister simpleMovieLister() {
            return new MovieLister(simpleMovieFinder());
      }

      @Bean
      public MovieLister colonDelimitedMovieLister() {
            return new MovieLister(colonDelimitedMovieFinder());
      }
  • For test 6, using a factory method to create a class is again intuitive and simple.  You just create the factory in one configuration method, and then call the factory method on it directly in a second configuration method where you want your object returned:

      @Bean
      public ComplexMovieListerFactory complexMovieListerFactory() {
            return new ComplexMovieListerFactory();
      }

      @Bean
      public MovieLister complexMovieLister() {
            return complexMovieListerFactory().build();
      }


JavaConfig does seem a step forward from the XML configuration we have previously been wrestling with for dependency injection.  In particular it has the following advantages:

  • There is no XML, and configuring in code means that there is some level of compile-time checking.  However, you can (and usually do) use string identifiers to retrieve objects, so there is still scope for typos that won’t manifest themselves until runtime.  Note that it is possible to retrieve objects from the container by type to avoid this.
  • There is no need for code changes in classes (no annotations are applied to the classes you are injecting or injecting into)
  • The configuration class is very simple, and uses syntax familiar to any developer.  Unlike any other framework here you’re not being compelled to learn something radically different to use JavaConfig.  All you have to learn is how the few attributes you need are used.
  • We’re configuring in code using ordinary Java syntax.  This makes some of the constructs we have special syntax for in other frameworks seem very odd and cumbersome.  In particular the factory test above almost seems silly: we get our object by returning it from a method, and obviously we can do that either by just instantiating it, or by instantiating another class that will build it and return it from a build method.

JavaConfig is so simple that it makes you wonder why you need a framework at all.  Previously in this series of articles I’ve suggested that the tests may be a little more advanced than you’d actually use in practice.  But apart from tests 4 and 5 (singleton and prototype creation) none of the tests really needs the Spring framework at all now we’re using JavaConfig.  We could take all the annotations out and just use the ApplicationConfig class as a factory class.  Even tests 4 and 5 could easily be coded without the framework.  More on this later.
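A sketch of that last point: singleton and prototype scope without any framework is just ordinary Java.  (Movie is the class used in this series; the lazy-field singleton is my own stand-in for Spring’s scope handling, not the article’s code.)

```java
// A hand-rolled 'configuration' class: plain Java, no framework.
class Movie {
    final String title;
    final String director;
    Movie(String title, String director) {
        this.title = title;
        this.director = director;
    }
}

class ApplicationConfig {
    private Movie andreiRublevMovie;  // cache backing the singleton scope

    // Singleton scope by hand: create once, always return the same instance.
    Movie andreiRublevMovie() {
        if (andreiRublevMovie == null) {
            andreiRublevMovie = new Movie("Andrei Rublev", "Andrei Tarkovsky");
        }
        return andreiRublevMovie;
    }

    // Prototype scope by hand: a new instance on every call.
    Movie stalkerMovie() {
        return new Movie("Stalker", "Andrei Tarkovsky");
    }
}
```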


Code configuration as used in JavaConfig seems a simple and intuitive alternative to the more usual XML configuration.

Part 8 of this series will look at Microsoft’s Unity framework.

May 22, 2010

A Comparison of Some Dependency Injection Frameworks: Part 6 Guice (Java) Continued


Part 5 of this series of articles started examining Guice as a dependency injection framework.  This article completes that, and makes some general comments on the Guice framework.

The code for this article is available (it’s the same code as in part 5).

Testing Guice (Continued)

Tests 4 and 5:  Creation of an object with singleton scope, creation of an object with prototype scope

As before we run tests 4 and 5 by making the MovieLister class from test 2 have singleton scope, and the MovieLister from test 3 have prototype scope.  We do this in the private module configuration classes set up in test 3.

The syntax for specifying that a class is a singleton is straightforward, and can be seen in the SimpleMovieListerPrivateModule class:
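In outline the module might look like the sketch below.  The MovieFinder binding and the exact names are my assumptions based on earlier parts of this series, not the article’s exact listing:

```java
// Sketch of a Guice private module with singleton scope added.
// SimpleMovieFinder and the surrounding bindings are assumptions;
// the relevant addition is the .in(Singleton.class) clause.
public class SimpleMovieListerPrivateModule extends PrivateModule {
    @Override
    protected void configure() {
        bind(MovieFinder.class).to(SimpleMovieFinder.class);
        // .in(Singleton.class) gives the MovieLister singleton scope.
        bind(MovieLister.class).in(Singleton.class);
        expose(MovieLister.class);
    }
}
```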


Note that we’ve just added the simple syntax ‘.in(Singleton.class)’ to our binding to specify the singleton.

The default scope here is prototype, so in the ColonDelimitedMovieListerPrivateModule class we don’t need to do anything special.  If we make the simple singleton change above, the tests pass.

Test 6: Use of a factory class and method to create a dependent object.

Test 6 is intended to test our ability to use code (in a different class) to generate the object we require when we request it from the container.  Here we want to return a MovieLister from a factory class.

In Guice you can do this with a ‘provider’: the provider is the equivalent of the factory from earlier Spring examples.  For our test we write a class that implements the Provider<MovieLister> interface.  As you can see this interface has a ‘get’ method that returns a MovieLister, which is our factory method.
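To make the shape concrete, here is a minimal self-contained sketch.  Guice’s real interface is com.google.inject.Provider<T>; the local declaration below just mirrors its single get method, and MovieLister is a stand-in for the class used in this series:

```java
// Simplified, self-contained sketch of the provider pattern Guice uses.
interface Provider<T> {
    T get();  // the factory method the container calls
}

class MovieLister {
    // Stand-in for the MovieLister used throughout this series.
}

// The provider is the factory: the container calls get() whenever it
// is asked for the bound type.
class ComplexMovieListerFactory implements Provider<MovieLister> {
    public MovieLister get() {
        return new MovieLister();
    }
}
```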

We need to bind this so that when we request a MovieLister our factory method gets called.  To do this we put the binding code below into our main module (configuration) class:
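The binding might look something like this sketch (my reconstruction, not the article’s exact listing; Complex is a binding annotation):

```java
// Sketch: bind MovieLister, qualified by the Complex binding annotation,
// to our provider class. The annotation distinguishes this binding from
// the other MovieLister bindings already in the container.
bind(MovieLister.class)
    .annotatedWith(Complex.class)
    .toProvider(ComplexMovieListerFactory.class);
```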


Note that we need an annotation because we already have other configurations that return type MovieLister from the container.  Now if we request an object from the container with the MovieLister/Complex key as below Guice will run the get method on our ComplexMovieListerFactory and return the result, which is what we want:

            MovieLister complexLister = guiceInjector.getInstance(Key.get(MovieLister.class, Complex.class));
            Movie[] tarkovskyMovies = complexLister.moviesDirectedBy("Andrei Tarkovsky");
            for (Movie movie : tarkovskyMovies) {
                  // ... use each movie (e.g. print its details)
            }

Whilst this isn’t particularly complex it is quite a different approach to the configuration approaches we’ve already seen in Guice: now we’re implementing an interface to get Guice to configure an object.

Test 7: Injection of the container itself (so it can be used to retrieve objects).

The code for this again is in the ComplexMovieListerFactory class.  Here Guice is actually simpler than Spring, since it doesn’t require implementation of a specific interface to inject the container.  You simply use the usual syntax in your class: here we write a constructor with the appropriate signature to take the container, and mark that constructor with the @Inject attribute:

public class ComplexMovieListerFactory implements Provider<MovieLister> {
      private Injector injector;

      @Inject
      public ComplexMovieListerFactory(Injector injector) {
            this.injector = injector;
      }
      // remainder of the class (including the get method) omitted
}

The rest of the ComplexMovieListerFactory class is fairly self-explanatory.  One thing to note is that (as far as I can see) the Guice container has no equivalent of Spring’s ‘getBeansOfType’ method, so we’re having to retrieve the Movie objects individually from the container.  My feeling is this is not really important, as the scenario is a little unrealistic: in practice we’d be unlikely to want to retrieve instances of all objects of a given type in the container.

Comments on Guice

Guice is directly trying to deal with some of the problems with XML configuration I outlined in the first article in this series.  In particular it’s trying to remove those incomprehensible runtime errors you can get if your XML configuration is wrong.

For simple injection scenarios it does seem like a step forward.  However, I find Guice quite hard to like (and when I started this analysis I really wanted to like it).

  • There are several different ways of configuring an object.  As you can see from the length of this article and part 5, this can get quite confusing and difficult.  You have to learn and understand the different scenarios to be able to use the framework.
  • The different configuration methods are used in different circumstances depending on the complexity of the configuration scenario.  As we have seen, this can include changing the configuration for an existing object when a new object of the same type needs to be configured.  Again you have to understand and be able to handle this. 
  • For any moderately complex injection scenario you are having to change the code of your classes by annotating them.  This effectively means we have construction logic back in our code classes, albeit in a limited way and in the form of metadata rather than actual code.
  • The code in the configuration classes isn’t particularly intuitive.  In particular the use of private modules for the solution to the so-called ‘robot legs’ problem leads to quite complex code.

Overall my feeling is that Guice does address some of the issues with XML configuration that we see in frameworks like Spring, but it then introduces a number of issues of its own.  Admittedly I have tested some reasonably advanced scenarios here, and some of them may not be needed that frequently in real production code.  I haven’t personally used Guice on any real-life project, but on the basis of this analysis I think I would be reluctant to attempt to do so.


Guice was the most disappointing framework I looked at.  Its design goals are explicitly to solve some of the problems with XML configuration discussed in part 1 of this series of articles.  However, it seems to me it does this in a way that only works well in very simple dependency injection scenarios.  As soon as there is a moderate level of complexity the developer is having to learn several different configuration techniques, some of which aren’t easy to use, and how to apply them.

Part 7 will look at Spring JavaConfig.
