Connecting to Tibco EMS from a C# Client


This is a slightly esoteric post, but hopefully it will be of use somewhere.  Below is some reasonably simple code that shows the basic use cases for connecting to a Tibco EMS Server topic from a C# client using Tibco’s TIBCO.EMS.dll.

It actually points at a locally installed instance of EMS.  That is, the server code is running on the workstation.  If you have the Windows EMS installation you can just install it in the default path and the instructions below should work.  Alternatively it’s relatively simple to change the server name from ‘localhost’ and put a username and password in the code to connect to a secure EMS Server.

How to Use the Code

  • Get hold of the Windows Tibco EMS install, and install it on your workstation on the default path.  Note that Tibco do NOT provide this for free: you need to have paid them for a licence.  If you work for a large organization you will probably find you already have a licence.
  • Create a C# console application called ‘TestEMS’, and reference TIBCO.EMS.dll (the C# EMS assembly) from the install.
  • Paste in the code below.
  • Run the code.  You should see a console window showing that a message has been sent by a publisher and received by a subscriber.

If you want to connect to an existing EMS server in your organization all you need is the TIBCO.EMS.dll assembly, not a full client install. You need to change the appropriate parts of the code below to point to the server: change ‘localhost’ to the server name, and add a username and password if they are needed.  Obviously you don’t need to start the server locally if you do this.
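For example, the connection code might change as in the sketch below. The server name, port, user name and password here are placeholders for your own values, not real ones:

```csharp
// Hypothetical sketch: connecting to an existing, secured EMS server.
// "emsserver01:7222", "myUser" and "myPassword" are placeholder values.
TopicConnectionFactory factory = new TIBCO.EMS.TopicConnectionFactory("emsserver01:7222");
TopicConnection connection = factory.CreateTopicConnection("myUser", "myPassword");
```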

What the Sample Does

As you can see, the code starts the local EMS server, then creates a topic publisher and a subscriber to the same topic.  It then sends a ‘Hello World’ message using the publisher.  The subscriber’s message handler prints out the message to the console when it receives it.

The sample code is slightly more sophisticated than the most basic use case.  It also shows how to set a property on a message, and then how to filter the subscription using a message selector based on that property.  The message is sent with an additional property of ‘Owner’ with value ‘Rich Newman’.  The subscriber is only listening for messages with an Owner property that contains the string ‘Rich Newman’.  You can see this if you change the name of the owner in the message that’s sent: the message listener will not receive the message.


using System;
using System.Diagnostics;
using System.Threading;
using TIBCO.EMS;

namespace TestEMS
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Test started");
            new Program().Run();
            Console.ReadLine(); // Keep the console open so the subscriber can receive the message
        }

        private void Run()
        {
            StartEMSServer();
            Thread.Sleep(2000); // Give the local EMS server time to start
            CreateEMSServerTopicPublisher();
            CreateClientTopicSubscriber("Owner LIKE '%Rich Newman%'"); // Pass "" for no message selector
            EMSServerPublishThisMessage("Hello World", "Owner", "Rich Newman");
        }

        #region EMS Server

        private const string tibcoEMSPath = @"C:\tibco\ems\5.0\bin\";
        private readonly string tibcoEMSExecutable = tibcoEMSPath + "tibemsd.exe";
        private Process tibcoEMSProcess;

        public void StartEMSServer()
        {
            tibcoEMSProcess = new Process();
            ProcessStartInfo processStartInfo = new ProcessStartInfo(tibcoEMSExecutable);
            processStartInfo.WorkingDirectory = tibcoEMSPath;
            tibcoEMSProcess.StartInfo = processStartInfo;
            bool started = tibcoEMSProcess.Start();
        }

        TopicConnection publisherConnection;
        TopicSession publisherSession;
        TopicPublisher emsServerPublisher;

        private void CreateEMSServerTopicPublisher()
        {
            TopicConnectionFactory factory = new TIBCO.EMS.TopicConnectionFactory("localhost");
            publisherConnection = factory.CreateTopicConnection("", ""); // Username, password
            publisherSession = publisherConnection.CreateTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic generalTopic = publisherSession.CreateTopic("GeneralTopic");
            emsServerPublisher = publisherSession.CreatePublisher(generalTopic);
        }

        internal void EMSServerPublishThisMessage(string message, string propertyName, string propertyValue)
        {
            TextMessage textMessage = publisherSession.CreateTextMessage();
            textMessage.Text = message;
            textMessage.SetStringProperty(propertyName, propertyValue);
            emsServerPublisher.Publish(textMessage);
            Console.WriteLine("EMS Publisher published message: " + message);
        }

        #endregion

        #region EMS Client

        TopicConnection subscriberConnection;
        TopicSession subscriberSession;

        private void CreateClientTopicSubscriber(string messageSelector)
        {
            TopicConnectionFactory factory = new TIBCO.EMS.TopicConnectionFactory("localhost");
            subscriberConnection = factory.CreateTopicConnection("", ""); // Username, password
            subscriberSession = subscriberConnection.CreateTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic clientTopic = subscriberSession.CreateTopic("GeneralTopic");
            TopicSubscriber clientTopicSubscriber = subscriberSession.CreateSubscriber(clientTopic, messageSelector, true);
            clientTopicSubscriber.MessageHandler += new EMSMessageHandler(test_MessageHandler);
            subscriberConnection.Start(); // The connection must be started before messages are delivered
        }

        void test_MessageHandler(object sender, EMSMessageEventArgs args)
        {
            Console.WriteLine("EMS Client received message: " + args.Message.ToString());
        }

        #endregion
    }
}


Asynchronous Programming in .Net: Async and Await for Beginners


There are several ways of doing asynchronous programming in .Net.  Visual Studio 2012 introduces a new approach using the ‘await’ and ‘async’ keywords.  These tell the compiler to construct task continuations in quite an unusual way.

I found them quite difficult to understand using the Microsoft documentation, which annoyingly keeps saying how easy they are.

This series of articles is intended to give a quick recap of some previous approaches to asynchronous programming to give us some context, and then to give a quick and hopefully easy introduction to the new keywords.


By far the easiest way to get to grips with the new keywords is by seeing an example.  For this initially I am going to use a very basic example: you click a button on a screen, it runs a long-running method, and displays the results of the method on the screen.

Since this article is about asynchronous programming we will want the long-running method to run asynchronously on a background thread.  This means we need to marshal the results back on to the user interface thread to display them.

In the real world the method could be running a report, or calling a web service.  Here we will just use the method below, which sleeps to simulate the long-running process:

        private string LongRunningMethod(string message)
        {
            Thread.Sleep(2000); // Simulate a long-running operation
            return "Hello " + message;
        }

The method will be called asynchronously from a button click method, with the results assigned to the content of a label.

Coding the Example with Previous Asynchronous C# Approaches

There are at least five standard ways of coding the example above in .Net currently.  This has got so confusing that Microsoft have started giving the various patterns acronyms, such as the ‘EAP’ and the ‘APM’.  I’m not going to talk about those as they are effectively deprecated.  However it’s worth having a quick look at how to do our example using some of the other approaches.

Coding the Example by Starting our Own Thread

This simple example is fairly easy to code by just explicitly starting a new thread and then using Invoke or BeginInvoke to get the results back onto the UI thread.  This should be familiar to you:

        private void Button_Click_1(object sender, RoutedEventArgs e)
        {
            new Thread(() =>
            {
                string result = LongRunningMethod("World");
                Dispatcher.BeginInvoke((Action)(() => Label1.Content = result));
            }).Start();
            Label1.Content = "Working...";
        }

We start a new thread and hand it the code we want to run.  This calls the long-running method and then uses Dispatcher.BeginInvoke to call back onto the user interface thread with the result and update our label.

Note that immediately after we start the new thread we set the content of our label to ‘Working…’.  This is to show that the button click method continues immediately on the user interface thread after the new thread is started.

The result is that when we click the button our label says ‘Working…’ almost immediately, and then shows ‘Hello World’ when the long-running method returns.  The user interface will remain responsive whilst the long-running thread is running.

Coding the Example Using the Task Parallel Library (TPL)

More instructive is to revisit how we would do this with tasks using the Task Parallel Library.  We would typically use a task continuation as below.

        private void Button_Click_2(object sender, RoutedEventArgs e)
        {
            Task.Run<string>(() => LongRunningMethod("World"))
                .ContinueWith(ant => Label2.Content = ant.Result,
                    TaskScheduler.FromCurrentSynchronizationContext());
            Label2.Content = "Working...";
        }

Here we’ve started a task on a background thread using Task.Run.  This is a new construct in .Net 4.5.  However, it is nothing more complicated than Task.Factory.StartNew with preset parameters.  The parameters are the ones you usually want to use.  In particular Task.Run uses the default Task Scheduler and so avoids one of the hidden problems with StartNew.

The task calls the long-running method, and does so on a threadpool thread.  When it is done a continuation runs using ContinueWith.  We want this to run on the user interface thread so it can update our label.  So we specify that it should use the task scheduler in the current synchronization context, which is the user interface thread when the task is set up.

Again we update the label after the task call to show that it returns immediately.  If we run this we’ll see a ‘Working…’ message and then ‘Hello World’ when the long-running method returns.

Coding the Example Using Async and Await


Below is the full code for the async/await implementation of the example above.  We will go through this in detail.

        private void Button_Click_3(object sender, RoutedEventArgs e)
        {
            CallLongRunningMethod();
            Label3.Content = "Working...";
        }

        private async void CallLongRunningMethod()
        {
            string result = await LongRunningMethodAsync("World");
            Label3.Content = result;
        }

        private Task<string> LongRunningMethodAsync(string message)
        {
            return Task.Run<string>(() => LongRunningMethod(message));
        }

        private string LongRunningMethod(string message)
        {
            Thread.Sleep(2000); // Simulate a long-running operation
            return "Hello " + message;
        }

Asynchronous Methods

The first thing to realize about the async and await keywords is that by themselves they never start a thread.  They are a way of controlling continuations, not a way of starting asynchronous code.

As a result the usual pattern is to create an asynchronous method that can be used with async/await, or to use an asynchronous method that is already in the framework.  For these purposes a number of new asynchronous methods have been added to the framework.

To be useful to async/await the asynchronous method has to return a task.  The asynchronous method has to start the task it returns as well, something that maybe isn’t so obvious.

So in our example we need to make our synchronous long-running method into an asynchronous method.  The method will start a task to run the long-running method and return it.  The usual approach is to wrap the method in a new method.   It is usual to give the method the same name but append ‘Async’.  Below is the code to do this for the method in our example:

        private Task<string> LongRunningMethodAsync(string message)
        {
            return Task.Run<string>(() => LongRunningMethod(message));
        }

Note that we could use this method directly in our example without async/await.  We could call it and use ‘ContinueWith’ on the return value to effect our continuation in exactly the same way as in the Task Parallel Library code above.  This is true of the new async methods in the framework as well.
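For example, a sketch (the handler name here is invented, and the WPF labels from the examples above are assumed) of consuming LongRunningMethodAsync with ContinueWith rather than await:

```csharp
// Sketch: using LongRunningMethodAsync with a TPL continuation, no await needed.
private void Button_Click_NoAwait(object sender, RoutedEventArgs e)
{
    LongRunningMethodAsync("World")
        .ContinueWith(ant => Label3.Content = ant.Result,
            TaskScheduler.FromCurrentSynchronizationContext());
    Label3.Content = "Working...";
}
```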

Async/Await and Method Scope

Async and await are a smart way of controlling continuations through method scope.  They are used as a pair in a method as shown below:

        private async void CallLongRunningMethod()
        {
            string result = await LongRunningMethodAsync("World");
            Label3.Content = result;
        }

Here async is simply used to tell the compiler that this is an asynchronous method that will have an await in it.  It’s the await itself that’s interesting.

The first line in the method calls LongRunningMethodAsync, clearly.  Remember that LongRunningMethodAsync is returning a long-running task that is running on another thread.  LongRunningMethodAsync starts the task and then returns reasonably quickly.

The await keyword ensures that the remainder of the method does not execute until the long-running task is complete.  It sets up a continuation for the remainder of the method. Once the long-running method is complete the label content will update: note that this happens on the same thread that CallLongRunningMethod is already running on, in this case the user interface thread.

However, the await keyword does not block the thread completely.  Instead control is returned to the calling method on the same thread.  That is, the method that called CallLongRunningMethod will execute at the point after the call was made.

The code that calls LongRunningMethod is below:

        private void Button_Click_3(object sender, RoutedEventArgs e)
        {
            CallLongRunningMethod();
            Label3.Content = "Working...";
        }

So the end result of this is exactly the same as before.  When the button is clicked the label has content ‘Working…’ almost immediately, and then shows ‘Hello World’ when the long-running task completes.

Return Type

One other thing to note is that LongRunningMethodAsync returns a Task<string>, that is, a Task that returns a string.  However the line below assigns the result of the task to the string variable called ‘result’, not the task itself.

string result = await LongRunningMethodAsync("World");

The await keyword ‘unwraps’ the task.  We could instead have accessed the Result property of the task (string result = LongRunningMethodAsync(“World”).Result).  This would have worked but would simply have blocked the user interface thread until the method completed, which is not what we’re trying to do.

I’ll discuss this further below.


To recap, the button click calls CallLongRunningMethod, which in turn calls LongRunningMethodAsync, which sets up and runs our long-running task.  When the task is set up (not when it’s completed) control returns to CallLongRunningMethod, where the await keyword passes control back to the button click method.

So almost immediately the label content will be set to “Working…”, and the button click method will exit, leaving the user interface responsive.

When the task is complete the remainder of CallLongRunningMethod executes as a continuation on the user interface thread, and sets the label to “Hello World”.

Async and Await are a Pair

Async and await are always a pair: you can’t use await in a method unless the method is marked async, and if you mark a method async without await in it then you get a compiler warning.  You can of course have multiple awaits in one method as long as it is marked async.
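As a sketch of this, here is a small console example (not part of the WPF sample above; all names are illustrative) with two awaits in one async method:

```csharp
// Sketch: one async method containing two awaits; each await sets up
// a continuation within the same method.
using System;
using System.Threading.Tasks;

class MultipleAwaitsDemo
{
    static async Task<string> GetGreetingAsync()
    {
        // First await: the rest of the method runs as a continuation.
        string first = await Task.Run(() => "Hello");
        // Second await: a second continuation in the same method.
        string second = await Task.Run(() => "World");
        return first + " " + second;
    }

    static void Main()
    {
        // .Result blocks, which is acceptable in a console demo
        // (but would block a UI thread, as discussed above).
        Console.WriteLine(GetGreetingAsync().Result);
    }
}
```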

Aside: Using Anonymous Methods with Async/Await

If you compare the code for the Task Parallel Library (TPL) example with the async/await example you’ll see that we’ve had to introduce two new methods for async/await: for this simple example the TPL code is shorter and arguably easier to understand.  However, it is possible to shorten the async/await code using anonymous methods, as below. This shows how we can use anonymous method syntax with async/await, although I think this code is borderline incomprehensible:

        private void Button_Click_4(object sender, RoutedEventArgs e)
        {
            new Action(async () =>
            {
                string result = await Task.Run<string>(() => LongRunningMethod("World"));
                Label4.Content = result;
            })();
            Label4.Content = "Working...";
        }

Using the Call Stack to Control Continuations

Overview of Return Values from Methods Marked as Async

There’s one other fundamental aspect of async/await that we have not yet looked at.  In the example above our method marked with the async keyword did not return anything.  However, we can make all our async methods return values wrapped in a task, which means they in turn can be awaited on further up the call stack.  In general this is considered good practice: it means we can control the flow of our continuations more easily.

The compiler makes it easy for us to return a value wrapped in a task from an async method.  In a method marked async the ‘return’ statement works differently from usual.  The compiler doesn’t simply return the value passed with the statement, but instead wraps it in a task and returns that instead.

Example of Return Values from Methods Marked as Async

Again this is easiest to see with our example.  Our method marked as async was CallLongRunningMethod, and this can be altered to return the string result to the calling method as below:

        private async Task<string> CallLongRunningMethodReturn()
        {
            string result = await LongRunningMethodAsync("World");
            return result;
        }

We are returning a string (‘return result’), but the method signature shows the return type as Task<string>.  Personally I think this is a little confusing, but as discussed it means the calling method can await on this method.  Now we can change the calling method as below:

        private async void Button_Click_5(object sender, RoutedEventArgs e)
        {
            Label5.Content = "Working...";
            string result = await CallLongRunningMethodReturn();
            Label5.Content = result;
        }

We can await the method lower down the call stack because it now returns a task we can await on.  What this means in practice is that the code sets up the task and sets it running and then we can await the results from the task when it is complete anywhere in the call stack.  This gives us a lot of flexibility as methods at various points in the stack can carry on executing until they need the results of the call.

As discussed above when we await on a method returning type Task<string> we can just assign the result to a string as shown.  This is clearly related to the ability to just return a string from the method: these are syntactic conveniences to avoid the programmer having to deal directly with the tasks in async/await.

Note that we have to mark our method as ‘async’ in the method signature (‘private async void Button_Click_5′) because it now has an await in it, and they always go together.

What the Code Does

The code above has exactly the same result as the other examples: the label shows ‘Working…’ until the long-running method returns when it shows ‘Hello World’.  When the button is clicked it sets up the task to run the long-running method and then awaits its completion both in CallLongRunningMethodReturn and Button_Click_5.  There is one slight difference in that the click event is awaiting: previously it exited.  However, if you run the examples you’ll see that the user interface remains responsive whilst the task is running.

What’s The Point?

If you’ve followed all the examples so far you may be wondering what the point is of the new keywords.  For this simple example the Task Parallel Library syntax is shorter, cleaner and probably easier to understand than the async/await syntax.  At first sight async/await are a little confusing.

The answer is that for basic examples async/await don’t seem to me to be adding a lot of value, but as soon as you try to do more complex continuations they come into their own.  For example it’s possible to set up multiple tasks in a loop and write very simple code to deal with what happens when they complete, something that is tricky with tasks.  I suggest you look at the examples in the Microsoft documentation which do show the power of the new keywords.
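As a flavour of this, here is a minimal console sketch (names are illustrative, not from the Microsoft documentation) that starts several tasks in a loop and then awaits each of them in turn:

```csharp
// Sketch: starting multiple tasks in a loop and awaiting their results.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class LoopAwaitDemo
{
    static async Task<int> SumOfSquaresAsync()
    {
        var tasks = new List<Task<int>>();
        for (int i = 1; i <= 3; i++)
        {
            int n = i; // Capture a copy, not the loop variable itself
            tasks.Add(Task.Run(() => n * n));
        }

        int total = 0;
        foreach (var task in tasks)
            total += await task; // Continues when each task completes
        return total;
    }

    static void Main()
    {
        Console.WriteLine(SumOfSquaresAsync().Result); // 1 + 4 + 9
    }
}
```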


The full code for these examples is available to download.


This article has only covered the basics of the async/await keywords, although I think it’s addressed all the things that were confusing me when trying to learn about them from the Microsoft documentation.  There are some obvious things it hasn’t covered such as cancelling tasks, exception handling, unwrapping tasks (and why you might need to do that) and how to deal with the reentrancy problems that arise.  All of these are covered reasonably well in the documentation.

Personally I think async and await are far from intuitive: the compiler is performing some magic of a kind we don’t usually see in C#.  The result is that we are yielding control in the middle of a method to the calling method until some other task is complete.  Of course we can do similar things with regular task continuations, but the syntax makes regular continuations look slightly less magical.

However, async/await are a powerful way of controlling multithreaded code once you understand what they are doing.  They can make fairly complex threading look simple.

Why Starting a New Task in the Task Parallel Library (TPL) Doesn’t Always Start a New Thread

The Task Parallel Library in .Net is a wonderful resource.  However, one thing that is a little confusing is that it is actually possible to start a task that runs on the same thread as the current code.  That is, we can call Task.Factory.StartNew (or create and start a Task directly) and it still runs on the same thread.  This isn’t the behaviour we expect.

This can happen particularly when we are writing .Net user interface applications where we marshal data on to the UI thread.

Consider the sample code below, which runs in the button click event of a simple WPF application.  This is a not unusual pattern for a user interface application: we start a Task on a new thread and then perform a continuation on the user interface thread using the current synchronization context, which is the user interface thread synchronization context (TaskScheduler.FromCurrentSynchronizationContext()).  We would typically do this so that we could write the results of the Task into the user interface.  There are several articles on the internet that describe this.

private void button1_Click(object sender, RoutedEventArgs e)
{
    Debug.WriteLine("UI thread: " + Thread.CurrentThread.ManagedThreadId);
    Task.Factory.StartNew(() =>
        Debug.WriteLine("Task 1 thread: " + Thread.CurrentThread.ManagedThreadId))
    .ContinueWith(ant =>
    {
        Debug.WriteLine("Continuation on UI thread: " + Thread.CurrentThread.ManagedThreadId);
        Task task2 = new Task(()
            => Debug.WriteLine("Task 2 thread is UI thread: " + Thread.CurrentThread.ManagedThreadId));
        task2.Start();
    }, TaskScheduler.FromCurrentSynchronizationContext());
}

However, in the code above we then start a new Task from the continuation (task 2).  This runs on the user interface thread by default.  This is almost certainly not what we are trying to do.  We probably want to run something on a background thread, which is what usually happens when we start a Task.  As a result it’s slightly hazardous behaviour; we can easily end up blocking the user interface thread because we think the code is running on a background thread.

The output from a couple of button clicks using this code is as below:

UI thread: 8
Task 1 thread: 9
Continuation on UI thread: 8
Task 2 thread is UI thread: 8
UI thread: 8
Task 1 thread: 10
Continuation on UI thread: 8
Task 2 thread is UI thread: 8

The reason for this behaviour is that the user interface thread synchronization context is still in force when we start our second task.

Fortunately, in this case at least, there is a simple solution, even if it isn’t so obvious.  We can tell the new Task explicitly to run using the default task scheduler instead of the scheduler associated with the user interface synchronization context when we start it.  The simplest syntax for this changes the code as below.  Notice it passes TaskScheduler.Default as an argument to task2.Start.  The terminology here for a user interface application is a little confusing.  The ‘current synchronization context’ is accessed using TaskScheduler.FromCurrentSynchronizationContext, which we typically call from the user interface thread; the resulting scheduler queues delegates back to the user interface thread.  The ‘default’ task scheduler (TaskScheduler.Default) queues delegates to the thread pool.

private void button1_Click(object sender, RoutedEventArgs e)
{
    Debug.WriteLine("UI thread: " + Thread.CurrentThread.ManagedThreadId);
    Task.Factory.StartNew(() =>
        Debug.WriteLine("Task 1 thread: " + Thread.CurrentThread.ManagedThreadId))
    .ContinueWith(ant =>
    {
        Debug.WriteLine("Continuation on UI thread: " + Thread.CurrentThread.ManagedThreadId);
        Task task2 = new Task(()
            => Debug.WriteLine("Task 2 thread: " + Thread.CurrentThread.ManagedThreadId));
        task2.Start(TaskScheduler.Default);
    }, TaskScheduler.FromCurrentSynchronizationContext());
}

The output from this is now as we would expect:

UI thread: 9
Task 1 thread: 10
Continuation on UI thread: 9
Task 2 thread: 11
UI thread: 9
Task 1 thread: 11
Continuation on UI thread: 9
Task 2 thread: 10

For more detailed discussion on this see:

Delegate Syntax in C# for Beginners


I have been programming with C# since it came out but I still find the delegate syntax confusing.  This is at least partially because Microsoft have changed the recommended syntax regularly over the years.  This article is a quick recap of the various syntaxes.  It also looks at some of the issues with using them in practice.  It’s worth knowing about all the various syntaxes as you will almost certainly see all of them used.

This article is just a recap: it assumes that you know what a delegate is and why you’d want to use one.

.Net and Visual Studio Versions

The first thing to note is that you can use any of these syntaxes as long as you are using Visual Studio 2008 or later and targeting .Net 2.0 or later.

Named methods were available in .Net 1.0, anonymous methods were introduced in .Net 2.0, and lambda expressions were introduced with C# 3.0 (.Net 3.5).  However, since .Net 3.5 is based on the .Net 2.0 runtime, lambda expressions will compile to target .Net 2.0 assuming you have the appropriate version of Visual Studio.

Note also that lambda expressions can do (almost) everything anonymous methods can do, and effectively supersede them as the preferred way of writing inline delegate code.


A listing of the code for this article is available.  The complete working program is also available.

The Delegate

For all of these examples we need a delegate definition.  We’ll use the one below initially.

        private delegate void TestDel(string s);

Named Methods

Named methods are perhaps the easiest delegate syntax to understand intuitively.  A delegate is a typesafe method pointer.  So we define a method:

        private void Test(string s)
        {
            Console.WriteLine(s);
        }

Now we create an instance of our method pointer (the delegate above) and point it at our method.  Then we can call our method by invoking the delegate.  The code below prints out ‘Hello World 1’.  This is easy enough, but all a little cumbersome.

            TestDel td = new TestDel(Test);
            td("Hello World 1");

There’s one slight simplification we can use.  Instead of having to explicitly instantiate our delegate with the new keyword we can simply point the delegate directly at the method, as shown below.  This syntax does exactly the same thing as the syntax above, only (maybe) it’s slightly clearer.

            TestDel td2 = Test;
            td2("Hello World 2");

There is an MSDN page on named methods.

Anonymous Methods

The anonymous method syntax was introduced to avoid the need to create a separate method.  We just create the method in the same place we create the delegate.  We use the ‘delegate’ keyword as below.

            TestDel td3 =
                delegate(string s)
                {
                    Console.WriteLine(s);
                };
            td3("Hello World 3");

Now when we invoke td3 (in the last line) the code between the curly braces executes.

One advantage of this syntax is that we can capture a local variable in the calling method without explicitly passing it into our new method.  We can form a closure.  Since in this example we don’t need to pass our string in as a parameter we use a different delegate:

        private delegate void TestDelNoParams();

We can use this as below.  Note that the message variable is not explicitly passed into our new method, but can nevertheless be used.

            string message = "Hello World 4";
            TestDelNoParams td4 =
                delegate()
                {
                    Console.WriteLine(message);
                };
            td4();

There is an MSDN page on anonymous methods.

Lambda Expressions

Lambda expressions were primarily introduced to support Linq, but they can be used with delegates in a very similar way to anonymous methods.

There are two basic sorts of lambda expressions.  The first type is an expression lambda.  This can only have one statement (an expression) in its method.  The syntax is below.

            TestDel td5 = s => Console.WriteLine(s);
            td5("Hello World 5");

The second type is a statement lambda: this can have multiple statements in its method as below.

            string message2 = "Hello World 8";
            TestDel td6 =
                s =>
                {
                    Console.WriteLine(s);
                    Console.WriteLine("Hello World 7");
                    Console.WriteLine(message2);
                };
            td6("Hello World 6");

Note that this example also shows a local variable being captured (a closure being created).  We can also capture variables with expression lambdas.
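For example, a sketch reusing the TestDel delegate from above (the suffix variable is just for illustration):

```csharp
// An expression lambda capturing a local variable (forming a closure).
string suffix = " (captured)";
TestDel td7 = s => Console.WriteLine(s + suffix);
td7("Hello World 9");  // Prints 'Hello World 9 (captured)'
```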

There is an MSDN page on lambda expressions.

Return Values

Nearly all of the examples above can be extended in a simple way to return a value.  For an expression lambda no explicit ‘return’ statement is needed: the value of its single expression becomes the return value.  Otherwise the change is usually obvious: we change our delegate signature so that the method it points to returns a value, and then we simply change the method definition to return a value as usual.  For example the statement lambda example above becomes as below.  The invocation of tdr6 now returns “Hello ” + message2, which we write to the console after the invocation returns:

        private delegate string TestDelReturn(string s);

            string message2 = "World 8";
            TestDelReturn tdr6 =
                s =>
                {
                    Console.WriteLine("Hello World 7");
                    return "Hello " + message2;
                };
            Console.WriteLine(tdr6("Hello World 6"));

The full list of all the examples above modified to return a value can be seen in the code listing in the method ExamplesWithReturnValues.


Events

All of these syntaxes can be used to set up a method to be called when an event fires.  To add a delegate instance to an event we use the usual ‘+=’ syntax.  Suppose we define an event of type TestDel:

        private event TestDel TestDelEventHandler;

We can add a delegate instance to this event using any of the syntaxes in an obvious way.  For example, to use a statement lambda the syntax is below.  This looks a little odd, but certainly makes it easier to set up and understand event handling code.

            TestDelEventHandler += s => { Console.WriteLine(s); };
            TestDelEventHandler("Hello World 24");

Examples of setting up events using any of the syntaxes above can be found in the code listing.

Passing Delegates into Methods as Parameters: Basic Case

Similarly all of the syntaxes can be used to pass a delegate into a method, which again gives some odd-looking syntax.  Suppose we have a method as below that takes a delegate as a parameter.

        private void CallTestDel(TestDel testDel)
        {
            testDel("Hello World 30");
        }

Then all of the syntaxes below are valid:

            CallTestDel(new TestDel(Test));  // Named method
            CallTestDel(Test);               // Simplified named method
            CallTestDel(delegate(string s) { Console.WriteLine(s); });  // Anonymous method
            CallTestDel(s => Console.WriteLine(s));  // Expression lambda
            CallTestDel(s => { Console.WriteLine(s); Console.WriteLine("Hello World 32"); });  // Statement lambda

Passing Delegates into Methods as Parameters: When You Actually Need a Type of ‘Delegate’

Now suppose we have a method as below that expects a parameter of type Delegate.

        private void CallDelegate(Delegate del)
        {
            del.DynamicInvoke(new object[] { "Hello World 31" });
        }

The Delegate class is the base class for all delegates, so we can pass any delegate into CallDelegate.  However, because the base Delegate class doesn’t know the method signature of the delegate we can’t call Invoke with the correct parameters on the Delegate instance.  Instead we call DynamicInvoke with an object[] array of parameters as shown.

Note that there are some methods that take Delegate as a parameter in the framework (e.g. BeginInvoke on a WPF Dispatcher object).

There’s a slightly non-obvious change to the ‘Basic Case’ syntax above if we want to call this method using the anonymous method or lambda expression syntax.  The code below for calling CallDelegate with an expression lambda does NOT compile.

            CallDelegate(s => Console.WriteLine(s));  // Expression lambda

The reason is that the compiler needs to create a delegate of an appropriate type, cast it to the base Delegate type, and pass it into the method.  However, it has no idea what type of delegate to create.

To fix this we need to tell the compiler what type of delegate to create (TestDel in this example).  We can do this with the usual casting syntax (and a few more parentheses) as shown below.

            CallDelegate((TestDel)(s => Console.WriteLine(s)));  // Expression lambda

This looks a little strange as we don’t normally need a cast when assigning a derived type to a base type, and in any case we’re apparently casting to a different type to the type the method call needs.  However, this syntax is simply to tell the compiler what type of delegate to create in the first place: the cast to the base type is still implicit.

We need to do this for any of the syntaxes apart from the very basic named method syntax (where we’re explicitly creating the correct delegate):

            CallDelegate(new TestDel(Test));  // Named method
            CallDelegate((TestDel)Test);      // Simplified named method
            CallDelegate((TestDel)delegate(string s) { Console.WriteLine(s); });  // Anonymous method
            CallDelegate((TestDel)(s => Console.WriteLine(s)));  // Expression lambda
            CallDelegate((TestDel)(s => { Console.WriteLine(s); Console.WriteLine("Hello World 32"); }));  // Statement lambda


Action and Func

There is one further simplification that we can use in the examples in this article.  Instead of defining our own delegates (TestDel etc.) we can use the generic Action and Func delegates provided in the framework.  So, for example, everywhere we use TestDel, which takes a string and returns void, we could use Action<string> instead, since it has the same signature.
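For example, the snippet below behaves identically whether we declare the delegates with our own types or with the framework types (the literal strings are just for illustration):

```csharp
using System;

class ActionFuncExamples
{
    static void Main()
    {
        // Action<string> takes a string and returns void, like TestDel
        Action<string> testDel = s => Console.WriteLine(s);
        testDel("Hello World 40");

        // Func<string, string> takes a string and returns a string,
        // like the TestDelReturn delegate used earlier
        Func<string, string> testDelReturn = s => "Hello " + s;
        Console.WriteLine(testDelReturn("World 41"));
    }
}
```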

Blurry Text with Small Fonts in WPF


The difficulties with text rendering in WPF have been well documented elsewhere.  However, every time I get a blurry button I can’t remember how to fix it.  This brief post shows the usual solutions.


With small font sizes the default WPF text rendering can lead to text looking very blurry.  This is particularly a problem on controls (e.g. text on buttons), where we often use small fonts.

In .Net 4 Microsoft finally gave us a solution to this problem, which is to set TextOptions.TextFormattingMode = “Display” (instead of the default “Ideal”).  This snaps everything to the pixel grid.  However, this mode doesn’t look as good at large text sizes, so there’s no single setting that works well everywhere.
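The TextOptions properties are attached properties, so they can be set directly in XAML.  A minimal sketch (the button and values here are just for illustration):

```xml
<!-- Snap small text to the pixel grid; inherited by child elements -->
<Button Content="OK"
        FontSize="11"
        TextOptions.TextFormattingMode="Display" />
```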

Other TextOptions

The other TextOptions properties are:

1.  TextHintingMode (Animated/Auto/Fixed)

This just tells the renderer to use smoother but less clear rendering for animated text.  It won’t fix the blurry text problem for static text (which should have Auto or Fixed set).

2.  TextRenderingMode (Aliased/Auto/ClearType/GrayScale).

Tells the renderer how to draw.  This DOES affect blurriness.  In particular using Aliased rather than the default Auto can be good at small sizes.

Effect of These Options

Below, with font size 12, the first line is the default (Ideal/Auto).  This is quite blurry.  The second line is Display/Auto (less blurry) and the third line is Display/Aliased (not at all blurry, but a bit jagged).  The second two lines are the options usually used to fix the problem.

This is the same but at font size 24, which highlights the problem since now Ideal/Auto (the first line) probably is ideal.

Review of a Trading System Project


In late 2007 I wrote a series of articles on Microsoft’s Composite Application Block (CAB).  At that time I was running a team that was developing a user interface framework that used the CAB.

We’re now four years on and that framework is widely used throughout our department.  There are currently modules from eleven different development teams in production.  There are modules that do trade booking, trade management, risk management, including real-time risk management, curve marking, other market data management, and so on.  All of those were written by different teams, yet it appears to the user that this is one application.

This article will look back at the goals, design decisions, and implementation history of the project.  It will look at what we did right, what we did wrong, and some of the limitations of the CAB itself (which apply equally to its successor, Prism).

The framework is a success.  However, it’s only a qualified success.  Ironically, as we shall see, it is only a qualified success because it has been so successful.  To put that less cryptically: many of the problems with the framework have only arisen because it’s been so widely adopted.

Hopefully this article will be of interest.  It isn’t the kind of thing I usually write about and will of course be a personal view:  I’m not going to pretend I’m totally unbiased.

Design Goals

Original Overall Goals

The project had two very simple goals originally:

  1. A single client application that a user (in this case, a trader) would use for everything they need to do.
  2. Multiple development teams able to easily contribute to this application, working independently of each other.

I suspect these are the aims of most CAB or Prism projects.

Do You Actually Need a Single Client Application?

An obvious question arising from these goals is why you would need an application of this kind.

Historically there have tended to be two approaches to building big and complex trading applications:

  1. The IT department will create one huge monolithic application.  One large development team will build it all.
  2. The IT department breaks the problem up and assigns smaller development teams to develop separate applications to do each part.  This is a much more common approach than the first.

Both of these approaches work, and both mean you don’t need a client application of the kind we are discussing.  However, neither of these approaches works very well:

  • Monolithic applications quickly become difficult to maintain and difficult to release without major regression testing.
  • Equally users don’t like having to log into many different applications.  This is particularly true if those applications are built by the same department but all behave in different ways.  It can also be difficult to make separate applications communicate with each other, or share data, in a sensible way.

So there definitely is a case for trying to create something that fulfils our original design goals above and avoids these problems.   Having said that it’s clearly more important to actually deliver the underlying functionality.  If it’s in several separate applications that matters less than failing to deliver it altogether.

More Detailed Goals

For our project we also had some more detailed goals:

  • Ease of use for the developer.  I have personally been compelled to use some very unpleasant user interface frameworks and was keen that this should not be another one of those.
  • A standardized look and feel.  The user should feel this was one application, not several applications glued together in one window.
  • Standard re-usable components, in particular a standard grid and standard user controls.  The user controls should include such things as typeahead counterparty lookups, book lookups, and security lookups based on the organization’s standard repositories for this data.  That is, they should include business functionality.
  • Simple security (authentication and authorization) based on corporate standards.
  • Simple configuration, including saving user settings and layouts.
  • Simple deployment.  This should include individual development teams being able to deploy independently of other teams.

As I’ll discuss, it was some of the things that we left off that list that came back to haunt us later on.

Goals re Serverside Communication

A further goal was use of our strategic architecture serverside, in particular for trade management.  For example, we wanted components that would construct and send messages to our servers in a standard way.  I won’t discuss the success or failure of this goal in detail here as it’s a long and chequered story, and not strictly relevant to the CAB and the user interface framework.

Technical Design

Technical Design: Technologies

The technologies we used to build this application were:

  • Microsoft C# and Windows Forms
  • Microsoft’s Patterns and Practices Group’s Composite Application Block (the CAB)
  • DevExpress’ component suite
  • Tibco EMS and Gemstone’s Gemfire for serverside communication and caching

As I’ve already discussed, this document is going to focus purely on the clientside development.

In 2007 these were logical choices for a project of this kind.  I’ll discuss some of the more detailed design decisions in the sections below.

Things We Did (Fairly) Well

As I said this is a personal view: I’m not sure all our developers would agree that all of this was done well.

Ease of Use

Designing for ease of use is, of course, quite difficult.  We have done a number of things to make the project easy to use, some of which I’ll expand on below.  These include:

  • Developers write vanilla user controls.  There’s no need to implement special interfaces, inherit from base classes or use any complex design pattern.
  • Almost all core functionality is accessed through simple services that the developer just gets hold of and calls.  So for example to show your user control you get an instance of the menu service and call a show method.  We used singleton service locators so the services could be accessed without resorting to CAB dependency injection.
  • Good documentation freely available on a wiki
  • A standard onboarding process for new teams, including setting up a template module.  This module has a ‘hello world’ screen that shows the use of the menus and other basic functionality.
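As an illustration of the second point, a singleton service locator in the style described might look like the sketch below.  All names here are hypothetical, and the real implementation wrapped the CAB’s service collection rather than a plain dictionary:

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of a singleton service locator (illustrative only; the
// real framework delegated to the CAB's services and was thread-safe)
public static class ServiceLocator
{
    private static readonly Dictionary<Type, object> services =
        new Dictionary<Type, object>();

    // The core framework registers each service once at startup
    public static void Register<T>(T service)
    {
        services[typeof(T)] = service;
    }

    // Module code retrieves services with no knowledge of the CAB
    public static T Get<T>()
    {
        return (T)services[typeof(T)];
    }
}
```

Module code then just calls, say, `ServiceLocator.Get<IMenuService>()` and invokes a show method on the result.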

Developers Not Forced to Learn the Composite Application Block (CAB)

As mentioned above, one of the key goals of the project was simplicity of use.  The CAB is far from simple to use: I wrote a 25 part introductory blog article on it and still hadn’t covered it all.

As a result we took the decision early on that developers would not be compelled to use the CAB actually within their modules.  We were keen that developers would not have to learn the intricacies of the CAB, and in particular would not have to use the CAB’s rather clunky dependency injection in their code.

However, obviously we were using the CAB in our core framework.  This made it difficult to isolate our developers from the CAB completely:

  • As mentioned above we exposed functionality to the developers through CAB services.  However we gave them a simple service locator so they didn’t have to know anything about the CAB to use these services.
  • We also used some CAB events that developers would need to sink.  However since this involves decorating a public method with an attribute we didn’t think this was too difficult.

As already mentioned, to facilitate this we wrote a ‘template’ module, and documentation on how to use it.  This was a very simple dummy module that showed how to do all the basics.  In particular it showed what code to write at startup (a couple of standard methods), how to get hold of a service, and how to set up a menu item and associated event.


Assembly Loading

We realized after a few iterations of the system that we needed a reasonably sophisticated approach to versioning and loading of components.  As a result we wrote an assembly loader.  This:

  • Allows each module to keep its own assemblies in its own folder
  • Allows different modules to use different versions of the same assembly
  • Also allows different modules to explicitly share the same version of an assembly

Our default behaviour is that when loading an assembly that’s not in the root folder, the system checks all module folders for an assembly of that name and loads the latest version found.  This means teams can release interface assemblies without worrying about old versions in other folders.
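The default behaviour just described could be sketched with an AssemblyResolve handler along these lines.  The folder layout and names here are assumptions, and the real loader was considerably more sophisticated:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Reflection;

// Illustrative sketch only: resolve assemblies not found in the root folder
// by checking every module folder and loading the latest version found
static class ModuleAssemblyLoader
{
    public static void Install(string modulesRoot)
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            string fileName = new AssemblyName(args.Name).Name + ".dll";

            // Scan all module folders for a file with this name and
            // pick the highest assembly version
            string candidate = Directory
                .GetDirectories(modulesRoot)
                .SelectMany(dir => Directory.GetFiles(dir, fileName))
                .OrderByDescending(path => AssemblyName.GetAssemblyName(path).Version)
                .FirstOrDefault();

            return candidate == null ? null : Assembly.LoadFrom(candidate);
        };
    }
}
```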

Versioning of Core Components

For core components clearly there’s some code that has to be used by everyone (e.g. the shell form itself, and menus).  This has to be backwards compatible at each release because we don’t want everyone to have to release simultaneously.  We achieve this through the standard CAB pattern of interface assemblies: module teams only access core code through interfaces that can be extended, but not changed.

However, as mentioned above, the core team also writes control assemblies that aren’t backwards compatible: teams include them in their own module, and can upgrade whenever they want without affecting anyone else.

User Interface Design

For the user interface design, after a couple of iterations we settled on simple docking in the style of Visual Studio.  For this we used Weifen Luo’s excellent docking manager, and wrote a wrapper for it that turned it into a CAB workspace.  For menuing we used the ribbon bars in the DevExpress suite.

The use of docking again keeps things simple for our developers.  We have a menu service with a method to be called that just displays a vanilla user control in a docked (or floating) window.


Deployment

In large organizations it’s not uncommon for the standard client deployment mechanisms to involve complex processes and technology.  Our organization has this problem.  Early on in this project it was mandated that we would use the standard deployment mechanisms.

We tried hard to wrap our corporate process in a way that made deployment as simple as possible.  To some extent we have succeeded, although we are (inevitably) very far from a simple process.


Configuration

For configuration we (eventually) used another team’s code that wrapped our centralized configuration system to allow our developers to store configuration data.  This gives us hierarchies of data in a centralized database.  It means you can easily change a setting for all users, groups of users, or an individual user, and can do this without the need for a code release.

Module Interaction

Clientside component interaction is achieved by using the standard CAB mechanisms.  If one team wants to call another team’s code they simply have to get hold of a service in the same way as they do for the core code, and make a method call on an interface.  This works well, and is one advantage of using the CAB.  Of course the service interface has to be versioned and backwards compatible, but this isn’t difficult.


Security

For security we again wrapped our organization’s standard authentication and authorization systems so they could easily be used in our CAB application.  We extended the standard .Net Principal and Identity objects to allow authorization information to be directly accessed, and also allowed this information to be accessed via a security service.
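A custom principal of the kind described might look something like the sketch below.  TradingPrincipal and its members are hypothetical names, not the real framework’s API:

```csharp
using System;
using System.Security.Principal;

// Illustrative sketch: a principal that exposes authorization
// information directly, in the style described above
public class TradingPrincipal : IPrincipal
{
    private readonly IIdentity identity;
    private readonly string[] permissions;

    public TradingPrincipal(IIdentity identity, string[] permissions)
    {
        this.identity = identity;
        this.permissions = permissions;
    }

    public IIdentity Identity
    {
        get { return identity; }
    }

    // IsInRole is reused to answer permission checks
    public bool IsInRole(string permission)
    {
        return Array.IndexOf(permissions, permission) >= 0;
    }
}
```

Setting Thread.CurrentPrincipal to an instance of this at login makes the permissions available anywhere in the client.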

One thing that we didn’t do so well here was the control of authorization permissions.  These have proliferated, and different teams have handled different aspects of this in different ways.  This was in spite of us setting up what we thought was a simple standard way of dealing with the issue.  The result of this is that it’s hard to understand the permissioning just by looking at our permissioning system.

Things We Didn’t Do So Well

As mentioned above, the things that didn’t go so well were largely the things we didn’t focus on in our original list of goals.

Most of these issues are about resource usage on the client.  This list is far from comprehensive: we do have other problems with what we’ve done, of course, but the issues highlighted here are the ones causing the most problems at the time of writing.

The problems included:


Threading

The Problem

We decided early on to allow each team to do threading in the way they thought was appropriate, and didn’t provide much guidance on threading.  This was a mistake, for a couple of reasons.

Threading and Exception Handling

The first problem we had with threading was the simple one of background threads throwing exceptions with no exception handler in place.  As I’m sure you know, this is pretty much guaranteed to crash the entire application messily (which in this case means bringing down 11 teams’ code).  Of course it’s easy to fix if you follow some simple guidelines whenever you spawn a background thread.  We have an exception handler that can be hooked up with one line of code and that can deal with appropriate logging and thread marshalling.  We put how to do this, and dire warnings about the consequences of not doing so, in our documentation, but to no avail.  In the end we had highly-paid core developers going through other teams’ code looking for anywhere they spawned a thread and then complaining to their managers if they hadn’t put handlers in.
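The sort of one-line hook-up described could be sketched as below.  SafeThread and its members are hypothetical names, not the real framework’s API:

```csharp
using System;
using System.Threading;

// Sketch of a wrapper that gives every background thread a top-level
// exception handler, so an unhandled exception can't crash the process
static class SafeThread
{
    public static Thread Start(Action work, Action<Exception> onError)
    {
        Thread thread = new Thread(() =>
        {
            try
            {
                work();
            }
            catch (Exception ex)
            {
                // Log and marshal as appropriate instead of bringing
                // down the whole composite application
                onError(ex);
            }
        });
        thread.IsBackground = true;
        thread.Start();
        return thread;
    }
}
```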

Complex Threading Models

Several of our teams were used to writing serverside code with complex threading models.  They replicated these clientside, even though most of our traders don’t have anything better than a dual core machine, so any complex threading model in a workstation client is likely to be counterproductive.

Some of these models tend to throw occasional threading exceptions that are unreproducible and close to undebuggable.

What We Should Have Done

In retrospect we should have:

  • Provided some clear guidance for the use of threading in the client.
  • Written some simple threading wrappers and insisted the teams use them, horrible though that is.
  • Insisted that ANY use of threading be checked by the core team (i.e. a developer that knew about user interface threading).  The wrappers would have made it easy for us to check where threads were being spawned incorrectly (and without handlers).

Start Up

The Basic Problem

We have a problem with the startup of the system as well: it’s very slow.

Our standard startup code (in our template module) is very close to the standard SCSF code.  This allows teams to set up services and menu items when the entire application starts and the module is loaded.

This means the module teams have a hook that lets them run code at startup.  The intention here is that you instantiate a class or two, and it should take almost no time.  We didn’t think that teams would start using it to load their data, or start heartbeats, or worse, to fire off a bunch of background threads to load their data.  However, we have all of this in the system.

Of course the reason for this is that this code should really run when a user first clicks a menu item to load the team’s screen.  For heartbeats, it’s a little awkward to start and stop them cleanly as a screen opens and closes: it’s much easier to just start them when the application starts.  For data loading, if the load is slow that becomes very obvious when it happens at the moment a user requests a screen.
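The pattern we wanted teams to follow can be sketched with Lazy<T> (available from .Net 4; the names here are purely illustrative):

```csharp
using System;

// Illustrative sketch: defer a module's expensive data load until the
// user first opens its screen, rather than running it at startup
public class TradeBlotterModule
{
    // Nothing runs at application startup: the load executes only
    // the first time Value is read
    private readonly Lazy<string[]> tradeData =
        new Lazy<string[]>(() => LoadTradesFromServer());

    // Called when the user clicks the menu item for this screen
    public void ShowScreen()
    {
        string[] trades = tradeData.Value;
        Console.WriteLine("Loaded " + trades.Length + " trades");
    }

    private static string[] LoadTradesFromServer()
    {
        // The slow server call would go here
        return new string[0];
    }
}
```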

However, the impact of this happening across 11 development teams’ code is that the system is incredibly slow to start, and very fragile at startup.  It will often spend a couple of minutes showing the splash screen and then keel over with an incomprehensible error message (or none).  As a result most traders keep the system open all the time (including overnight).  An obvious consequence is that they are very reluctant to restart, even if they have a problem that we know a restart will fix.  Also, all machines are rebooted at the weekend in our organization, so they have to sit through the application startup on a Monday morning in any case.

One further problem is that no individual team has any incentive to improve their startup speed: it’s just a big pool of slowness and you can’t tell if module X is much slower than module Y as a user.  If any one team moves to proper service creation at startup it won’t have a huge overall effect.  We have 11 teams and probably no one team contributes more than a couple of minutes to the overall startup.  It’s the cumulative effect that’s the problem.

What We Should Have Done

This is one area where we should just have policed what was going on better, and been very firm about what is and is not allowed to be run at startup.  At one stage I proposed fixing the problem by banning ANY module team’s code from running at startup, and I think if I were to build an application of this kind again then that’s what I’d do.  However, clearly a module has to be able to set up its menu items at startup (or the user won’t be able to run anything).  So we’d have to develop a way of doing this via config for this to work, which would be ugly.

One other thing that would really help would be the ability to restart an individual module without restarting the entire system.

Memory Usage

The Problem

We effectively have 11 applications running in the same process.  So with memory usage we have similar problems to the startup problems: every team uses as much memory as they think they need, but when you add it all up we can end up with instances of the system using well over 1GB of memory.  On a heavily-loaded trader’s machine this is a disaster: we’ve even had to get another machine for some traders just to run our application.

To be honest, this would be a problem for any complex trading environment.  If we had 11 separate applications doing the same things as ours the problem would probably be worse.

However, as above there’s no incentive for any individual team to address the problem: it’s just a big pool that everyone uses and no-one can see that module X is using 600MB.

What We Should Have Done

Again here better policing would have helped: we should have carefully checked every module’s memory requirements and told teams caching large amounts of data not to.  However, in the end this is a problem that is very hard to avoid: I don’t think many teams are caching huge amounts of data, it’s just there’s a lot of functionality in the client.

One thing that will help here is the move to 64-bit, which is finally happening in our organization.  All our traders have a ceiling of 4GB of memory at present (of which, as you know, over 1GB is used by Windows), so a 1GB application is a real problem.

Use of Other Dependency Injection Frameworks (Spring.Net)

The Problem

One unexpected effect of the decision not to compel teams to use the CAB was that a number of teams decided to use Spring.Net for dependency injection within their modules, rather than using the CAB dependency injection.  I have some sympathy with this decision, and we didn’t stop them.  However, Spring.Net isn’t well-designed for use in a framework of this kind and it did cause a number of problems.

  • The biggest of these is that Spring uses a number of process-wide singletons.  We had difficulties getting them to play nicely with our assembly loading.  This has resulted in everyone currently having to use the same (old) version of Spring.Net, and upgrading being a major exercise.
  • Handling application context across several modules written by different teams proved challenging.
  • If you use XML configuration in Spring.Net (which everyone does) then types in other assemblies are usually referenced using the simple assembly name only.  This invalidated some of our more ambitious assembly loading strategies.
  • The incomprehensibility associated with Spring.Net’s exception messages on initial configuration is made worse when you have multiple modules at startup.

We also had some similar problems re singletons and versioning with the clientside components of our caching technology.  Some code isn’t really compatible with single-process composite applications.

What We Should Have Done

Again we should have policed this better: many of the problems described above are solvable, or could at least have been mitigated by laying down some guidelines early on.

What I’d Change If I Did This Again

The ‘what we should have done’ sections above indicate some of the things I’d change if I am ever responsible for building another framework of this kind.  However, there are two more fundamental (and very different) areas that I would change:

Code Reviews

In the ‘what we should have done’ sections above I’ve frequently mentioned that we should have monitored what was happening in the application more carefully.  The reasons we didn’t were partially due to resourcing, but also to some extent philosophical.  Most of our development teams are of high quality, so we didn’t feel we needed to be carefully monitoring them and telling them what to do.

As you can see from the problems we’ve had, this was a mistake.  We should have identified the issues above early, and then reviewed all code going into production to ensure that there weren’t threading, startup, memory or any other issues.

Multiple Processes

The second thing I’d change is technical.  I now think it’s essential in a project of this kind to have some way of running clientside code in separate processes.  As we’ve seen many of the problems we’ve had have arisen because everything is running in the same process:

  • Exceptions can bring the process down, or poorly-written code can hang it
  • It’s hard to identify how much each module is contributing to memory usage or startup time
  • There’s no way of shutting down and unloading a misbehaving module

I think I’d ideally design a framework that had multiple message loops and gave each team its own process in which they could display their own user interface.  This is tricky, but not impossible to do well.

Note that I’d still write the application as a framework.  I’d make sure the separate processes could communicate with each other easily, and that data could be cached and shared between the processes.

As an aside, a couple of alternatives to this are being explored in our organization at present.  The first is to simply break up the application into multiple simpler applications.  The problem with this is that it doesn’t really solve the memory usage or startup time problems, and in fact arguably makes them worse.  The second is to write a framework that has multiple processes but keeps the user interface for all development teams in the same process.  This is obviously easier to do technically than my suggestion above.  However for many of our modules it would require quite a bit of refactoring: we need to split out the user interface code cleanly and run it in a separate process from the rest of the module code.

Closures in C#


There seems to be some confusion about closures in C#: people are mystified as to what they are and there’s even an implication that they don’t work the way you’d expect.

As this short article will explain they are actually quite simple, and do work the way you’d expect if you’re an object oriented programmer.  The article will also briefly look at why there is confusion surrounding them, and discuss whether they are an appropriate tool in an object oriented program.


What is a Closure?

Wikipedia defines a closure as ‘a first-class function with free variables that can be bound in the lexical environment’.  What that means is that a ‘closure’ is a function that can access variables from the environment where it is declared without them being explicitly passed in as parameters.  In object-oriented programming a method in a class is a closure of sorts: it can access fields of the class directly without needing them to be passed in.  However, more usually in C# ‘closure’ refers to a function declared as an anonymous delegate that uses variables that are not explicitly passed into it, but are available at the point it is created.

Basic Example

Consider the code below:

        internal void MyFunction()
        {
             int x = 1;
             Action action = () => { x++; Console.WriteLine(x); };
             action();              // Outputs '2'
        }

Here we define an anonymous delegate (a function) ‘action’, and call it.  This increments x and outputs it to the console, even though x isn’t passed into it.  x is simply declared in the ‘lexical environment’ (the calling method).

‘action’ is a closure, and we can say it is ‘closed over’ x, and that x is an ‘upvalue’.  Note that it’s only a closure because of the way it uses x.  Not all anonymous delegates are closures.

By the way, in the Java community anonymous functions are frequently referred to as ‘closures’.  (The Java community has been debating whether ‘closures’ should be added to the language for some time.)

Where’s the Confusion?

The example above is pretty clear and simple.  So how has it caused confusion?

The answer is that the value of x in MyFunction after the ‘action’ call is now 2.  Furthermore, x is completely shared between MyFunction and the action delegate: any code that changes the value in one changes the value in the other.

Consider the code below:

        internal void MyFunction()
        {
             int x = 1;
             Action action = () => { x++; Console.WriteLine(x); };
             action();              // Outputs '2'
             x++;                   // Increment the shared variable in the calling method
             action();              // Outputs '4'
        }

Here we call ‘action’, increment x in the calling method (MyFunction), and then call ‘action’ again.  Overall we started with x at 1, and incremented it 3 times, twice in our ‘action’ delegate and once in the calling method.  So it’s no surprise that the shared variable ends up with a value of 4.

This shows that we can change our ‘upvalue’ in the calling method and it is then reflected in our next call to the function: x is genuinely shared in this example.

Whilst this is a little odd (see below) it’s perfectly logical: x is shared between the calling method and any calls to our ‘action’ function.  There’s only one version of x.

This isn’t the way closures work in most functional programming languages (not that you could easily implement the example above in one, since variables are typically immutable in functional languages).  The concept of closures has come from functional languages, so many people are surprised to find them working this way in C#.  There is more on this later.

Closures and Scope

This becomes even odder if we allow the local variable x to go out of scope (which would usually lead to it being destroyed), but retain a reference to the delegate that uses it:

        internal void Run()
        {
             Action action = MyFunction();
             action();             // Outputs '5'
        }

        internal Action MyFunction()
        {
             int x = 1;
             Action action = () => { x++; Console.WriteLine(x); };
             action();              // Outputs '2'
             x++;                   // Increment the shared variable directly
             action();              // Outputs '4'
             return action;
        }

Here our Run method retrieves the action delegate from MyFunction and calls it.  When it calls it the value of x is 4 (from the activity in MyFunction), so it increments that and outputs 5.  At this point MyFunction is out of scope so the local variable x would normally have been destroyed.

Again, logically this is what we’d expect, but it looks strange.

This also gives an indication that closures are hard to implement.

A Little Odd?

For an object-oriented programmer this is a little odd.  This is because we’re not used to being able to share local variables with a separate method in this way.

Normally if we want a method to act on a local variable we have to pass it in as a parameter.  If the local variable is a value type as above it gets copied, and changing it in the method will not affect its value in the calling method.  So if we wanted to use it in the calling method we’d have to pass it back explicitly as a return value or an output parameter.

Of course, there are good reasons why we don’t usually allow a method to access any variable in a calling method (quite apart from the practicalities of actually being able to do it with methods other than anonymous functions).  These are to do with encapsulation and ensuring we can maintain state in a way that’s easy to deal with.  We only really allow data to be shared at a class level, or globally if we’re using static variables, although in general we try to keep them to a minimum.

So in some ways closures of this kind break our usual object-oriented encapsulation.  My feeling is that they should be used sparingly in regular object-oriented code as a result.

Other writers have gone further than this, because if you don’t understand that an upvalue is fully shared between the anonymous function and the calling code you can get unexpected behaviour.  See, for example, this article ‘Closures in C# Can Be Evil’.


Conclusion

The concepts behind closures in C# are actually fairly straightforward.  However, if we use them it’s important we understand them and the effects on scope, or we may get behaviour we don’t expect.