Rich Newman

November 7, 2012

CPDOs for Beginners

Introduction

CPDOs (constant proportion debt obligations) are in the news currently.  In spite of a judge describing a CPDO issued by ABN Amro as ‘grotesquely complicated’, the basic concept behind the instrument is pretty straightforward.  This article describes the strategy behind a CPDO at a high level.

Overview of a CPDO

A CPDO is a financial instrument issued by a bank that a sophisticated investor can invest in.  To the investor CPDOs behave like bonds that pay a higher rate of interest than similar instruments issued by the bank.  So the investor gives the bank some money for a certain period, usually several years.  The bank pays a high rate of interest throughout the period and, in theory, gives the money back at the end.

To achieve this higher rate of interest the bank effectively speculates with the money they are given.  They speculate in quite a distinctive way, however.

The Basic Strategy – Sell CDS Protection to Generate Income

The first thing the bank does is to invest the money the investor has given them in something that will pay them the normal rate of interest.  If that was all they did obviously they would not be able to pay the high rate of interest to the investor.

So the bank needs a way of generating extra money.  To do this they use CDS.  I have written an earlier article on the exact mechanics of CDS, but you don’t need to know the details to understand CPDOs.  What you do need to know is that CDS are like insurance contracts: if you sell protection on a CDS you receive periodic payments in return for a small chance that you will have to pay out a much larger sum.  You pay out the larger sum if a specific company gets into financial difficulty: this is known as a ‘credit event’ or ‘default’.

Note that the banks actually use CDS indexes in CPDOs, which are CDS on a basket of companies rather than a single company.  However, the idea is the same.

The basic strategy is that at the start of the period the bank sells enough CDS protection to comfortably generate the money needed to pay the high rate of interest to the investor for the period of the investment, assuming there is no need to pay anything out because of defaults.  The bank works out how much protection to sell using a set of rules that are defined in the CPDO documentation.

Periodic Rebalancing

Once they have done that they leave everything alone for a while.  After a set period of time they look at how the CPDO is doing.  At this point they may change the amount of CDS protection they are selling.  This is known as ‘rebalancing’.

The CPDO may be doing well: none of the CDS may have had credit events, for example.  In this case the bank might reduce the amount of CDS protection they are selling.

Conversely, some of the CDS in the CPDO may have suffered defaults.  It may be that the amount of CDS protection sold will no longer generate enough money to repay the investor.  In this case the bank might increase the amount of CDS protection they are selling.

The bank will do this rebalancing periodically throughout the life of the CPDO.  It is usually done every six months to coincide with the dates that the CDS indexes are updated.

Rules

Note that all of this is done according to a set of rules that are defined in advance: it’s not a judgment call.  The rules can look fairly complex.  However, all they really do is describe a way of calculating ‘leverage’, which is the value of the money to be received from the outstanding risky CDS contracts versus the amount of cash the structure still needs to generate, possibly multiplied by a fixed factor.  The bank will try to keep the leverage constant at every rebalancing: if the amount of cash to be generated has increased (because there have been defaults) then the value of the money from the CDS contracts needs to be increased, so more contracts are entered into.
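
Below is a minimal C# sketch of this kind of rebalancing rule.  Everything in it (the names, the numbers, the exact form of the rule) is invented for illustration; real CPDO documentation specifies the calculation in far more detail.

    using System;

    class RebalanceSketch
    {
        // Notional of CDS protection to sell so that the expected premium
        // income covers the cash still to be generated, scaled by a fixed
        // factor defined in the (hypothetical) CPDO rules.
        static double RequiredNotional(double shortfall, double spread,
                                       double riskyAnnuity, double factor)
        {
            return factor * shortfall / (spread * riskyAnnuity);
        }

        static void Main()
        {
            double shortfall = 20e6;     // cash still needed to meet promised cashflows
            double spread = 0.01;        // index premium received: 100 bps per annum
            double riskyAnnuity = 4.5;   // PV of 1 unit per annum until maturity or default
            double factor = 1.0;         // fixed factor from the rules

            Console.WriteLine("Initial notional: {0:N0}",
                RequiredNotional(shortfall, spread, riskyAnnuity, factor));

            // Defaults increase the shortfall, so the rules demand that more
            // protection be sold at the next rebalancing.
            shortfall = 25e6;
            Console.WriteLine("After defaults:   {0:N0}",
                RequiredNotional(shortfall, spread, riskyAnnuity, factor));
        }
    }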

Possible Results of the Strategy

The aim, of course, is to generate plenty of money through this speculation in CDS.  The ideal is to generate so much money that it can all be invested in relatively riskless instruments and still pay for the cashflows on the CPDO.  At that point you don’t need to speculate any more: you can ‘cash in’ the CPDO.

An alternative is that you lose so much money from paying out on the CDS that you get to the point where it’s clear you won’t be able to pay the interest rate and principal on the CPDO.  The rules governing the CPDO usually define this ‘cash out’ point: when it is reached the structure will be unwound and the bank will pay what money is left back to the investor.

A final alternative is that neither the ‘cash in’ nor the ‘cash out’ point is ever reached, and the CPDO just expires without sufficient cash to repay the investor in full.

Martingale

Those of you who are familiar with gambling will recognize this as a simple martingale strategy.  If you lose you increase the amount you are betting.  This is in the hope that you will win next time and get all your money back.  Of course if you lose again you can increase the amount you are betting again, but you run the risk of losing substantial amounts of money.
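
Here is the betting rule as a minimal C# sketch, purely to illustrate the analogy (the bankroll, odds and number of rounds are invented):

    using System;

    class Martingale
    {
        static void Main()
        {
            var rng = new Random();
            double bankroll = 100;
            double stake = 1;

            for (int round = 1; round <= 10; round++)
            {
                bool win = rng.NextDouble() < 0.5;   // an even-odds bet
                bankroll += win ? stake : -stake;
                Console.WriteLine("Round {0}: {1}, bankroll = {2}",
                    round, win ? "win" : "lose", bankroll);

                // The martingale rule: double the stake after a loss so that
                // one win recovers everything; reset the stake after a win.
                stake = win ? 1 : stake * 2;
            }
        }
    }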

Effects of Market Moves

There are a few other points to notice about this product:

  • The best case for the investor is to get the high rate of interest on their investment and all of their money back.  The CPDO is not speculative in the sense that the investor can make better returns if the market moves favourably.
  • The investor is selling protection in the credit markets.  The CPDO will lose money when companies get into financial difficulty.  If many get into financial difficulty simultaneously the CPDO may well suffer large losses.  This means the investor will not get much of their money back.  Of course, that’s exactly what happened with many of these instruments.
  • It’s possible to structure a CPDO such that in normal market conditions there is a good chance that the CPDO will cash in.  If the bank is simply trying to get a little bit of extra interest by speculating in CDS over a long period of time then they might usually win their bet.
  • I’ll leave it to the reader to decide whether this is really a product that anyone should have been ‘investing’ in, AAA-rated or not.

Conclusion

This article has given a very high-level overview of CPDOs.  It has inevitably glossed over some details, but hopefully it explains the basic idea.

CPDOs were invented in the credit boom, and when the crash came they lost many people a lot of money.  I doubt we shall see them again any time soon except in lawsuits.

September 8, 2012

Beginner’s Guide to Techniques for Refreshing Web Pages: Ajax, Comet, HTML5


Introduction

This article briefly discusses the technologies used in modern browsers to display web pages, and goes into a little more detail about the user experience on those web pages, in particular how we can get part of a web page to refresh automatically when data changes on a web server.

Browsers and HTML

I’m sure anyone who’s reading this page is aware that the web is based on a request and response process that returns web pages of data.  If I click on a link my browser makes a request for the web page specified in the link, the request gets routed to the appropriate server, and the server responds with a page of HTML which my browser displays.

HTML (hypertext markup language), of course, is a simple markup language that tells a browser where to put text and images on a web page using tags (e.g. <header>).  The request format is a text URL (uniform resource locator) of the kind you see all the time in your browser’s navigation bar.  Furthermore, the returned text can contain additional links that the browser will show underlined and that I can click on.

Anyone who uses the internet understands this, but the success of the web is at least in part due to the simplicity of that model.  The HTML is just text with a few special tags in angle brackets, and all a browser has to do is know how to send a request, handle the response, and then draw a screen using the HTML that’s returned.  Similarly all a web server has to do is know how to receive a request, get the right HTML text to send back, and send it.  Often web servers will simply store the text in text files on their hard drive, and just load and send the right one in response to a request depending on the text of the request.

At root it’s unbelievably simple; just look what it’s turned into.

Other Technologies Used In Web Browsers

Of course modern browsers aren’t as simple as described above and there are a number of other technologies that they understand and developers can use.

Firstly, developers want to write code, so there’s also a programming language embedded into every modern browser.  This is Javascript.

Javascript allows programmers to write little bits of code that can run when events happen in the browser.  The Javascript can manipulate what’s displayed in the browser programmatically, or can perform other actions.

For the Javascript to change what’s displayed it needs to manipulate the HTML.  Obviously this can be done by simply changing the text.  However, there’s a programmatic representation of a web page that Javascript can use to manipulate elements within it.  This representation is called the Document Object Model, or DOM.

Another baseline technology for what gets displayed to the client is Cascading Style Sheets (CSS).  These allow a common look and feel to be applied to a group of web pages without the need for detailed coding in each page.

Drawbacks of the Basic HTML Request/Response Page-Based Model

HTML + Javascript + CSS allows us to create quite sophisticated web pages.  However, there’s one big drawback with the model as described above: to display new data we have to click on a link to request a whole new page and wait whilst it loads.

For a more sophisticated user experience there are a few things we might like to have:

  1. The ability to refresh part of a web page without reloading the entire page.  Initially this could be initiated by the user clicking a button, but we want just the relevant data to update, not the entire page.
  2. The ability to do this refresh whilst allowing the user to continue to interact with the rest of the page.  That is, the request shouldn’t block the user, it should be asynchronous.
  3. The ability to update the page when data changes on the server without the user having to refresh in any way.

1.  Refreshing Part of a Web Page

The first problem that developers tried to solve was updating part of a web page in place without reloading the entire page.  There are several ways of doing this.

IFrames

Some simple approaches predate Ajax.  One is to use IFrames.  These are HTML elements within a page that can issue their own requests to a web site and render the results independently of the rest of the page.  They have a src attribute that can be set to a URL.  If you set the src to a different URL, or reset it to the same one (say on a button click), the new data will appear without a full page reload.

Many developers don’t like IFrames.  Search engines can get confused by them.  They may show scrollbars if your content doesn’t fit correctly.  Worse, if your user has scrolled to the bottom of a page and you then load a new, shorter page in the same frame, they may find themselves off the bottom of it.  And because of the browser’s same-origin restrictions, scripts on your page usually can’t interact with IFrame content loaded from a different site.  All of this means people have looked for better solutions.

Script Injection

Another approach to refreshing part of a web page is client-side script injection.  This takes advantage of the fact that Javascript code in a web page can be retrieved from a server via the src attribute of a script tag.

The basic approach is the same as for IFrames: we can set or reset the src of the script code, and the browser will retrieve the script from the URL and execute it.  If we send back valid Javascript that updates part of our web page, or calls a function that does, then we don’t have to refresh the entire page.

One advantage of this approach is that script tags can issue requests to any URL, not just the same site as the page they are on.  One disadvantage of this approach is that it can lead to security vulnerabilities in the code.

JSONP

JSONP is just a way of using client-side script injection across domains to get data from a different website: we request a script from the other site’s server, and it sends the data back as the argument to a Javascript function call, which immediately executes and uses the payload.

2.  Refreshing Part of a Web Page Asynchronously

Ajax (Asynchronous Javascript and XML) is probably the primary technology for this.  Ajax is actually a label applied to a way of using many of the technologies described above to allow web pages to be displayed and then be updated asynchronously without reloading the entire page.

The main distinguishing feature of Ajax is that it uses a relatively new request/response mechanism called XMLHttpRequest.  When a browser makes a request using XMLHttpRequest it registers a Javascript callback function that the browser invokes when the response arrives.  This function has access to the data sent back from the server.

The original call to the server will not block and will not reload the page.  The user can carry on interacting with the page as usual, even if the call to the server takes some time.

It is up to the callback function to make whatever changes it needs to make to the web page using the usual Javascript techniques described above.  This will typically involve updating just a part of the screen.

One thing to note here is that the data returned is just text.  It doesn’t have to be XML, in spite of the names (XMLHttpRequest, Ajax).

3.  Updating a Page Automatically when Data Changes on the Server

Ajax as described so far updates a page in place, but only in response to a request from the web page.  This means that the user has to click a button or something similar for the page to update.

Obviously there are situations where data is changing and we would like it to update on our web page without the need for the user to manually refresh.

There are quite a few ways of doing this, some of them direct extensions to the Ajax model described above:

Polling

Javascript allows us to fire events that run code in the browser at set intervals.  So a very simple approach is to periodically request a refresh of the part of the screen we are interested in updating.  We can do this using the Ajax techniques above, so that the rest of the screen remains responsive.
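
In the browser this would be done with Javascript’s setInterval plus an Ajax request.  Since the code examples on this blog are in C#, here is the same idea as a minimal C# console sketch; the URL is a made-up placeholder and a modern C# compiler (async/await) is assumed.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class PollingClient
    {
        static async Task Main()
        {
            using var client = new HttpClient();

            while (true)
            {
                // Ask the server for the latest data whether or not anything
                // has changed, then update the relevant part of the display.
                string data = await client.GetStringAsync("http://example.com/latest");
                Console.WriteLine("{0:T}: {1}", DateTime.Now, data);

                // Wait a fixed interval before asking again.
                await Task.Delay(TimeSpan.FromSeconds(5));
            }
        }
    }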

The drawback to this is we may make requests when no data has changed, putting unnecessary load on our servers.  Also our data on the client may well be out of date at any given time if we are between polling requests.

We really want a way for our server to send data only when it’s changed, and at the moment it has changed.

Long Polling

Another approach is long polling.  Here the browser fires off a request with a long timeout and sets up a handler for the results using Ajax as before.  However, the server itself doesn’t respond until it has data that has changed, and then it sends the data in response to the original request.  The browser handles the request and immediately sets up another long timeout request for future updates.
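
A minimal C# sketch of the client side of long polling, again with an invented URL: the request is given a long timeout, and a new request is issued as soon as a response (or a timeout) arrives.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class LongPollingClient
    {
        static async Task Main()
        {
            // Allow the server to hold each request open for a long time.
            using var client = new HttpClient { Timeout = TimeSpan.FromMinutes(5) };

            while (true)
            {
                try
                {
                    // The server only answers this request when it has new data.
                    string update = await client.GetStringAsync("http://example.com/updates");
                    Console.WriteLine("Update: {0}", update);
                }
                catch (TaskCanceledException)
                {
                    // Timed out with no new data: fall through and reconnect.
                }
                // Immediately issue the next long request.
            }
        }
    }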

The disadvantage of this approach is that the server has to keep the connection (a network socket) open until it has data.  In general it will have as many connections as it has clients waiting for data.  This obviously puts load on the server and the number of sockets that the server can possibly use becomes a limiting factor.  Also this is clearly a more complex solution to implement than normal (short) polling.

Streaming

In streaming the client makes a request and the server responds, but keeps the communication channel open so that it can send further responses to the client later.  The server may eventually time out the connection, or may keep it open indefinitely.  If the connection times out the client will have to make another request to refresh the data.  So this approach is like long polling, but with the client needing to make fewer requests.

One drawback of this approach is that many proxy servers buffer HTTP responses until they are complete: that is, they won’t pass on the message until they have the complete response.  This means the client won’t get timely updates.  Another obvious drawback is that this is a fairly complex way of keeping data up to date.

With all of these approaches the callbacks from the server tend to tie up one HTTP communication channel.  As a result many approaches to solving the problem use (at least) two channels: one for polling or streaming to update the data in place, and one for regular requests from the client to the server.

A number of commercial frameworks have been built using these techniques.

Comet

Comet is a name that’s been applied to the techniques described above that update a web page in place automatically when data changes on the server, using a long-lasting HTTP connection.

HTML 5 Web Sockets

HTML 5 web sockets are the new way to do bidirectional communication between a web page and a server.  After an initial handshake they don’t use the old HTTP request/response mechanism at all, but instead set up one dedicated channel for communication between client and server.  This is fast, and the messages involve very little redundant header information, unlike conventional HTTP requests.
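
Here is a minimal C# client sketch using the framework’s ClientWebSocket class (.Net 4.5 onwards); the URL and the ‘subscribe’ message are invented placeholders.

    using System;
    using System.Net.WebSockets;
    using System.Text;
    using System.Threading;
    using System.Threading.Tasks;

    class WebSocketClientDemo
    {
        static async Task Main()
        {
            using var socket = new ClientWebSocket();
            await socket.ConnectAsync(new Uri("ws://example.com/updates"),
                                      CancellationToken.None);

            // Send one message to the server over the dedicated channel.
            byte[] subscribe = Encoding.UTF8.GetBytes("subscribe");
            await socket.SendAsync(new ArraySegment<byte>(subscribe),
                                   WebSocketMessageType.Text, true, CancellationToken.None);

            // The server can now push messages whenever it likes: no polling.
            var buffer = new byte[4096];
            while (socket.State == WebSocketState.Open)
            {
                WebSocketReceiveResult result = await socket.ReceiveAsync(
                    new ArraySegment<byte>(buffer), CancellationToken.None);
                Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, result.Count));
            }
        }
    }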

The main drawback of this new technology currently is that many browsers do not support it.  For example, it doesn’t work in the latest version of Internet Explorer, IE9, although it will work in IE10.


August 25, 2012

15-Minute Beginner’s Guide to Windows 8


Introduction

Windows 8 has been released to manufacturing, and is available to developers on MSDN.  It’s quite disorientating for people who’ve worked with Windows for a while.  I’ve been playing with it and wrote some notes for myself, so I thought I’d turn them into a quick guide to navigating your way around it.

I’m going to assume you’re experienced with previous versions of Windows, you’ve managed to get Windows 8 installed, and have got past the logon screen to the start screen.  I’m also going to assume you’re a developer and therefore don’t like to reach for the mouse too much whilst working: there will be a lot of shortcut keys in this.  I’m also assuming you don’t have a touch screen.

Philosophy

The first thing to realize is that Windows 8 is intended to be both a desktop operating system (OS) and a tablet operating system.  This is logical: Microsoft need a version of Windows that can run on low-powered tablets, so they either had to write a new OS or make Windows itself capable of doing it.  They went for the latter.

However, desktop and tablet operating systems are inevitably slightly different.  Windows 8  on a PC is effectively a desktop operating system with a tablet operating system embedded in it.

Windows 8 Style User Interface (previously ‘Metro’)

The tablet part of the new OS has a new tiled user interface design, currently called the ‘Windows 8 style user interface’.  It also has tablet-style apps that run full screen.  There’s a store for the apps: it looks like Microsoft is going to pursue the proprietary locked-in approach to tablet software that other companies are using.  Apple fanboys might want to think about the effects of Apple’s approach on the industry.

Of course Windows 8 still has a full old Windows 7 style desktop within it, including all of the old desktop applications that don’t have to run full screen.

Start Screen

The start screen is the one you see in all the screenshots.

Think of the start screen as a fullscreen and more sophisticated version of the start menu in Windows 7.  It even starts in a similar way: you go to the far bottom left of the screen and click.

Obviously you can click on any of the tiles to launch the new apps.  You can also click on tiles for old desktop apps, although you may need to set them up.  You can also navigate and launch apps by using the arrow keys and Enter.

You can get back to the start screen once you’ve launched an app if  you hit the Windows key, or, as already mentioned, if you move your mouse to the far bottom left and click.  Hitting the Windows key again will take you back to where you were.

Rearranging the Start Screen

You can drag tiles around on the start screen to rearrange them.  You can move the mouse to the far left or far right to scroll.  If you right-click the background to the start screen an option for ‘All Apps’ appears, and you can right-click one of these to add it to the main start screen.

You can zoom out by clicking the little minus sign in the bottom right of the screen.  This is useful if you’ve set up a lot of tiles.  It allows you to move groups of tiles around by dragging, and to name them, by right-clicking.

Search

You can just start typing the name of your application with the start screen visible.  It will immediately show a search screen and filter down to your application in a few keystrokes, after which you can just hit ‘enter’.  Again this is very similar to starting an application with the keyboard in Windows 7 via the Start menu, except it’s faster and far more powerful.

You can also bring up the search screen from the charms menu (see below), or with Windows+q.

You can search for files or settings; just use the options underneath the search box. You can also search WITHIN an app in the same way.  For example, the way to search in the Wikipedia app is to use the Windows 8 search menu: there’s no visible search functionality in the app’s own screens.

You can add applications to the start screen from the search screen as well.  Find the item you want to add with the search features and right-click it.

Windows 7 Style Desktop

You can get to the old style desktop by clicking on the desktop tile on the start screen, or by hitting Windows+d.  If the desktop is already running there are other ways of getting to it: more on this later.

The only really noticeable change in the new desktop is that the Start button has disappeared, to be replaced by the start screen as discussed above.  There are a few minor improvements to the desktop as well: for example, Task Manager is far more powerful, and if you do a large file copy you get a little chart of the speed over time in the copy dialog.  Also Windows Explorer now has a ribbon interface.

Missing from the Windows 8 desktop are the gadgets that you could set up on the desktop, and the Aero glass look for the title bars of the windows.  They aren’t available.  Window title bars don’t even have a gradient, they are just solid blocks of colour.  This is the new ‘chromeless’ look: it also affects things like scrollbars and buttons.

‘Charms’ Menu

Windows+c, or moving the mouse to the bottom right or top right, brings up the so-called ‘charms’ menu from anywhere.  It slides out from the right side of the screen.  This has icons for Search, Share, Start, Devices and Settings.  Search and Start bring up the relevant screens discussed above.

Settings

The Settings icon on the charms menu lets you access the full tablet settings screen by clicking on ‘Change PC Settings’ at the very bottom.  Here you can do things like change the picture on the lock screen or change the background to the Start screen (under ‘Personalize’), or change your password (under ‘Users’).

Tablet Apps

The tablet apps need work, although some of them are already pretty good.  Obviously you visit the Store app from the start screen to browse and install additional apps.  The default weather app is a good example: it shows a lot more detail if you scroll to the right.

Many of the apps have menus in them.  To bring these up right-click on the background, or use Windows+z.

You can close an app with Alt + F4 or by moving the mouse to the top middle of the screen and dragging all the way to the bottom.  You can also right-click the app in the left-hand slideout menu described below.

You usually have to scroll in an app by moving to the bottom of the screen and using the scrollbar that appears.  There don’t appear to be any mouse gestures to scroll.

Moving Between Apps

If you move the mouse to the bottom left and then move up, or to the top left and then move down, then a slideout menu appears on the left side of the screen with all the tablet apps previewed apart from the one you are currently in.  You can click on one to go to it.  This menu treats the entire desktop as one tablet app.

You can also bring up the slideout menu and tab between apps with Windows+Tab.  This is actually a bit annoying as it doesn’t include the current app so you can’t change your mind and stay where you are.

You can move to the last tablet app you were in by moving the mouse to the top left and clicking.

You can move between all open applications, desktop plus tablet, with Alt-Tab.

Tiling Tablet Apps

You can’t actually fully tile tablet apps, but you can show a main app and have a second one in a sidebar at the left- or right-hand side of the screen.  This is called the ‘Snap’ feature.  The sidebar will stay there as you show different apps in the main window area, including if you bring up the desktop.

By default this only works on fairly high resolution screens, 1366×768 or higher, which means it won’t work on most laptops or corporate desktops unfortunately.

To set this up bring up the left-hand slideout menu (bottom left and move up with the mouse), left-click the open app that you want in a sidebar, and drag it into the sidebar position.

You can make the desktop itself into a sidebar, in which case it shows the open desktop applications.  You can also drag the sidebar divider to the right or left, which will close the sidebar or make it the main app.

Browsers

There are actually two versions of Internet Explorer 10 in Windows 8: the tablet app version and the version that runs on the desktop.  The app version has a more pared-down interface, but more significantly it will only run Adobe Flash on certain websites that Microsoft has vetted as safe.  The desktop version has no such restrictions.

Old Windows Keys Combinations

Most of the useful old Windows keys combinations still work from anywhere, including in tablet apps.  So Windows+e will bring up a Windows Explorer window on the desktop from anywhere, Windows+m will go to the desktop and minimize all applications.

Start Screen Right Click Menu

If you move your mouse to the bottom left to bring up the start screen icon and then right-click instead of left-clicking you get a handy power user menu for desktop functionality.  This works from anywhere.  The menu includes options to go directly to the Explorer, Task Manager, Event Viewer, Control Panel, Search, Desktop or an admin Command Prompt.

February 7, 2012

Delegate Syntax in C# for Beginners


Introduction

I have been programming with C# since it came out but I still find the delegate syntax confusing.  This is at least partially because Microsoft have changed the recommended syntax regularly over the years.  This article is a quick recap of the various syntaxes.  It also looks at some of the issues with using them in practice.  It’s worth knowing about all the various syntaxes as you will almost certainly see all of them used.

This article is just a recap: it assumes that you know what a delegate is and why you’d want to use one.

.Net and Visual Studio Versions

The first thing to note is that you can use any of these syntaxes as long as you are using Visual Studio 2008 or later and targeting .Net 2.0 or later.

Named methods were available in .Net 1.0, anonymous methods were introduced in C# 2.0 (.Net 2.0), and lambda expressions were introduced in C# 3.0, which shipped with Visual Studio 2008 and .Net 3.5.  However, the C# 3.0 compiler can target the .Net 2.0 runtime, so lambda expressions will compile to .Net 2.0 assuming you have the appropriate version of Visual Studio.

Note also that lambda expressions can do (almost) everything anonymous methods can do, and effectively supersede them as the preferred way of writing inline delegate code.

Code

A listing of the code for this article is available.  The complete working program is also available.

The Delegate

For all of these examples we need a delegate definition.  We’ll use the one below initially.

        private delegate void TestDel(string s);

Named Methods

Named methods are perhaps the easiest delegate syntax to understand intuitively.  A delegate is a typesafe method pointer.  So we define a method:

        private void Test(string s)
        {
            Console.WriteLine(s);
        }

Now we create an instance of our method pointer (the delegate above) and point it at our method.  Then we can call our method by invoking the delegate.  The code below prints out ‘Hello World 1’.  This is easy enough, but all a little cumbersome.

            TestDel td = new TestDel(Test);
            td("Hello World 1");

There’s one slight simplification we can use.  Instead of having to explicitly instantiate our delegate with the new keyword we can simply point the delegate directly at the method, as shown below.  This syntax does exactly the same thing as the syntax above, only (maybe) it’s slightly clearer.

            TestDel td2 = Test;
            td2("Hello World 2");

There is an MSDN page on named methods.

Anonymous Methods

The anonymous method syntax was introduced to avoid the need to create a separate method.  We just create the method in the same place we create the delegate.  We use the ‘delegate’ keyword as below.

            TestDel td3 = 
                delegate(string s)
                {
                    Console.WriteLine(s);
                };
            td3("Hello World 3");

Now when we invoke td3 (in the last line) the code between the curly braces executes.

One advantage of this syntax is that we can capture a local variable in the calling method without explicitly passing it into our new method.  We can form a closure.  Since in this example we don’t need to pass our string in as a parameter we use a different delegate:

        private delegate void TestDelNoParams();

We can use this as below.  Note that the message variable is not explicitly passed into our new method, but can nevertheless be used.

            string message = "Hello World 4";
            TestDelNoParams td4 = 
                delegate()
                {
                    Console.WriteLine(message);
                };
            td4();

There is an MSDN page on anonymous methods.

Lambda Expressions

Lambda expressions were primarily introduced to support Linq, but they can be used with delegates in a very similar way to anonymous methods.

There are two basic sorts of lambda expressions.  The first type is an expression lambda.  This can only have one statement (an expression) in its method.  The syntax is below.

            TestDel td5 =  s => Console.WriteLine(s);
            td5("Hello World 5");

The second type is a statement lambda: this can have multiple statements in its method as below.

            string message2 = "Hello World 8";
            TestDel td6 =
                s => 
                { 
                    Console.WriteLine(s); 
                    Console.WriteLine("Hello World 7");
                    Console.WriteLine(message2);
                };
            td6("Hello World 6");

Note that this example also shows a local variable being captured (a closure being created).  We can also capture variables with expression lambdas.

There is an MSDN page on lambda expressions.

Return Values

All of the examples above can be extended in a simple way to return a value.  (An expression lambda needs no explicit return statement: it implicitly returns the value of its single expression.)  Doing this is usually an obvious change: we change our delegate signature so that the method it points to returns a value, and then we simply change the method definition to return a value as usual.  For example the statement lambda example above becomes as below.  The invocation of tdr6 now returns “Hello ” + message2, which we write to the console after the invocation returns:

            string message2 = "World 8";
            TestDelReturn tdr6 =
                s =>
                {
                    Console.WriteLine(s);
                    Console.WriteLine("Hello World 7");
                    return "Hello " + message2;
                };
            Console.WriteLine(tdr6("Hello World 6"));

The full list of all the examples above modified to return a value can be seen in the code listing in the method ExamplesWithReturnValues.

Events

All of these syntaxes can be used to set up a method to be called when an event fires.  To add a delegate instance to an event we use the ‘+=’ syntax of course.  Suppose we define an event of type TestDel:

        private event TestDel TestDelEventHandler;

We can add a delegate instance to this event using any of the syntaxes in an obvious way.  For example, to use a statement lambda the syntax is below.  This looks a little odd, but certainly makes it easier to set up and understand event handling code.

            TestDelEventHandler += s => { Console.WriteLine(s); };
            TestDelEventHandler("Hello World 24");

Examples of setting up events using any of the syntaxes above can be found in the code listing.

Passing Delegates into Methods as Parameters: Basic Case

Similarly all of the syntaxes can be used to pass a delegate into a method, which again gives some odd-looking syntax.  Suppose we have a method as below that takes a delegate as a parameter.

        private void CallTestDel(TestDel testDel)
        {
            testDel("Hello World 30");
        }

Then all of the syntaxes below are valid:

            CallTestDel(new TestDel(Test));  // Named method
            CallTestDel(Test);               // Simplified named method
            CallTestDel(delegate(string s) { Console.WriteLine(s); });  // Anonymous method
            CallTestDel(s => Console.WriteLine(s));  // Expression lambda
            CallTestDel(s => { Console.WriteLine(s); Console.WriteLine("Hello World 32"); });  // Statement lambda

Passing Delegates into Methods as Parameters: When You Actually Need a Type of ‘Delegate’

Now suppose we have a method as below that expects a parameter of type Delegate.

        private void CallDelegate(Delegate del)
        {
            del.DynamicInvoke(new object[] { "Hello World 31" });
        }

The Delegate class is the base class for all delegates, so we can pass any delegate into CallDelegate.  However, because the base Delegate class doesn’t know the method signature of the delegate we can’t call Invoke with the correct parameters on the Delegate instance.  Instead we call DynamicInvoke with an object[] array of parameters as shown.

Note that there are some methods that take Delegate as a parameter in the framework (e.g. BeginInvoke on a WPF Dispatcher object).

There’s a slightly unobvious change to the ‘Basic Case’ syntax above if we want to call this method using the anonymous method or lambda expression syntax.  The code below for calling CallDelegate with an expression lambda does NOT work.

            CallDelegate(s => Console.WriteLine(s));  // Expression lambda

The reason is that the compiler needs to create a delegate of an appropriate type, cast it to the base Delegate type, and pass it into the method.  However, it has no idea what type of delegate to create.

To fix this we need to tell the compiler what type of delegate to create (TestDel in this example).  We can do this with the usual casting syntax (and a few more parentheses) as shown below.

            CallDelegate((TestDel)(s => Console.WriteLine(s)));  // Expression lambda

This looks a little strange as we don’t normally need a cast when assigning a derived type to a base type, and in any case we’re apparently casting to a different type to the type the method call needs.  However, this syntax is simply to tell the compiler what type of delegate to create in the first place: the cast to the base type is still implicit.

We need to do this for any of the syntaxes apart from the very basic named method syntax (where we’re explicitly creating the correct delegate):

            CallDelegate(new TestDel(Test));  // Named method
            CallDelegate((TestDel)Test);      // Simplified named method
            CallDelegate((TestDel)delegate(string s) { Console.WriteLine(s); });  // Anonymous method
            CallDelegate((TestDel)(s => Console.WriteLine(s)));  // Expression lambda
            CallDelegate((TestDel)(s => { Console.WriteLine(s); Console.WriteLine("Hello World 32"); }));  // Statement lambda

Actions/Funcs

There is one further simplification that we can use in the examples in this article.  Instead of defining our own delegates (TestDel etc.) we can use the more generic Action and Func delegates provided in the framework.  So, for example, everywhere we use TestDel, which takes a string and returns void, we could use Action<string> instead, since it has the same signature.
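
For example, the expression lambda from earlier can be rewritten with no delegate definition at all, and Func can stand in where a return value is needed (the ‘Hello World’ numbering below just continues this article’s convention):

            Action<string> action = s => Console.WriteLine(s);
            action("Hello World 40");

            // The last type parameter of Func is the return type.
            Func<string, string> func = s => "Hello " + s;
            Console.WriteLine(func("World 41"));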

August 8, 2011

A Beginner’s Guide To Credit Default Swaps (Part 4)

Introduction

This post continues the discussion of changes in the credit default swap (CDS) since 2007.  Part 2 and part 3 of this series of articles discussed changes in the mechanics of CDS trading.  This part will discuss changes around how credit events are handled, and future changes in the market.

Changes in the CDS Market re Credit Events Since 2007

  • Determination committees (DCs) have been set up to work out if a credit event has occurred, and to oversee various aspects of dealing with a credit event for the market.  A ‘determination committee’ is simply a group of CDS traders of various kinds, although overseen by ISDA (the standards body). The parties to one of the new standard contracts agree to be bound by the committee’s decisions.
  • Auctions are now conducted to determine the price to cash-settle credit default swaps when there is a credit event.  For this we need to determine the current price of the bonds in default.  To do this we get a group of dealers to quote prices at which they are prepared to trade the bonds (and may have to), and then calculate the price via an averaging process.  This can get quite complicated.  The determination committees oversee these auctions.
  • Classes of events that lead to credit events have been simplified.  In particular whether ‘restructuring’ is a credit event has been standardized (although the standards are different in North America, Asia and Europe).  ‘Restructuring’ means such things as changing the maturity of a bond, or changing its currency.
  • There is now a ‘lookback period’ for credit events regardless of when a CDS is traded.  What this means is that credit events that have happened in the past 60 days (only) can trigger a contract payout.  This simplifies things because the same CDS traded on different days is now treated identically in this regard.

Terminology and a Little History

The changes described so far in this article were introduced in 2009.  For North America, which went first, this was known as the ‘CDS Big Bang’.  The standard contract terms thus introduced were known as the ‘Standard North American CDS Contract’ or ‘SNAC’ (pronounced ‘snack’).  The later changes in Europe were known as the ‘CDS Small Bang’.  The final standardization of Asian contracts occurred later still.

Much more detail on all of this can be found on the links to the excellent MarkIt papers above.

Future Changes

Further standardization in the credit default swap market will occur as a result of the Dodd-Frank Act in the USA. This mandates that standard swaps (such as standard CDS) be traded through a ‘swap execution facility’ (SEF). It further mandates that any such trades be cleared through a central clearing house.  Europe is likely to impose a similar regulatory regime, but is behind the United States.  More detail on SEFs and clearing houses is below.

The primary aims of these changes are:

1/ Greater transparency of trading. Currently many swaps are traded over-the-counter with no disclosure other than between the two counterparties. This makes it difficult to assess the size of the market, or the effects of a default.

2/ Reduced risk in the market overall from the bankruptcy of one participant.

The exact details of these changes are still being worked on by the regulators.

Swap Execution Facilities (SEFs)

At the time of writing it’s not even clear exactly what a ‘SEF’ is.  The Act defines a SEF as a “facility, trading system or platform in which multiple participants have the ability to execute or trade Swaps by accepting bids and offers made by other participants that are open to multiple participants”. That is, a SEF is a place where any participant can see and trade on current prices. There are some additional requirements of SEFs relating to providing public data relating to price and volume, and preventing market abuses.

In many ways a SEF will be very similar to an existing exchange. As mentioned the exact details are still being worked on.

A number of the existing electronic platforms for the trading of CDS are likely to become SEFs.

Clearing Houses

Central clearing houses are another mechanism for reducing risk in a market.

When a trade is done both parties to the trade can agree that it will be cleared through a clearing house.  This means that the clearing house becomes the counterparty to both sides of the trade: rather than bank A buying from bank B, bank A buys from the clearing house, and bank B sells to the clearing house.

Obviously the clearing house has no risk from the trades themselves.  The clearing house is exposed to the risk that either bank A or bank B goes bankrupt and thus can’t pay its obligations from the trade.  To mitigate this the clearing house will demand cash or other assets from both banks A and B.  This is known as ‘margin’.

The advantage of this arrangement is that the clearing house can guarantee that bank A will be unaffected even if bank B goes bankrupt.  The only counterparty risk for bank A is that the clearing house itself goes bankrupt.  This is unlikely since the clearing house will have no market risk, be well capitalized, and demands margin for all transactions.

Clearing houses and exchanges are often linked (and may be the same entity), but they are distinct concepts: the exchange is the place where you go to get prices and trade, the clearing house deals with the settlement of the trade. Usually clearing houses only have a restricted number of ‘members’ who are allowed to clear trades. Anyone else wanting clearing services has to get them indirectly through one of these members.

At the time of writing there are already a few central clearing houses for credit default swaps in operation, and more are on the way.

Conclusion

Since 2007 contracts for credit default swaps have been standardized.  This has simplified the way in which the market works overall: it’s reduced the scope for difficulties when a credit event happens, simplified the processing of premium payments, and allowed similar CDS contracts to be netted together more easily.  At the same time it has made understanding the mechanics of the market more difficult.

Further changes are in the pipeline for the CDS market to use ‘swap execution facilities’ and clearing houses.

August 4, 2011

A Beginner’s Guide to Credit Default Swaps (Part 3)

Introduction

Part 1 of this series of articles described the basic mechanics of a credit default swap.

Part 2 started to describe some of the changes in the market since part 1 was written.  This part will continue that description by describing the upfront fee that is now paid on a standard CDS contract, and the impact of the changes on how CDS are quoted in the market.

Standard Premiums mean there is a Fee

Part 2 discussed how CDS contracts have been standardized.  One of the ways in which they have been standardized is that there are now standard premiums.

Now consider the case where I buy protection on a five-year CDS.  I enter into a standard contract with a premium of 500 basis points (5%).  It may be that the premium I would have paid under the old nonstandard contract for the same dates and terms would have been 450 basis points.  However, now I’m paying 500 basis points.

Clearly I need to be compensated for the 50 bps difference or I won’t want to enter into the trade under the new terms.

As a result an upfront fee is paid to me when the contract is started.  This represents the 50 basis points difference over the life of the trade, so that I am paying the same amount overall as under the old contract.

Note that in this case I (the protection buyer) am receiving the payment, but it could easily be that I pay this upfront fee (if, for example, the nonstandard contract would have traded at 550 bps).

Upfront Fee Calculation

The calculation of the fee from the ‘old’ premium (spread) is not trivial.  It takes into account discounting, and also the possibility that the reference entity will default, which would mean the premium would not be paid for the full life of the trade.  However, this calculation too has been standardized by the contracts body (ISDA).  There is a standard model that does it for us.
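
To give a feel for the shape of the calculation only, here is a toy C# approximation.  This is emphatically not the ISDA standard model: it assumes a flat hazard rate, a flat interest rate and annual premium payments, and every number in it is illustrative.

    using System;

    class ToyUpfront
    {
        // Crude approximation, NOT the ISDA standard model.
        static double UpfrontToProtectionBuyer(double couponBps, double parSpreadBps,
                                               int years, double rate, double recovery)
        {
            // Hazard rate implied by the par spread under these assumptions.
            double hazard = (parSpreadBps / 10000.0) / (1 - recovery);

            // Risky annuity: PV of 1 unit per annum while the name survives.
            double annuity = 0;
            for (int t = 1; t <= years; t++)
                annuity += Math.Exp(-(rate + hazard) * t);

            // The buyer pays the standard coupon but fair value is the par
            // spread: the difference, valued over the risky annuity, is the
            // upfront fee (positive = paid to the buyer).
            return (couponBps - parSpreadBps) / 10000.0 * annuity;
        }

        static void Main()
        {
            // 5-year CDS, standard 500 bps coupon, 'old style' premium 450 bps.
            double fee = UpfrontToProtectionBuyer(500, 450, 5, 0.02, 0.40);
            Console.WriteLine("Upfront fee to buyer: {0:P2} of notional", fee);
        }
    }

With these illustrative numbers the protection buyer receives roughly 1.9% of notional upfront, compensating for the 50 bps of extra premium over the life of the trade.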

The Full First Coupon means there is a Fee

In the example in part 2 I discussed how I might pay for a full three months’ protection at the first premium payment date for a CDS trade, even though I hadn’t had protection for three months.

Once again I need compensation for this or I will prefer to enter into the old contract.  So once again there is a fee paid to me when I enter into the trade.

This is known as an ‘accrual payment’ because of the similarity to accrued interest payment for bonds.  Here the calculation is simple: it’s the premium rate applied to the face value of the trade for the period from the last premium payment date to the trade date.

That is, it’s the amount I’ll be paying for protection that I haven’t received as part of the first premium payment.  Note no discounting is applied to this.
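
This one is simple enough to sketch in full.  The actual/360 day count below is an illustrative assumption (day-count conventions vary), as are the dates:

    using System;

    class AccrualDemo
    {
        // Premium rate applied to the face value for the period from the
        // last premium payment date to the trade date; no discounting.
        static double Accrual(double faceValue, double premiumRate,
                              DateTime lastCouponDate, DateTime tradeDate)
        {
            int days = (tradeDate - lastCouponDate).Days;
            return faceValue * premiumRate * days / 360.0;
        }

        static void Main()
        {
            // $100m at 500 bps, traded 5th July, last coupon date 20th June.
            double payment = Accrual(100e6, 0.05,
                new DateTime(2011, 6, 20), new DateTime(2011, 7, 5));
            Console.WriteLine("Accrual payment to buyer: {0:N0}", payment);  // 208,333
        }
    }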

Upfront Fee/Accrual Payment

So in summary the new contract standardization means that a payment is now always made when a standard CDS contract is traded.

Part of the payment is the upfront fee that compensates for the difference between the standard premium (100 or 500 bps in North America) and the actual premium for the trade.  This can be in either direction (payment from protection buyer to seller or vice versa).  Part of the payment is the accrual payment made to the protection buyer to compensate them for the fact that they have to make a full first coupon payment.

How CDS are Quoted in the Market

Prior to these changes CDS were traded by simply quoting the premium that would be paid throughout the life of the trade.

With the contract standardization clearly the premium paid through the life of the trade will not vary with market conditions (it will always be 100 or 500 bps in North America, for example), so quoting it makes little sense.

Instead the dealers will quote one of:

a) Points Upfront
‘Points upfront’ or just ‘points’ refer to the upfront fee as a percentage of the notional.  For example, a CDS might be quoted as 3 ‘points upfront’ to buy protection.  This means the upfront fee (excluding the accrual payment) is 3% of the notional.  ‘Points upfront’ have a sign: if the points are quoted as a negative then the protection buyer is paid the upfront fee by the protection seller.  If the points are positive it’s the other way around.

b)  Price
With price we quote ‘like a bond’: we take the price away from 100 to get points.  That is, points = 100 – price.  So in the example above where a CDS is quoted as 3 points to buy protection, the price will be 97.  The protection buyer still pays the 3% as an upfront fee of course; a small sketch converting between these quotes appears after this list.

c)  Spread
Dealers are so used to quoting spread that they have carried on doing so in some markets, even for standard contracts that pay a standard premium.  That is, they still quote the periodic premium amount you would have been paying if you had bought prior to the standardization.  As already mentioned, there is a standard model for turning this number into the upfront fee that actually needs to be paid.
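
The conversion between the points and price quotes in a) and b) is trivial; the C# sketch below uses an invented $10m notional.

    using System;

    class QuoteConversion
    {
        static void Main()
        {
            double price = 97.0;              // quoted 'like a bond'
            double points = 100.0 - price;    // 3 points upfront
            double notional = 10e6;           // $10m face value, for illustration

            // Positive points: the protection buyer pays the fee.
            Console.WriteLine("Upfront fee: {0:N0}", points / 100.0 * notional);  // 300,000
        }
    }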

Conclusion

This part concludes the discussion of the changes in the mechanics of CDS trading since 2007.  As you can see, in many ways the standardization of the CDS market has actually made it more complicated.  The things to remember are that premiums, premium and maturity dates, and the amounts paid at premium dates have all been standardized in a standard contract.  This has meant there is an upfront fee for all standard CDS, and that they are quoted differently in the market from before.  It has also meant that CDS positions can be more easily netted against each other, and that the mechanics of calculating and settling premiums have been simplified.

Part 4 of this series will examine some of the other changes since 2007, and changes that are coming.

July 19, 2011

A Beginner’s Guide to Credit Default Swaps (Part 2)

Introduction

Part 1 of the ‘Beginner’s Guide to Credit Default Swaps’ was written in 2007. Since that time we have seen what many are calling the greatest financial crisis since the Great Depression, and a global recession.

Rightly or wrongly, some of the blame for the crisis has been attributed to credit derivatives and speculation in them.  This has led to calls for a more transparent and better regulated credit default swap (CDS) market. Furthermore the CDS market has grown very quickly, and by 2009 it had become clear that some simple changes to operational procedures would benefit everyone.

As a result many changes in the market have already been implemented, and more are on the way. This article will discuss these changes.  It will focus primarily on how the mechanics of trading a credit default swap have changed, rather than the history of how we got here or why these changes have been made. I’ll also briefly discuss the further changes that are on the way.

Overview of the Changes

The first thing to note is that nothing has fundamentally changed from the description of a credit default swap in part 1. A credit default swap is still a contract that provides a kind of insurance against a company defaulting on its bonds. If you have read and understood part one then you should understand how a credit default swap works.

The main change that has happened is that credit default swap contracts have been standardized. This standardization falls into three broad categories:

  1. Changes to the premium, premium and maturity dates, and premium payments that simplify the mechanics of CDS trading.
  2. Changes to the processes around identifying whether a credit event has occurred.
  3. Changes to the processes around what happens when a credit event has occurred.

Items 2 and 3 are extremely important, and have removed many of the problems that were discussed in part 1 relating to credit events. However, they don’t affect the way credit default swaps are traded as fundamentally as item 1, and are arguably more boring, so we’ll start with item 1.

The Non-Standard Nature of Credit Default Swaps Previously

If I buy 100 IBM shares and then buy 100 more I know that I have a position of 200 IBM shares.  I can go to a broker and sell 200 IBM shares to get rid of (close out) this position.

One of the problems with credit default swaps (CDS) as described in part 1 of this series of articles is that you couldn’t do this.  Every CDS trade was different, and it was consequently difficult to close out positions.

Using the description in part 1, consider the case where I have some senior IBM bonds.  I have bought protection against IBM default using a five year CDS.  Now I decide to sell the bonds and want to close out my CDS.  It’s difficult to do this by selling a five year CDS as described previously.  Even if I can get the bonds being covered, the definition of default, the maturity date and all the premium payment dates to match exactly, it’s likely that the premiums to be paid will be different from those on the original CDS.  This means a calculation has to be done for both trades separately at each premium payment date.

Standardization

To address this issue a standard contract has been introduced that has:

1.  Standard Maturity Dates

There are four dates per year, the ‘IMM dates’ that can be the maturity date of a standard contract: 20th March, 20th June, 20th September, and 20th December.  This means that if today is 5th July 2011 and I want to trade a standard five-year CDS I will normally enter into a contract that ends 20th September 2016.  It won’t be a standard CDS if I insist my maturity date has to be 5th July 2016.

2.  Standard Premium Payment Dates

The same four dates per year are the dates on which premiums are paid (and none other).  As a result three months of premium are paid at every premium payment date.

Note that the use of IMM dates for CDS maturity and premium payment dates was already common when I wrote part 1 of the article.

3.  Standard Premiums

In North America, standard contracts ONLY have premiums of 100 or 500 basis points per annum (1% or 5%).  In Europe, Asia and elsewhere a wider range of premiums is traded on standard contracts, although this is still restricted.  How this works in practice will be explained in part 3.

4.  Payment of Full First Coupon

Standard contracts pay a ‘full first coupon’.  What this means is that if I buy a CDS midway between the standard premium payment dates I still have to pay a full three months’ worth of premium at the next premium date.  Note that ‘coupon’ here means ‘premium payment’.

For example, if I enter into a CDS with face value $100m on 5th July 2011 with a premium of 5% I will have to pay 3 months x 5% x 100m on the 20th September.  This is in spite of the fact that I have not been protected against default for the full three months.
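
As a quick check of the arithmetic (a quarter of 5% of $100m):

    using System;

    class FullFirstCoupon
    {
        static void Main()
        {
            double faceValue = 100e6;  // $100m
            double premium = 0.05;     // 500 bps per annum

            // A full quarter's premium is due on 20th September even though
            // protection only started on 5th July.
            Console.WriteLine("First coupon: {0:N0}", faceValue * premium * 0.25);  // 1,250,000
        }
    }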

Note that for the standard premiums and the payment of full first coupon to work we now have upfront fees for CDS.  Again this will be explained in more detail in part 3.

Impact of these Changes

What all this means is that we have fewer contract variations in the market.  The last item in particular means that a position in any given contract always pays the same amount at every premium date: we don’t need to make any adjustments for when the contract was traded.

In fact, in terms of the amount paid EVERY contract with the same premium (e.g. 500 bps) pays the same percentage of face value at a premium date, regardless of reference entity.  This clearly simplifies coupon processing.  It also allows us to more easily net positions in credit default swaps in our systems.

Conclusion

One of the major changes in the CDS market since part 1 was written is that contracts have been largely standardized.  More detail on this and other changes will be given in part 3.

March 2, 2008

Model-View-Presenter using the Smart Client Software Factory (Introduction To CAB/SCSF Part 25)

Introduction

Part 23 and part 24 of this series of articles described the Model-View-Presenter pattern.

This article explains how the Smart Client Software Factory supports this pattern by generating appropriate classes.

Guidance Automation Packages in the Smart Client Software Factory

We saw how we could use the Smart Client Application Guidance Automation Package to set up a Smart Client Application in part 18. We can also set up a Model-View-Presenter pattern in a Smart Client application using another of the Guidance Automation Packages.

This will only work in an existing Smart Client Application.

Running the Model-View-Presenter Package

To use the Guidance Automation Package we right-click in Solution Explorer on a project or folder where we want to run the package. It is intended that we do this in the Views folder in a business module. On the right-click menu we select ‘Smart Client Factory/Add View (with presenter)’. We get a configuration screen that lets us name our view, and also lets us put the classes that get created into a folder. For the purposes of this example we name our view ‘Test’, and check the checkbox that says we do want to create a folder for the view.

When we click ‘Finish’ we get three classes and a TestView folder as below:

[Image: mvpsolutionexplorer.jpg (Solution Explorer showing the TestView folder and the three generated classes)]

Classes Created

  1. TestView
    This is (obviously) our View class. It is intended that this contain the auto-generated code to display the View. As discussed in the previous articles any complex view logic will not go into this class, but will go into the Presenter.
  2. TestViewPresenter
    This is our Presenter class. As discussed in previous articles this should contain logic to deal with user events. It should also contain any complex view logic, and should directly update the View with the results of any view logic calculations. It has access to the View class via an interface.
  3. ITestView
    This is the interface that the View implements. The Presenter can only update the View through this interface.
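
To make the relationships concrete, below is a hedged sketch of such an interface. The DisplayMessage member is a hypothetical name used purely for illustration (it is reused in a later sketch); the generated ITestView is ours to fill in.

    // Sketch only: DisplayMessage is a hypothetical member added
    // to the generated ITestView interface for illustration
    public interface ITestView
    {
        void DisplayMessage(string message);
    }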

Diagram

In terms of the diagrams shown in parts 23 and 24 this looks as below. Remember that we may or may not have arrows between the Model and the View depending on whether we are using the active View or passive View version of Model-View-Presenter:

[Image: mvpdiagram2.jpg (Model-View-Presenter diagram for the generated classes)]

Where’s the Model?

The Guidance Automation package does not set up a Model class for us. As we have seen, the Model has no direct references to a View/Presenter pair (it raises events), and there may be multiple View/Presenter pairs for one Model. Further the Model would not usually be in the same folder, or even in the same component, as our View and Presenter.

For these reasons we are expected to set up our Model classes separately by hand.

Note that the Presenter (and the View as well if we are using the active View pattern) will have a direct reference to the Model. We will have to add these references manually.

Active and Passive View: a Quick Recap

Remember that in Model-View-Presenter the Presenter updates the View via an interface. We can set this up so only the Presenter is allowed to update the View. This is the ‘passive View’ pattern. We can also set this up so that the Presenter can update the View in complex cases, but the View can also update itself (in response to an event or user request) in simple cases. This is the ‘active View’ pattern.

Active and Passive View: Which Should We Use?

The pattern described in the SCSF documentation is the passive View: the documentation implies that all updates to the View should be done by the Presenter.

However there is nothing to stop us using the active View pattern with the classes generated by the Guidance Automation Package. We can add code to update the View wherever we like. In fact I would recommend using active View in simple cases: passive View should only be used where we would otherwise be putting too much logic into the View class.

Should We Use Model-View-Presenter for Every Screen? A Personal View

Let me also reiterate a point made in part 24. It’s easy to get obsessive about the use of patterns and use them everywhere without thinking. My personal opinion is that we should only use the full Model-View-Presenter pattern where we have a complex screen that will benefit from the separation of the View and Presenter classes. For very basic screens the pattern is really too complex to give us benefit. In simple cases I think it is fine to put event handling and screen update logic directly behind the screen.

Note that I don’t think this applies to the use of the Model. We should always separate out the business logic from our screens into separate classes (this is what Martin Fowler calls ‘Separated Presentation’). However, we frequently have screens that don’t show any business logic or business data, so we may not need a Model class either.

For example an About screen that just shows the system name and version won’t need separate View and Presenter classes, and probably won’t need anything in a Model class either.

Equally a screen that shows a read-only grid of currencies used in a trading system probably doesn’t need separate View and Presenter classes. In this case the currencies themselves should be in a Model class so that other screens can access them.

Implementation Details: What We’d Expect

If we examine the diagram above, we expect the Presenter to have a data member with type of our ITestView interface that it will use to access the View. We expect the View to implement the ITestView interface to allow this. We further expect the View to have a direct reference to the Presenter class (a data member), which it will use to invoke code relating to user events. We’d probably expect both the View and the Presenter classes to be created the first time the View is needed.

Implementation Details: the Base Presenter Class

The actual details of the implementation of the Presenter are a little unusual.

If we look at the code generated by the Guidance Automation Package we see that the TestViewPresenter above has been given its core functionality by inheriting from an abstract Presenter<TView> class. Remember that the generic ‘TView’ simply lets us provide a type whenever we use the Presenter class. Here we inherit from Presenter, and provide the type when we inherit:

    public partial class TestViewPresenter : Presenter<ITestView>
    {

This allows the base Presenter class to have a data member of type ITestView (which is what we expect), rather than it being directly in the TestViewPresenter class. Note that the base Presenter is in the Infrastructure.Interface project (which is one of the reasons why we have to use this pattern in a Smart Client application).

The base Presenter class exposes our ITestView data member publicly, contains a reference to our WorkItem, and has disposal code and a CloseView method. It also has virtual OnViewReady and OnViewSet methods. These get called when you’d expect from their names, and let us respond at the appropriate times by overriding the methods in our TestViewPresenter class.

All the above functionality in the base Presenter class means that the derived TestViewPresenter class is basically empty when it is created. It is up to us to put logic in there to handle user events and complex view logic.
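
For example, here is a minimal sketch of the sort of logic we might add to the generated class. OnSaveRequested is a hypothetical handler for a user event, and DisplayMessage is the hypothetical ITestView member sketched earlier:

    public partial class TestViewPresenter : Presenter<ITestView>
    {
        // OnViewReady is virtual on the base Presenter class and is
        // called once the View has loaded
        public override void OnViewReady()
        {
            base.OnViewReady();
            View.DisplayMessage("Ready");  // update the View via ITestView
        }

        // Hypothetical method called by the View when the user clicks Save
        public void OnSaveRequested()
        {
            // complex view logic and any Model updates go here
        }
    }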

The TestView class is a normal user control. It implements ITestView and contains a reference to the TestViewPresenter as we’d expect. It also calls OnViewReady as appropriate (in the OnLoad event of the user control). Other than this, TestView is also basically empty.

Conclusion

This article has shown us how to set up Model-View-Presenter classes using the Smart Client Software Factory, and discussed some issues surrounding it.

December 12, 2007

SCSF Business Modules: Start Up and the ControlledWorkItem (Introduction to CAB/SCSF Part 20)

Introduction

Part 19 of this series of articles discussed business modules in a Smart Client solution generated using the Smart Client Software Factory. This article continues that discussion.

The Load Method of a Business Module

As discussed in the previous article, a business module has a class called ‘Module’ which inherits from class ModuleInit. We saw in part 1 of this series of articles that this means the Load method in that class will get called at start up, provided the module has been added to the ProfileCatalog file.

The Load method of Module generated by the Smart Client Software Factory is as below:

        public override void Load()
        {
            base.Load();
 
            ControlledWorkItem<ModuleController> workItem = _rootWorkItem.WorkItems.AddNew<ControlledWorkItem<ModuleController>>();
            workItem.Controller.Run();
        }

As we can see, it’s creating a ControlledWorkItem class instance and adding it to the WorkItems collection of the root WorkItem. It’s then calling the Run method on the Controller property of this WorkItem.

ControlledWorkItem

ControlledWorkItem is a class that inherits directly from WorkItem, so a ControlledWorkItem is a WorkItem. It adds functionality to the WorkItem and, crucially, it is a sealed class (which means we can’t inherit from it).

The idea here is that each business module should have a ControlledWorkItem as a root for its functionality. This is what we are creating in the Load method. In the overall WorkItem hierarchy each business module ControlledWorkItem is immediately below the root WorkItem for the entire solution.

Inheriting WorkItem to add Functionality

The ControlledWorkItem has been created to clarify the situation with regard to adding code to WorkItems. When we start using the CAB we quickly find that we need our WorkItems to be extended in various ways. They are intended to control business use cases, after all. For example we may want specific services instantiated at start up and added to the Services collection. Doing this in the WorkItem itself may seem like a sensible thing to do. Clearly the main WorkItem class is a CAB framework class, but we can inherit from it to give it this additional behaviour.

The reference implementations of both the CAB and the SCSF do this: each WorkItem inherits from the base WorkItem class and extends it to provide the use case functionality. If you look at the CustomerWorkItem in the Bank Teller Reference Implementation you’ll see this.

Why Inheriting from WorkItem has been Deprecated

The difficulty with this is that our WorkItem class is acting as both a container for all the various WorkItem collections, as we have discussed before, AND as a place where all the code for a business use case goes.

This breaks the Single Responsibility principle, which is that every class should have just one responsibility in a system to avoid confusion.

As a result the Patterns and Practices team have decided it’s not ideal to have developers inherit from WorkItem and add functionality to the derived class. Instead a second class is created to contain the new code, and that class is associated with the WorkItem class by composition.

How ControlledWorkItem Addresses the Problem

This is what the ControlledWorkItem is doing. The ControlledWorkItem class itself inherits from WorkItem, but also has a member variable that references another class. The type of this class is generic (so the developer provides it), and the class is instantiated when the ControlledWorkItem is created.

So in the line of code below we are creating the ControlledWorkItem and adding it to the root WorkItem’s WorkItems collection. However we are also telling the ControlledWorkItem that its member class should be of type ModuleController, and that class will get instantiated and set up as the member variable.

ControlledWorkItem<ModuleController> workItem = _rootWorkItem.WorkItems.AddNew<ControlledWorkItem<ModuleController>>();

We are not expected to inherit from ControlledWorkItem itself. In fact we can’t because it is sealed: the Patterns and Practices team have done this deliberately to indicate that the pattern has changed. Instead we add our additional functionality for the WorkItem to the ModuleController class.
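
A rough sketch of the shape of ControlledWorkItem, assuming nothing beyond what is described above, might look as below. This is not the actual source (the real class builds the controller up with dependency injection), but it shows the composition: the controller is a member of the WorkItem rather than a base or derived class.

    // Sketch only: a sealed WorkItem that composes its use-case code
    public sealed class ControlledWorkItem<TController> : WorkItem
        where TController : new()
    {
        // Created when the ControlledWorkItem is created
        private readonly TController _controller = new TController();

        public TController Controller
        {
            get { return _controller; }
        }
    }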

ModuleController

We can access the ModuleController instance from the ControlledWorkItem using the Controller property. We can then call a Run method on that class. This is the standard pattern that is generated by the Guidance Automation Package: note that the final line in the Load method above is:

workItem.Controller.Run();

So we can add start up code for the WorkItem into the ModuleController class in the Run routine.

The SCSF gives us a default ModuleController whenever we set up a Module, as we have seen. This has a default Run method. The Run method doesn’t do anything by default, but it calls four empty methods that are set up in ModuleController to indicate the sort of things we should be doing:

    public class ModuleController : WorkItemController
    {
        public override void Run()
        {
            AddServices();
            ExtendMenu();
            ExtendToolStrip();
            AddViews();
        }
...

There are also comments in these routines to describe what we should be doing in them. To see this in more detail look in any of the ModuleController classes in the sample code.
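
As a hedged illustration, the sort of code we might put into one of these routines looks as below. ICustomerService and CustomerService are hypothetical names; the WorkItem property comes from the WorkItemController base class discussed in the next section:

        private void AddServices()
        {
            // Register a service for this module in its WorkItem's
            // Services collection, keyed by the interface type
            WorkItem.Services.AddNew<CustomerService, ICustomerService>();
        }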

WorkItemController Class

Note also above that our default ModuleController inherits from a class called WorkItemController, which is an abstract base class intended to be used just for these controllers. Inheriting from this ensures that we have a Run method in our derived class, as there is an abstract method of this name in the base class.

The base WorkItemController also gets a reference to the associated WorkItem using our usual dependency injection pattern. This can be accessed via the WorkItem property on the WorkItemController class.

Finally the WorkItemController class has two overloaded ShowViewInWorkspace methods, which can create and show a SmartPart in a named Workspace in the WorkItem.

Obviously we don’t have to make our ModuleController inherit from WorkItemController. However, if we don’t, this base class functionality will not be available.

Conclusion

This article has discussed the standard patterns generated by the Smart Client Software Factory for starting up business (and other) modules.

Part 21 of this series of articles will look briefly at foundational modules, and will also discuss the way names are handled in Smart Client Software Factory projects.

December 9, 2007

Business Modules and Interfaces in the SCSF Smart Client Solution (Introduction to CAB/SCSF Part 19)

Introduction

Part 18 gave a brief introduction to the Smart Client Software Factory. This article continues that discussion by looking at business modules, and also examining how the various modules in a Smart Client solution are expected to interact.

Recap on the Smart Client Application

In part 18 we saw that a ‘Guidance Automation’ package in the Smart Client Software Factory lets you create a base solution for a smart client program. It sets up four projects, three of which are infrastructure projects.

One of the projects is an empty ‘Infrastructure.Module’ project. Infrastructure.Module is a CAB module as described earlier in this series of articles: it isn’t directly referenced by the other projects in the solution, but can be used to write infrastructural code for the solution without any tight coupling to the rest of the solution. We’ll examine this in a little more detail below.

Business Modules

It isn’t intended that we put business logic into the Infrastructure projects discussed above. Instead we are meant to create ‘business modules’.

To create a business module we use another of the Guidance Automation packages: we right-click the solution in Solution Explorer, select Smart Client Factory/Add Business Module (C#), click ‘OK’ in the ‘Add New Project’ window and then click ‘Finish’ in the ‘Add Business Module’ window.

This gives us two new projects in the solution with default names Module1 and Module1.Interface as below:

[Image: scsfprojectmodule.jpg (Solution Explorer showing the Module1 and Module1.Interface projects)]

Once again here Module1 is a Composite Application Block module, and is not referenced by any other project in the solution. However, Module1.dll IS added to the ProfileCatalog (which is in Shell). This means that the Load method of a class inheriting ModuleInit in Module1 will get called by the CAB at start up, as described in part 1 of this series of articles. The class with the Load method in Module1 is called ‘Module’. We’ll look at what the Load method is doing in the next article in this series.

Note here that the Module and ModuleController classes are identical to those in Infrastructure.Module. Note also that there’s really no code at all in Module1.Interface: there are just some empty classes in a folder called Constants.

Business Module Interaction with the Rest of the Smart Client Project

As discussed in part 1 of this series, a ‘module’ is a standalone project to be used in a composite user interface. So our business module here is intended to be a slice of business functionality that can potentially be developed independently of the other modules in the application. Because the business module isn’t directly referenced by other modules, a separate development team could potentially work on it and change it. It can then in theory be plugged in to the containing framework without the need for code changes in the framework. The other projects’ libraries might not even need to be recompiled, since they don’t actually reference the business module directly.

Clearly in practice it’s likely that the business module will have to interact with the rest of the Smart Client solution on some level. There will be a need for:

  1. The business module to use the infrastructure components: for example it might need to put a toolstrip into the Shell form.
  2. Other components in the Smart Client solution to use some of the business module functionality. As a simple example we might have a business module that deals with customers and a back-end customer database. It might have screens to show customer data and allow updates. Another business module might want to display these screens in response to a request: an Orders module might allow a double-click on a customer name to show the customer.

We want to achieve the interaction described above in a way that’s as loosely-coupled as possible, so that we can change the system easily. To do this we make sure that all interaction is through the Interface projects.

We now examine each of these possible scenarios in more detail:

1. The Business Module Using Infrastructure Components

For this scenario in our example solution Module1 references Infrastructure.Interface directly. It is set up to do this by default when you add the business module to the solution. Note that Infrastructure.Interface is intended to (mainly) contain .NET interfaces: it is not meant to contain large amounts of code.

Note that Module1 does not reference Infrastructure.Module or Infrastructure.Library directly, nor should it under any circumstances. These projects may well be under the control of a separate development team from our business module team, and they may need to be updated independently of the business modules. So we reference the interface project, and that handles our interaction with the Infrastructure libraries.

This seems to be a concept that developers working on these projects have difficulty with: almost every member of my development team at work has added one of these libraries to a business module at some stage.

I think the confusion arises because it’s not necessarily obvious how we do this. If my module just references an interface how can I actually call any functionality using just the interface? The answer is that we are once again using the dependency inversion and dependency injection concepts described in part 3 and part 4 of this series of articles.

An example here may help.

Example

We’ll use the WorkspaceLocator service that the SCSF adds into the Infrastructure.Library component when we create a Smart Client solution. The WorkspaceLocator service lets you find the Workspace a SmartPart is being displayed in, although this isn’t relevant for this discussion: all we’re interested in is how to invoke the service from a business module.

There’s a class called WorkspaceLocatorService, in SmartClientDevelopmentSolution.Infrastructure.Library.Services, that actually does the work. There’s also an interface in Infrastructure.Interface as below:

namespace SmartClientDevelopmentSolution.Infrastructure.Interface.Services
{
    public interface IWorkspaceLocatorService
    {
        IWorkspace FindContainingWorkspace(WorkItem workItem, object smartPart);
    }
}

Note that Infrastructure.Library references Infrastructure.Interface and so WorkspaceLocatorService can implement this interface. Note also that our business module, Module1, also references Infrastructure.Interface but NOT Infrastructure.Library. So it can’t see the WorkspaceLocatorService class directly, and thus can’t call FindContainingWorkspace on it directly. So how do we use the service?

The answer is that this is the standard CAB dependency inversion pattern using WorkItem containers to access objects.

At start up the solution creates an instance of the WorkspaceLocator service and adds it into the Services collection of the root WorkItem, referencing it by the type of the interface:

RootWorkItem.Services.AddNew<WorkspaceLocatorService, IWorkspaceLocatorService>();

This actually happens in the new SmartClientApplication class mentioned in part 18, but all we really need to know is that the service will be available on the root WorkItem.

Now, we know we can get a reference to the root WorkItem in our new module by dependency injection in a class:

        private WorkItem _rootWorkItem;
 
        [InjectionConstructor]
        public Module([ServiceDependency] WorkItem rootWorkItem)
        {
            _rootWorkItem = rootWorkItem;
        }

Our module also knows about the IWorkspaceLocatorService interface, since it references Infrastructure.Interface. So it can retrieve the WorkspaceLocator service object from the root WorkItem using the interface, and can then call the FindContainingWorkspace method on that object:

            IWorkspaceLocatorService locator = _rootWorkItem.Services.Get<IWorkspaceLocatorService>();
            IWorkspace wks = locator.FindContainingWorkspace(_rootWorkItem, control);
            MessageBox.Show("Workspace located: " + wks.ToString());

In summary, as long as our module knows the interface to the functionality it needs, and knows how to retrieve an object that implements that interface from a WorkItem collection of some kind, it doesn’t need to have direct access to the underlying class to use the object. This was explained in more detail in earlier articles in this series.

2. Other Components Using the Business Module Functionality

For other components to use our business module functionality we are expected to work out what functionality our business module should expose to the rest of the solution. We should then define interfaces that allow access to that functionality and put them into our Module1.Interface component.

Other components in the solution can then reference Module1.Interface and call the functionality. Note that to allow them to do this we need to ensure that the correct objects are available in a WorkItem, as described above. Once again other components should NOT reference Module1. We can then change Module1 without impacting the other components.
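
As a hedged sketch of how this can work, assume Module1 deals with customers as in the example above. ICustomerBrowser and CustomerBrowser are hypothetical names:

    // In Module1.Interface: the contract that Module1 exposes
    public interface ICustomerBrowser
    {
        void ShowCustomer(string customerId);
    }

    // In Module1 (e.g. at start up): register the implementation against
    // the interface in a WorkItem other modules can see, such as the root
    _rootWorkItem.Services.AddNew<CustomerBrowser, ICustomerBrowser>();

    // In another module, which references only Module1.Interface:
    ICustomerBrowser browser = _rootWorkItem.Services.Get<ICustomerBrowser>();
    browser.ShowCustomer("CUST-001");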

We may of course need to change the interfaces. In this case it may be sensible to retain the old version of the interface component, so that not all the other components have to upgrade immediately, and to add a new version containing the changed interfaces alongside it. The old interface component can then be retired when everyone has upgraded.

Conclusion

This article has examined modules in a Smart Client solution, and discussed how they should interact.

Part 20 of this series of articles will look in a little more detail at some of the new code structures in modules in a Smart Client solution.
