Beginner’s Guide to Techniques for Refreshing Web Pages: Ajax, Comet, HTML5

Introduction

This article briefly discusses the technologies used in modern browsers to display web pages, and then looks in a little more detail at the user experience on those pages: in particular, how we can get part of a web page to refresh automatically when data changes on a web server.

Browsers and HTML

I’m sure anyone who’s reading this page is aware that the web is based on a request and response process that returns web pages of data.  If I click on a link my browser makes a request for the web page specified in the link, the request gets routed to the appropriate server, and the server responds with a page of HTML which my browser displays.

HTML (Hypertext Markup Language), of course, is a simple markup language that tells a browser where to put text and images on a web page using tags (e.g. <header>).  The request format is a text URL (Uniform Resource Locator) of the kind you see all the time in your browser’s address bar.  Furthermore, the returned text can contain additional links that the browser will show underlined and that I can click on.

Anyone who uses the internet understands this, but the success of the web is at least in part due to the simplicity of that model.  The HTML is just text with a few special tags in angle brackets, and all a browser has to do is know how to send a request, handle the response, and draw a screen using the HTML that’s returned.  Similarly, all a web server has to do is know how to receive a request, find the right HTML text, and send it back.  Often web servers will simply store the HTML in text files on their hard drive, and load and send the appropriate one depending on the URL requested.

At root it’s unbelievably simple; just look what it’s turned into.

Other Technologies Used In Web Browsers

Of course modern browsers aren’t as simple as described above and there are a number of other technologies that they understand and developers can use.

Firstly, developers want to write code, so there’s also a programming language embedded into every modern browser.  This is Javascript.

Javascript allows programmers to write little bits of code that can run when events happen in the browser.  The Javascript can manipulate what’s displayed in the browser programmatically, or can perform other actions.

For the Javascript to change what’s displayed it needs to manipulate the HTML.  In principle this could be done by simply editing the raw text of the page.  In practice, however, the browser exposes a programmatic representation of the web page that Javascript can use to manipulate individual elements within it.  This representation is the Document Object Model, or DOM.

Another baseline technology for what gets displayed to the client is Cascading Style Sheets (CSS).  These allow a common look and feel to be applied to a group of web pages without the need for detailed coding in each page.

Drawbacks of the Basic HTML Request/Response Page-Based Model

HTML + Javascript + CSS allows us to create quite sophisticated web pages.  However, there’s one big drawback with the model as described above: to display new data we have to click on a link to request a whole new page and wait whilst it loads.

For a more sophisticated user experience there are a few things we might like to have:

  1. The ability to refresh part of a web page without reloading the entire page.  Initially this could be initiated by the user clicking a button, but we want just the relevant data to update, not the entire page.
  2. The ability to do this refresh whilst allowing the user to continue to interact with the rest of the page.  That is, the request shouldn’t block the user, it should be asynchronous.
  3. The ability to update the page when data changes on the server without the user having to refresh in any way.

1.  Refreshing Part of a Web Page

The first problem that developers tried to solve was updating part of a web page in place without reloading the entire page.  There are several ways of doing this.

IFrames

One simple approach that predates Ajax is to use IFrames.  These are HTML elements within a page that can issue their own requests to a web site and render the response independently of the rest of the page.  An IFrame has a src attribute that can be set to a URL.  If you set the src to a different URL, or reset it to the same one (say on a button click), the frame fetches and displays the new data without a full page reload.
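As a rough sketch of the idea (the element id and URL here are made up for illustration), a button’s click handler only needs to reset the frame’s src:

    // Minimal sketch, assuming the page contains
    // <iframe id="dataFrame" src="/latest-data.html"></iframe>
    function refreshFrame() {
        var frame = document.getElementById("dataFrame");
        // Setting src (even to the same URL) makes the frame issue a new
        // request and redraw its own contents; the rest of the page is untouched.
        frame.src = "/latest-data.html";
    }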

Many developers don’t like IFrames.  Search engines can get confused by them.  They may show scrollbars if your content doesn’t fit correctly.  Worse, if your user has scrolled to the bottom of a page and you then load a new, shorter page into the same frame, they may find themselves off the bottom of it.  In addition, because of the browser’s same-origin restrictions, script on the main page usually cannot read or manipulate content loaded into an IFrame from a different site.  All of this means people have looked for better solutions.

Script Injection

Another approach to refreshing part of a web page is client-side script injection.  This takes advantage of the fact that the Javascript for a web page can be retrieved from a server via the src attribute of a script tag.

The basic approach is the same as for IFrames: we set or reset the src of a script tag, and the browser retrieves the script from that URL and executes it.  If the server sends back valid Javascript that updates part of our web page, or calls a function that does, then we don’t have to refresh the entire page.
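A minimal sketch of the idea is below.  The URL and element id are invented for illustration, and the server at that URL is assumed to return a small piece of Javascript that updates the page:

    // Minimal sketch: inject a script tag whose src points at the server.
    function refreshViaScript() {
        var script = document.createElement("script");
        // The server at this (made-up) URL is assumed to return Javascript such as:
        //     document.getElementById("price").innerHTML = "101.25";
        script.src = "/latest-price.js";
        document.getElementsByTagName("head")[0].appendChild(script);
    }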

One advantage of this approach is that script tags can issue requests to any URL, not just the site the page came from.  One disadvantage is that executing whatever script another server chooses to send back is an obvious security risk, so it needs to be used with care.

JSONP

JSONP (‘JSON with Padding’) is just a way of using client-side script injection across domains to get data from a different website: we request a script from the other server, naming one of our own Javascript functions in the URL, and the server returns its data wrapped in a call to that function, which executes immediately and makes use of the payload.
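A hedged sketch, with an invented URL, callback name and payload:

    // We define the callback; the other site wraps its data in a call to it.
    function handlePrice(data) {
        document.getElementById("price").innerHTML = data.price;
    }

    // The (made-up) server is assumed to respond with: handlePrice({"price": 101.25});
    var script = document.createElement("script");
    script.src = "http://other-site.example.com/price?callback=handlePrice";
    document.getElementsByTagName("head")[0].appendChild(script);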

2.  Refreshing Part of a Web Page Asynchronously

Ajax (Asynchronous Javascript and XML) is probably the primary technology for this.  Ajax is actually a label applied to a way of using many of the technologies described above to allow web pages to be displayed and then be updated asynchronously without reloading the entire page.

The main distinguishing feature of Ajax is that it uses a different request/response mechanism, called XMLHttpRequest.  When the browser makes a request using XMLHttpRequest it supplies a Javascript callback function that will be invoked when the response arrives.  This callback has access to the data sent back from the server.

The original call to the server will not block and will not reload the page.  The user can carry on interacting with the page as usual, even if the call to the server takes some time.

It is up to the callback function to make whatever changes it needs to make to the web page using the usual Javascript techniques described above.  This will typically involve updating just a part of the screen.
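As a concrete (if simplified) sketch, with an invented URL and element id, the classic XMLHttpRequest pattern looks something like this:

    // Minimal Ajax sketch: fetch new data and update one element in place.
    function refreshPrice() {
        var xhr = new XMLHttpRequest();
        xhr.onreadystatechange = function () {
            // readyState 4 means the response has arrived.
            if (xhr.readyState === 4 && xhr.status === 200) {
                document.getElementById("price").innerHTML = xhr.responseText;
            }
        };
        xhr.open("GET", "/latest-price", true); // true = asynchronous
        xhr.send();
    }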

One thing to note here is that the data returned is just text.  It doesn’t have to be XML, in spite of the names (XMLHttpRequest, Ajax).

3.  Updating a Page Automatically when Data Changes on the Server

Ajax as described so far updates a page in place, but only in response to a request from the web page.  This means that the user has to click a button or something similar for the page to update.

Obviously there are situations where data is changing and we would like it to update on our web page without the need for the user to manually refresh.

There are quite a few ways of doing this, some of them direct extensions to the Ajax model described above:

Polling

Javascript allows us to run code in the browser at set intervals.  So a very simple approach is to automatically request a refresh of the part of the screen we are interested in at regular intervals.  We can do this using the Ajax techniques above, so that the rest of the screen remains responsive.
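For example, assuming the hypothetical refreshPrice function from the Ajax sketch above, polling every five seconds is a one-liner:

    // Re-run the Ajax refresh every 5000 milliseconds.
    setInterval(refreshPrice, 5000);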

The drawback to this is we may make requests when no data has changed, putting unnecessary load on our servers.  Also our data on the client may well be out of date at any given time if we are between polling requests.

We really want a way for our server to send data only when it’s changed, and at the moment it has changed.

Long Polling

Another approach is long polling.  Here the browser fires off a request with a long timeout and sets up a handler for the result using Ajax as before.  The server, however, doesn’t respond until it has data that has changed, and then it sends that data in response to the original request.  The browser handles the response and immediately issues another long-running request to pick up future updates.
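A simplified sketch of the browser side, again with an invented URL and with error handling omitted:

    // Minimal long-polling sketch.
    function longPoll() {
        var xhr = new XMLHttpRequest();
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4) {
                if (xhr.status === 200) {
                    // The server has finally replied because its data changed.
                    document.getElementById("price").innerHTML = xhr.responseText;
                }
                // Immediately issue the next long-running request.
                longPoll();
            }
        };
        xhr.open("GET", "/price-updates", true);
        xhr.send();
    }
    longPoll();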

The disadvantage of this approach is that the server has to keep the connection (a network socket) open until it has data.  In general it will have as many open connections as it has clients waiting for data.  This obviously puts load on the server, and the number of sockets the server can keep open becomes a limiting factor.  It’s also clearly a more complex solution to implement than normal (short) polling.

Streaming

In streaming, the client makes a request and the server responds but deliberately keeps the connection open, sending further pieces of data down it as and when they become available.  The server may eventually time out the connection, or may keep it open indefinitely.  If the connection times out the client has to make another request to carry on receiving updates.  So this approach is like long polling, but with the client needing to make fewer requests.
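As a sketch only: in browsers that let Javascript read the partial response (not all do), one way to consume such a stream is to watch responseText grow.  The URL is invented for illustration:

    // Minimal streaming sketch: read the growing response as it arrives.
    var received = 0;
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 3 || xhr.readyState === 4) {
            // responseText contains everything received so far; take the new part.
            var chunk = xhr.responseText.substring(received);
            received = xhr.responseText.length;
            if (chunk.length > 0) {
                document.getElementById("price").innerHTML = chunk;
            }
        }
    };
    xhr.open("GET", "/price-stream", true);
    xhr.send();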

One drawback of this approach is that many proxy servers buffer HTTP responses until they are complete: that is, they won’t pass the data on until they have received the whole response.  This means the client won’t get timely updates.  Another obvious drawback is that this is a fairly complex way of keeping data up to date.

With all of these approaches the callbacks from the server tend to tie up one HTTP connection.  As a result many approaches to solving the problem use (at least) two connections: one for polling or streaming to update the data in place, and one for regular requests from the client to the server.

A number of commercial frameworks have been built using these techniques.

Comet

Comet is a name that’s been applied to the techniques described above, in which a long-lasting HTTP connection is used to update a web page in place automatically when data changes on the server.

HTML5 Web Sockets

HTML5 web sockets are the new way to do bidirectional communication between a web page and a server.  After an initial handshake they don’t use the HTTP request/response model at all, but instead keep a single dedicated channel open between client and server over which either side can send messages at any time.  This is fast, and the messages carry very little redundant header information, unlike conventional HTTP requests.
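A minimal sketch of the client side, with an invented ws:// URL and message format:

    // Open a dedicated, bidirectional connection to the server.
    var socket = new WebSocket("ws://example.com/prices");

    socket.onopen = function () {
        // The client can send messages to the server over the same connection.
        socket.send("subscribe:IBM");
    };

    socket.onmessage = function (event) {
        // The server can push a message at any time; update the page when it does.
        document.getElementById("price").innerHTML = event.data;
    };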

The main drawback of this new technology at the time of writing is that many browsers do not support it.  For example, it doesn’t work in the latest version of Internet Explorer, IE9, although it will work in IE10.


A Beginner’s Guide To Credit Default Swaps (Part 4)

Introduction

This post continues the discussion of changes in the credit default swap (CDS) market since 2007.  Part 2 and part 3 of this series of articles discussed changes in the mechanics of CDS trading.  This part will discuss changes around how credit events are handled, and future changes in the market.

Changes in the CDS Market re Credit Events Since 2007

  • Determination committees (DCs) have been set up to work out if a credit event has occurred, and to oversee various aspects of dealing with a credit event for the market.  A ‘determination committee’ is simply a group of CDS traders of various kinds, although overseen by ISDA (the standards body). The parties to one of the new standard contracts agree to be bound by the committee’s decisions.
  • Auctions are now conducted to determine the price to cash-settle credit default swaps when there is a credit event.  For this we need to determine the current price of the bonds in default.  To do this we get a group of dealers to quote prices at which they are prepared to trade the bonds (and may have to), and then calculate the price via an averaging process.  This can get quite complicated.  The determination committees oversee these auctions.
  • Classes of events that lead to credit events have been simplified.  In particular whether ‘restructuring’ is a credit event has been standardized (although the standards are different in North America, Asia and Europe).  ‘Restructuring’ means such things as changing the maturity of a bond, or changing its currency.
  • There is now a ‘lookback period’ for credit events regardless of when a CDS is traded.  What this means is that credit events that have happened in the past 60 days (only) can trigger a contract payout.  This simplifies things because the same CDS traded on different days is now treated identically in this regard.

Terminology and a Little History

The changes described so far in this article were introduced in 2009.  For North America, which went first, this was known as the ‘CDS Big Bang’.  The standard contract terms thus introduced were known as the ‘Standard North American CDS Contract’ or ‘SNAC’ (pronounced ‘snack’).  The later changes in Europe were known as the ‘CDS Small Bang’.  The final standardization of Asian contracts occurred later still.

Much more detail on all of this can be found in the excellent Markit papers on these changes.

Future Changes

Further standardization in the credit default swap market will occur as a result of the Dodd-Frank Act in the USA. This mandates that standard swaps (such as standard CDS) be traded through a ‘swap execution facility’ (SEF). It further mandates that any such trades be cleared through a central clearing house.  Europe is likely to impose a similar regulatory regime, but is behind the United States.  More detail on SEFs and clearing houses is below.

The primary aims of these changes are:

1/ Greater transparency of trading. Currently many swaps are traded over-the-counter with no disclosure other than between the two counterparties. This makes it difficult to assess the size of the market, or the effects of a default.

2/ Reduced risk in the market overall from the bankruptcy of one participant.

The exact details of these changes are still being worked on by the regulators.

Swap Execution Facilities (SEFs)

At the time of writing it’s not even clear exactly what a ‘SEF’ is.  The Act defines a SEF as a “facility, trading system or platform in which multiple participants have the ability to execute or trade Swaps by accepting bids and offers made by other participants that are open to multiple participants”. That is, a SEF is a place where any participant can see and trade on current prices. There are some additional requirements on SEFs relating to publishing price and volume data, and to preventing market abuses.

In many ways a SEF will be very similar to an existing exchange. As mentioned the exact details are still being worked on.

A number of the existing electronic platforms for the trading of CDS are likely to become SEFs.

Clearing Houses

Central clearing houses are another mechanism for reducing risk in a market.

When a trade is done both parties to the trade can agree that it will be cleared through a clearing house.  This means that the clearing house becomes the counterparty to both sides of the trade: rather than bank A buying from bank B, bank A buys from the clearing house, and bank B sells to the clearing house.

Obviously the clearing house has no market risk from the trades themselves, since it stands in the middle of two offsetting positions.  It is, however, exposed to the risk that either bank A or bank B goes bankrupt and thus can’t pay its obligations from the trade.  To mitigate this the clearing house will demand cash or other assets from both banks A and B.  This is known as ‘margin’.

The advantage of this arrangement is that the clearing house can guarantee that bank A will be unaffected even if bank B goes bankrupt.  The only counterparty risk for bank A is that the clearing house itself goes bankrupt.  This is unlikely since the clearing house has no market risk, is well capitalized, and demands margin for all transactions.

Clearing houses and exchanges are often linked (and may be the same entity), but they are distinct concepts: the exchange is the place where you go to get prices and trade, the clearing house deals with the settlement of the trade. Usually clearing houses only have a restricted number of ‘members’ who are allowed to clear trades. Anyone else wanting clearing services has to get them indirectly through one of these members.

At the time of writing there are already a few central clearing houses for credit default swaps in operation, and more are on the way.

Conclusion

Since 2007 contracts for credit default swaps have been standardized.  This has simplified the way in which the market works overall: it’s reduced the scope for difficulties when a credit event happens, simplified the processing of premium payments, and allowed similar CDS contracts to be netted together more easily.  At the same time it has made understanding the mechanics of the market more difficult.

Further changes are in the pipeline for the CDS market to use ‘swap execution facilities’ and clearing houses.

A Beginner’s Guide to Credit Default Swaps (Part 3)

Introduction

Part 1 of this series of articles described the basic mechanics of a credit default swap.

Part 2 started to describe some of the changes in the market since part 1 was written.  This part will continue that description by describing the upfront fee that is now paid on a standard CDS contract, and the impact of the changes on how CDS are quoted in the market.

Standard Premiums mean there is a Fee

Part 2 discussed how CDS contracts have been standardized.  One of the ways in which they have been standardized is that there are now standard premiums.

Now consider the case where I buy protection on a five-year CDS.  I enter into a standard contract with a premium of 500 basis points (5%).  It may be that the premium I would have paid under the old nonstandard contract for the same dates and terms would have been 450 basis points.  However, now I’m paying 500 basis points.

Clearly I need to be compensated for the 50 bps difference or I won’t want to enter into the trade under the new terms.

As a result an upfront fee is paid to me when the contract is started.  This represents the 50 basis points difference over the life of the trade, so that I am paying the same amount overall as under the old contract.

Note that in this case I (the protection buyer) am receiving the payment, but it could easily be that I pay this upfront fee (if, for example, the nonstandard contract would have traded at 550 bps).

Upfront Fee Calculation

The calculation of the fee from the ‘old’ premium (spread) is not trivial.  It takes into account discounting, and also the possibility that the reference entity will default, which would mean the premium would not be paid for the full life of the trade.  However, this calculation too has been standardized by the contracts body (ISDA).  There is a standard model that does it for us.
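As a very rough illustration only, ignore discounting and default risk, and assume (purely for the example) a notional of $10 million on the five-year trade above.  The value of the 50 basis point difference is then approximately:

    \text{upfront fee} \approx 0.50\% \times 5\ \text{years} \times \$10{,}000{,}000 = \$250{,}000

Allowing for discounting, and for the chance that the premium stream stops early because of a default, the ISDA standard model produces a somewhat smaller figure than this.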

The Full First Coupon means there is a Fee

In the example in part 2 I discussed how I might pay for a full three months’ protection at the first premium payment date for a CDS trade, even though I hadn’t had protection for three months.

Once again I need compensation for this or I will prefer to enter into the old contract.  So once again there is a fee paid to me when I enter into the trade.

This is known as an ‘accrual payment’ because of the similarity to accrued interest payment for bonds.  Here the calculation is simple: it’s the premium rate applied to the face value of the trade for the period from the last premium payment date to the trade date.

That is, it’s the amount I’ll be paying for protection that I haven’t received as part of the first premium payment.  Note no discounting is applied to this.
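As a sketch of the calculation (CDS premiums conventionally accrue on an actual/360 day count; the exact conventions are set out in the standard contract terms):

    \text{accrual payment} = \text{notional} \times \text{premium rate} \times \frac{\text{days from last premium date to trade date}}{360}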

Upfront Fee/Accrual Payment

So in summary the new contract standardization means that a payment is now always made when a standard CDS contract is traded.

Part of the payment is the upfront fee that compensates for the difference between the standard premium (100 or 500 bps in North America) and the actual premium for the trade.  This can be in either direction (payment from protection buyer to seller or vice versa).  Part of the payment is the accrual payment made to the protection buyer to compensate them for the fact that they have to make a full first coupon payment.

How CDS are Quoted in the Market

Prior to these changes CDS were traded by simply quoting the premium that would be paid throughout the life of the trade.

With the contract standardization the premium paid through the life of the trade clearly no longer varies with market conditions (it will always be 100 or 500 bps in North America, for example), so quoting it makes little sense.

Instead the dealers will quote one of:

a) Points Upfront
‘Points upfront’ or just ‘points’ refer to the upfront fee as a percentage of the notional.  For example, a CDS might be quoted as 3 ‘points upfront’ to buy protection.  This means the upfront fee (excluding the accrual payment) is 3% of the notional.  ‘Points upfront’ have a sign: if the points are quoted as a negative then the protection buyer is paid the upfront fee by the protection seller.  If the points are positive it’s the other way around.

b)  Price
With price we quote ‘like a bond’: we take the price away from 100 to get points.  That is, points = 100 – price.  So in the example above, where a CDS is quoted as 3 points to buy protection, the price will be 97.  The protection buyer still pays the 3% as an upfront fee of course.

c)  Spread
Dealers are so used to quoting spread that they have carried on doing so in some markets, even for standard contracts that pay a standard premium.  That is, they still quote the periodic premium amount you would have been paying if you had bought prior to the standardization.  As already mentioned, there is a standard model for turning this number into the upfront fee that actually needs to be paid.

Conclusion

This part concludes the discussion of the changes in the mechanics of CDS trading since 2007.  As you can see, in many ways the standardization of the CDS market has actually made it more complicated.  The things to remember are that premiums, premium and maturity dates, and the amounts paid at premium dates have all been standardized in a standard contract.  This has meant there is an upfront fee for all standard CDS, and that they are quoted differently in the market from before.  It has also meant that CDS positions can be more easily netted against each other, and that the mechanics of calculating and settling premiums have been simplified.

Part 4 of this series will examine some of the other changes since 2007, and changes that are coming.

A Beginner’s Guide to Credit Default Swaps (Part 2)

Introduction

Part 1 of the ‘Beginner’s Guide to Credit Default Swaps’ was written in 2007. Since that time we have seen what many are calling the greatest financial crisis since the Great Depression, and a global recession.

Rightly or wrongly, some of the blame for the crisis has been attributed to credit derivatives and speculation in them.  This has led to calls for a more transparent and better regulated credit default swap (CDS) market. Furthermore the CDS market has grown very quickly, and by 2009 it had become clear that some simple changes to operational procedures would benefit everyone.

As a result many changes in the market have already been implemented, and more are on the way. This article will discuss these changes.  It will focus primarily on how the mechanics of trading a credit default swap have changed, rather than the history of how we got here or why these changes have been made. I’ll also briefly discuss the further changes that are on the way.

Overview of the Changes

The first thing to note is that nothing has fundamentally changed from the description of a credit default swap in part 1. A credit default swap is still a contract that provides a kind of insurance against a company defaulting on its bonds. If you have read and understood part one then you should understand how a credit default swap works.

The main change that has happened is that credit default swap contracts have been standardized. This standardization falls into three broad categories:

  1. Changes to the premium, premium and maturity dates, and premium payments that simplify the mechanics of CDS trading.
  2. Changes to the processes around identifying whether a credit event has occurred.
  3. Changes to the processes around what happens when a credit event has occurred.

Items 2 and 3 are extremely important, and have removed many of the problems that were discussed in part 1 relating to credit events. However, they don’t affect the way credit default swaps are traded as fundamentally as item 1, and are arguably more boring, so we’ll start with item 1.

The Non-Standard Nature of Credit Default Swaps Previously

If I buy 100 IBM shares and then buy 100 more I know that I have a position of 200 IBM shares.  I can go to a broker and sell 200 IBM shares to get rid of (close out) this position.

One of the problems with credit default swaps (CDS) as described in part 1 of this series of articles is that you couldn’t do this.  Every CDS trade was different, and it was consequently difficult to close out positions.

Using the description in part 1, consider the case where I have some senior IBM bonds.  I have bought protection against IBM default using a five-year CDS.  Now I decide to sell the bonds and want to close out my CDS.  It’s difficult to do this by selling a five-year CDS as described previously.  Even if I can get the bonds being covered, the definition of default, the maturity date and all the premium payment dates to match exactly, it’s likely that the premiums to be paid will be different from those on the original CDS.  This means a calculation has to be done for both trades separately at each premium payment date.

Standardization

To address this issue a standard contract has been introduced that has:

1.  Standard Maturity Dates

There are four dates per year, the ‘IMM dates’ that can be the maturity date of a standard contract: 20th March, 20th June, 20th September, and 20th December.  This means that if today is 5th July 2011 and I want to trade a standard five-year CDS I will normally enter into a contract that ends 20th September 2016.  It won’t be a standard CDS if I insist my maturity date has to be 5th July 2016.

2.  Standard Premium Payment Dates

The same four dates per year are the dates on which premiums are paid (and none other).  As a result three months of premium are paid at every premium payment date.

Note that the use of IMM dates for CDS maturity and premium payment dates was already common when I wrote part 1 of the article.

3.  Standard Premiums

In North America, standard contracts ONLY have premiums of 100 or 500 basis points per annum (1% or 5%).  In Europe, Asia and elsewhere a wider range of premiums is traded on standard contracts, although this is still restricted.  How this works in practice will be explained in part 3.

4.  Payment of Full First Coupon

Standard contracts pay a ‘full first coupon’.  What this means is that if I buy a CDS midway between the standard premium payment dates I still have to pay a full three months’ worth of premium at the next premium date.  Note that ‘coupon’ here means ‘premium payment’.

For example, if I enter into a CDS with face value $100m on 5th July 2011 with a premium of 5% I will have to pay 3 months x 5% x 100m on the 20th September.  This is in spite of the fact that I have not been protected against default for the full three months.
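In figures, treating the three months as a quarter of a year (the exact amount depends on the day count convention used), that first premium payment is:

    0.25 \times 5\% \times \$100{,}000{,}000 = \$1{,}250{,}000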

Note that for the standard premiums and the payment of full first coupon to work we now have upfront fees for CDS.  Again this will be explained in more detail in part 3.

Impact of these Changes

What all this means is that we have fewer contract variations in the market.  The last item in particular means that a position in any given contract always pays the same amount at every premium date: we don’t need to make any adjustments for when the contract was traded.

In fact, in terms of the amount paid EVERY contract with the same premium (e.g. 500 bps) pays the same percentage of face value at a premium date, regardless of reference entity.  This clearly simplifies coupon processing.  It also allows us to more easily net positions in credit default swaps in our systems.

Conclusion

One of the major changes in the CDS market since part 1 was written is that contracts have been largely standardized.  More detail on this and other changes will be given in part 3.

Table of Contents: Introduction to using Financial Products Markup Language (FpML) with Microsoft .NET Tools

Part 1 Introduction

An overview of how we can use Visual Studio to examine the FpML XSDs. Shows how to create a Visual Studio project containing the FpML schemas, and how to use that to navigate through them. Also shows how to validate the XML examples that are provided in the FpML download, both using Visual Studio and in C# code.

Part 2 How FpML Validates A Basic Trade Document

Looks at the structure of an FpML trade document in some detail. Explains how the FpML schemas fit together to validate such a document, and examines the use of base and extension types, and of substitution groups, in the XSDs.

Part 3 Generating .NET Classes From The FpML XSD Schema

Shows how to generate .NET classes in C# from the XML schema documents using xsd.exe.  Explains how to load data into these classes from an FpML document using C# code, and to save the document back out again from the classes.

Part 4 Problems With Using xsd.exe To Generate .NET Classes From The FpML XSD Schema

Goes through some of the problems with the C# code generated in part 3, and discusses how to fix them.

Problems with Using xsd.exe to Generate .NET Classes from the FpML XSD Schema (Introduction to using FpML with .NET Tools Part 4)

Introduction

Part 3 of this series of articles showed how we can generate .NET classes from the FpML XSD schema using xsd.exe. It showed how we can then use standard .NET serialization syntax to populate the classes from FpML documents, and vice versa.

However, as mentioned in part 3, xsd.exe generates buggy code when used with the FpML XSDs. This article will go through some of the problems with this code and describe how to fix them.

Corrected .NET Classes

Corrected generated classes, which are the end result of the work in this article, are available. This code has been corrected such that it appears to work in most circumstances. However we cannot be certain that it is free of all bugs.

The corrected code is based on FpML 4.2. The main aim of this article is to give you a starting point if you are trying to generate .NET code from other versions of the specification.

Please note also that this article was written using the xsd.exe supplied with Visual Studio 2005 Service Pack 1. Different versions of Visual Studio, or Visual Studio 2005 without the service pack applied, may give different results. See the discussion in the Comments for this article for more details.

Generating the Code

Part 3 of this series of articles described how to use xsd.exe to generate C# code from the FpML schemas. This gave us a file called fpml-main-4-2_xmldsig-core-schema.cs, which contains a Document class which should be the root class for our serialization. Unfortunately if we attempt to create an XmlSerializer object using this code we get exceptions.

Problem 1: Substitution Groups and Extension

The most fundamental problem with our generated code is that xsd.exe has got confused about the substitution groups and associated extension that were discussed in some detail in part 2 of this series of articles.

In particular it has not decorated the Product property of our Trade class with the attributes needed to allow it to deal with all the possible products correctly.

In part 2 we saw that the product element in the FpML is replaced with various individual product types using the substitutionGroup syntax. The product element is a sub-element of the trade element in the XML. As a result in our C# code we have a Trade class which contains a data member of type Product and a public property that gets and sets this. Excerpts from our generated code are as below:

[System.CodeDom.Compiler.GeneratedCodeAttribute("xsd", "2.0.50727.42")]
[System.SerializableAttribute()]
[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.ComponentModel.DesignerCategoryAttribute("code")]
[System.Xml.Serialization.XmlTypeAttribute(Namespace="http://www.fpml.org/2005/FpML-4-2")]
public partial class Trade {
 
    private Product productField;
 
    public Product product {
        get {
            return this.productField;
        }
        set {
            this.productField = value;
        }
    }

The problem with this code is that, as we saw in part 2, we need to be able to put a Swap object (for example) in to our Product data member. We have a Swap class in our code, and this inherits Product, as we’d expect. So we can store a Swap object in our Product data member.

However, .NET doesn’t know how to serialize or deserialize a swap element in the XML into our product property without being told.  In particular, if the deserializer finds an element called ‘swap’ where it’s expecting ‘product’ it doesn’t know that it should deserialize into a Swap object.  As we saw in part 2, in the example document ird_ex01_vanilla_swap.xml (https://www.dropbox.com/s/vrm3rebp99vruot/ird_ex01_vanilla_swap.xml?dl=0) there is a ‘swap’ element where you might expect a ‘product’ element.

This is very easily fixed in this case. We simply decorate the Product property with an XmlElementAttribute that tells the serializer what to do:

    [System.Xml.Serialization.XmlElementAttribute("swap", typeof(Swap))]
    public Product product {
        get {
            return this.productField;
        }
        set {
            this.productField = value;
        }
    }

This does solve the problem for swap.  However, there are multiple other product types that need an XmlElementAttribute added to the Product property to get the serialization to work (there are over twenty in fact).  Also it turns out that this problem isn’t limited to Product.  There are other elements that use substitution groups and extension in this way and have the same problem.

At first glance it appears there is no easy solution to this: we are going to have to go through all the possible places where there might be errors and correct the code by hand. However, there’s a much easier solution.

Problem 1 Solution

For simple examples of this problem xsd.exe will correctly generate the required XmlElementAttribute. It’s not immediately obvious why it fails to do so with the full FpML schemas.

It turns out that the reason xsd.exe gets this wrong is because the FpML schemas are spread across multiple files. If we create one big xsd file containing all of the FpML schema files and then run xsd.exe on this the problem goes away. I’ll leave you to draw your own conclusions about the quality of the xsd.exe code.

So to fix the problem we can cut and paste all of the FpML xsd files into one file, removing the include statements that become redundant. An example of this is available. We then use xsd.exe as described in part 3 to create our C# classes.

Problem 2: RoutingIds

If we fix problem 1 as above, we still get an error when we try to create our XmlSerializer object.  The error message says ‘Cannot convert type ‘RoutingId[]’ to ‘RoutingId’’.

This exception arises because xsd.exe has got an XmlArrayItemAttribute wrong in class RoutingIdsAndExplicitDetails. The generated code for the routingIds property in this class is as below:

    [System.Xml.Serialization.XmlArrayItemAttribute("routingId", typeof(RoutingId), IsNullable=false)]
    public RoutingId[][] routingIds {
        get {
            return this.routingIdsField;
        }
        set {
            this.routingIdsField = value;
        }
    }

The XmlArrayItemAttribute says that the property relates to an array of type RoutingId. However, the property (correctly) is of type RoutingId[][] which is an array of arrays of type RoutingId. So the XmlArrayItemAttribute should be changed as below:

    [System.Xml.Serialization.XmlArrayItemAttribute("routingId", typeof(RoutingId[]), IsNullable=false)]
    public RoutingId[][] routingIds {
        get {
            return this.routingIdsField;
        }
        set {
            this.routingIdsField = value;
        }
    }

Testing the Generated Classes

The two fixes above appear to make the generated classes work correctly.  With these changes we can deserialize our ird_ex01_vanilla_swap.xml FpML document into the classes.  We can then serialize it back into XML, and we end up with the same document we started with.  We saw this in one of our code examples from part 3.  However, it’s not easy to test that this code works in all cases.  One approach is to take all the sample FpML files provided with the FpML download and attempt to deserialize and then reserialize each of them.  The code is working if each final document is the same as the original.

Testing Program

A testing program that does this is available. This contains the code we have seen before (in part 3) for serialization and deserialization.

It also contains a basic class for comparing the original and final FpML documents and outputting any differences.  It does this simply by iterating through the lines in the two files and comparing them.  This may not be the best way of comparing XML documents.

It is difficult because there can be valid differences between the files that are hard to deal with. For example we have a ‘difference’ where the original line is:

<hourMinuteTime>09:00:00</hourMinuteTime>

After deserialization and reserialization this becomes:

<hourMinuteTime>09:00:00.0000000+00:00</hourMinuteTime>

These are clearly the same thing but our file comparer has to be able to deal with it. It does so in a very basic way by hard-coding such differences to be ignored in method ‘CompareLines’.

Extent of Testing of Generated Code

Because of the difficulties described above the testing program has only been used to test that the generated code works with the interest rate derivatives and credit derivatives sample files.

Usefulness of Generated Classes

In the last two articles we have demonstrated that we can generate C# classes based on the FpML XSD specification, and with a little work can deserialize FpML documents into this object model, manipulate the objects, and serialize back into FpML.

However, my personal opinion is that we need to think carefully as to whether we want to use these classes. FpML has a very hierarchical structure, and as a result in the generated code we have very many classes interacting to represent even a simple trade. Our object model is not very easy to understand or use as a result.

For example, suppose we want to change the notional on the fixed leg of our interest rate swap (ird_ex01_vanilla_swap.xml) once we have it in the object model. Starting with the top-level Document object that we have deserialized, the syntax is as below:

        private void ChangeNotional(Document document)
        {
            DataDocument dataDocument = (DataDocument)document;
            Trade trade = (Trade)dataDocument.Items[0];
            Swap swap = (Swap)trade.Item;
            InterestRateStream interestRateStream0 = swap.swapStream[0];
            InterestRateStream interestRateStream1 = swap.swapStream[1];
            Calculation calculation1 = (Calculation)interestRateStream1.calculationPeriodAmount.Item;
            Notional notional = (Notional)calculation1.Item;
            notional.notionalStepSchedule.initialValue = 1000000;
        }

In fact, I don’t think this routine is complex enough, since it should really check which of interestRateStream0 and interestRateStream1 is the fixed leg.

A code example incorporating this code is available.

It’s hard to argue that this code is straightforward: for instance we have a number of ‘Item’ properties referenced that can be of various types. We have to know which type we want. In addition we have the issue that we are not entirely sure that the code generated by xsd.exe is free of bugs, even after the work we have done to patch it up.

As a result in a current project I am working on we have decided not to use the classes generated by xsd.exe, but instead to deserialize into flatter structures of our own design.

Conclusion

This article has shown that it is possible to fix the code generated by xsd.exe from the FpML schemas such that we can deserialize/serialize FpML documents into/out of the object model. We have also shown that it is difficult to test that this will work correctly in all cases, and that the resulting object model is not all that easy to use.

Licensing of FpML Specifications

The FpML Specifications of this document are subject to the FpML Public License (the “License”); you may not use the FpML Specifications except in compliance with the License. You may obtain a copy of the License at http://www.FpML.org.
The FpML Specifications distributed under the License are distributed on an “AS IS” basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License.
The Licensor of the FpML Specifications is the International Swaps and Derivatives Association, Inc. All Rights Reserved.

http://www.fpml.org/documents/license.html

Generating .NET Classes from the FpML XSD Schema (Introduction to using FpML with .NET Tools Part 3)

Introduction

Part one of this series of articles looked at how we can use Visual Studio to examine the FpML XML schema documents (XSDs), and the associated example XML instance documents.

Part two of the series looked in some detail at the structure of an FpML trade XML document, and showed how to investigate its validation against the schemas using Visual Studio.

This article will show how to generate .NET classes in C# from the XML schema documents. These classes can in theory be used to load FpML documents into objects. These objects can then be manipulated by the code and saved out as FpML. However, there are some problems with the Microsoft tools for doing this, as we shall see.

Motivation

FpML is a powerful way of representing financial products in a standard way. It can be used to pass trade data between financial institutions in an XML format that both parties can understand. It can also be used to pass trade data between computer systems within a financial institution. If all systems know about FpML then we have a standard platform-independent human-readable representation of our data that everyone can use.

Clearly there are some disadvantages to using FpML in this way (it’s very verbose, and we will probably need to modify it for our own needs anyway, in which case it stops being standard). However there’s a prevailing view that FpML is a good place to start when defining what trade data and messages we will pass around in a service-oriented architecture in a bank.

However if we want to use trade data passed to us as FpML we need to get it into a format we can program against. Obviously if we are programming in .NET languages we want to have objects, or even DataSets. Furthermore we’d like to be able to modify those objects or DataSets and then turn them back into valid FpML documents.

The XML Schema Definition Tool (xsd.exe)

Microsoft have provided an ‘XML Schema Definition Tool’ called xsd.exe that purports to allow us to do this. It claims to be able to turn XSDs into classes. These classes can then be used to automatically load associated XML instance documents into objects, and back into XML again after manipulation. xsd.exe is also capable of creating DataSets from XSD files.

Obviously loading XML instance documents into objects is XML deserialization, and turning them back into XML is serialization. Once we have the classes we can use .NET’s standard serialization mechanisms for this.

As we shall see, this doesn’t work particularly well with the FpML schemas.

Generating C# Classes from FpML Schemas

To use xsd.exe to generate C# classes is relatively straightforward:

  1. Start a Visual Studio command prompt (this is under ‘Microsoft Visual Studio/Visual Studio Tools’ on your start menu).
  2. Navigate to a folder where the FpML XSD files are. If you created the console application used in parts 1 and 2 of this series of articles navigate to the project folder (which has the XSDs in it).
  3. Run the command below:
    xsd /c fpml-main-4-2.xsd xmldsig-core-schema.xsd
    This generates a file called fpml-main-4-2_xmldsig-core-schema.cs which contains the classes we need.

Note that the /c parameter asks xsd to generate classes. There is also a /d parameter that asks xsd to generate DataSets. We will discuss this option later.

Note also that we only need to reference the root schema file (fpml-main-4-2.xsd) for this to work: the other schema files are referenced from this file (with include statements) and xsd can work this out.  However xsd can’t work out what to do with the xmldsig-core-schema.xsd file unless we tell it to process it.  This is because only its namespace is referenced in the schema files, not the file itself.

Using the Generated C# Classes

If we look at the fpml-main-4-2_xmldsig-core-schema.cs file we see that we have nearly 37,000 lines of code, including over 650 classes. As you’d expect these classes use .NET’s XML serialization attributes throughout, so we can serialize into and deserialize from XML correctly. The root class is Document.

To use these classes we need to create an XmlSerializer object based on the root Document in code. This is standard .NET serialization code:

        XmlSerializer xmlSerializer = new XmlSerializer(typeof(Document));

Then in theory we should be able to deserialize any FpML document into these classes using the XmlSerializer. The syntax we’d use for this is as below:

        internal Document DeserializeXMLToDocument(FileInfo inputXMLFile)
        {
            using (FileStream fileStream = File.OpenRead(inputXMLFile.FullName))
            {
                return (Document)xmlSerializer.Deserialize(fileStream);
            }
        }

Once we’ve deserialized into objects based on our classes, we should be able to serialize those back into XML. Clearly the final XML should be the same as the initial XML. The syntax for the serialization is as below:

        internal void SerializeDocumentToXML(Document document, FileInfo outputXMLFile)
        {
            using (FileStream outFileStream = new FileStream(outputXMLFile.FullName, FileMode.Create))
            {
                xmlSerializer.Serialize(outFileStream, document);
            }
        }

This is all standard .NET XML serialization code.

First Attempt to Use the Generated Classes

We can write a basic harness that uses our generated classes and the code above to attempt to deserialize and serialize FpML files.

A version of this is available. It tries to deserialize the ird_ex01_vanilla_swap.xml that was examined in part 2 of this series of articles.

Unfortunately the classes generated by xsd.exe have a number of problems, and unless we correct these the basic harness will not work. In fact we can’t even create the XmlSerializer object successfully with the generated code.

Part four of this series of articles will examine the various problems with the code that xsd.exe has generated, and will discuss how to correct them.

Corrected Generated Classes and a Working Harness

Corrected generated classes are available. As will be discussed in part four this code has been corrected such that it appears to work in most circumstances. However we cannot be certain that it is free of all bugs.

A version of the harness that uses the corrected code is also available. As you can see if you run it, this does correctly deserialize and then reserialize ird_ex01_vanilla_swap.xml.

Conclusion

This article has shown that in theory we can generate classes from the FpML XSDs using xsd.exe. We should then be able to deserialize FpML documents into these classes, manipulate the resulting objects, and then reserialize back into valid FpML documents. However, xsd.exe has some problems that prevent this from working correctly.

Part four of this series of articles will look in more detail at the problems in the generated code, and how to fix them.

Licensing of FpML Specifications

The FpML Specifications of this document are subject to the FpML Public License (the “License”); you may not use the FpML Specifications except in compliance with the License. You may obtain a copy of the License at http://www.FpML.org.
The FpML Specifications distributed under the License are distributed on an “AS IS” basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and limitations under the License.
The Licensor of the FpML Specifications is the International Swaps and Derivatives Association, Inc. All Rights Reserved.

http://www.fpml.org/documents/license.html