Review of a Trading System Project

Introduction

In late 2007 I wrote a series of articles on Microsoft’s Composite Application Block (CAB).  At that time I was running a team that was developing a user interface framework that used the CAB.

We’re now four years on and that framework is widely used throughout our department.  There are currently modules from eleven different development teams in production: modules for trade booking, trade management, risk management (including real-time risk), curve marking, other market data management, and so on.  All of these were written by different teams, yet to the user it appears to be a single application.

This article will look back at the goals, design decisions, and implementation history of the project.  It will look at what we did right, what we did wrong, and some of the limitations of the CAB itself (which apply equally to its successor, Prism).

The framework is a success.  However, it’s only a qualified success.  Ironically, as we shall see, it is only a qualified success because it has been so successful.  To put that less cryptically: many of the problems with the framework have only arisen because it’s been so widely adopted.

Hopefully this article will be of interest.  It isn’t the kind of thing I usually write about and will of course be a personal view:  I’m not going to pretend I’m totally unbiased.

Design Goals

Original Overall Goals

The project had two very simple goals originally:

  1. A single client application that a user (in this case, a trader) would use for everything they need to do.
  2. Multiple development teams able to easily contribute to this application, working independently of each other.

I suspect these are the aims of most CAB or Prism projects.

Do You Actually Need a Single Client Application?

An obvious question arising from these goals is why you would need an application of this kind.

Historically there have tended to be two approaches to building big and complex trading applications:

  1. The IT department will create one huge monolithic application.  One large development team will build it all.
  2. The IT department breaks the problem up and assigns smaller development teams to develop separate applications to do each part.  This is a much more common approach than option 1.

Both of these approaches work, and both mean you don’t need a client application of the kind we are discussing.  However, neither of these approaches works very well:

  • Monolithic applications quickly become difficult to maintain and difficult to release without major regression testing.
  • Equally, with separate applications, users don’t like having to log into many different applications.  This is particularly true if those applications are built by the same department but all behave in different ways.  It can also be difficult to make separate applications communicate with each other, or share data, in a sensible way.

So there definitely is a case for trying to create something that fulfils our original design goals above and avoids these problems.  Having said that, it’s clearly more important to actually deliver the underlying functionality: delivering it in several separate applications matters far less than failing to deliver it at all.

More Detailed Goals

For our project we also had some more detailed goals:

  • Ease of use for the developer.  I have personally been compelled to use some very unpleasant user interface frameworks and was keen that this should not be another one of those.
  • A standardized look and feel.  The user should feel this was one application, not several applications glued together in one window.
  • Standard re-usable components, in particular a standard grid and standard user controls.  The user controls should include such things as typeahead counterparty lookups, book lookups, and security lookups based on the organization’s standard repositories for this data.  That is, they should include business functionality.
  • Simple security (authentication and authorization) based on corporate standards.
  • Simple configuration, including saving user settings and layouts.
  • Simple deployment.  This should include individual development teams being able to deploy independently of other teams.

As I’ll discuss, it was some of the things that we left off that list that came back to haunt us later on.

Goals re Serverside Communication

A further goal was use of our strategic architecture serverside, in particular for trade management.  For example, we wanted components that would construct and send messages to our servers in a standard way.  I won’t discuss the success or failure of this goal in detail here as it’s a long and chequered story, and not strictly relevant to the CAB and the user interface framework.

Technical Design

Technical Design: Technologies

The technologies we used to build this application were:

  • Microsoft C# and Windows Forms
  • Microsoft’s Patterns and Practices Group’s Composite Application Block (the CAB)
  • DevExpress’ component suite
  • Tibco EMS and Gemstone’s Gemfire for serverside communication and caching

As I’ve already discussed, this document is going to focus purely on the clientside development.

In 2007 these were logical choices for a project of this kind.  I’ll discuss some of the more detailed design decisions in the sections below.

Things We Did (Fairly) Well

As I said this is a personal view: I’m not sure all our developers would agree that all of this was done well.

Ease of Use

Designing for ease of use is, of course, quite difficult.  We have done a number of things to make the project easy to use, some of which I’ll expand on below.  These include:

  • Developers write vanilla user controls.  There’s no need to implement special interfaces, inherit from base classes or use any complex design pattern.
  • Almost all core functionality is accessed through simple services that the developer just gets hold of and calls.  So, for example, to show your user control you get an instance of the menu service and call a show method.  We used singleton service locators so the services could be accessed without resorting to CAB dependency injection (a minimal sketch of this follows the list).
  • Good documentation freely available on a wiki.
  • A standard onboarding process for new teams, including setting up a template module.  This module has a ‘hello world’ screen that shows the use of the menus and other basic functionality.
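
As a concrete illustration of the service locator point above, the sketch below shows roughly how a module developer gets at core functionality.  The names (ServiceLocator, IMenuService, TradeBlotterControl) are invented for this example and are not our real class names: it’s a minimal sketch of the pattern, not our actual implementation.

    using System;
    using System.Collections.Generic;
    using System.Windows.Forms;

    // Hypothetical stand-ins for our core framework pieces.
    public interface IMenuService
    {
        void ShowDockedWindow(UserControl control, string title);
    }

    public static class ServiceLocator
    {
        private static readonly Dictionary<Type, object> services = new Dictionary<Type, object>();

        // The core framework registers its CAB services here at startup.
        public static void Register<T>(T instance) { services[typeof(T)] = instance; }

        // Module code just asks for an interface; no CAB knowledge is needed.
        public static T Get<T>() { return (T)services[typeof(T)]; }
    }

    // A module developer writes a vanilla user control and shows it like this:
    public class TradeBlotterControl : UserControl
    {
        public static void ShowBlotter()
        {
            IMenuService menus = ServiceLocator.Get<IMenuService>();
            menus.ShowDockedWindow(new TradeBlotterControl(), "Trade Blotter");
        }
    }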

Developers Not Forced to Learn the Composite Application Block (CAB)

As mentioned above, one of the key goals of the project was simplicity of use.  The CAB is far from simple to use: I wrote a 25-part introductory series of blog articles on it and still hadn’t covered it all.

As a result we took the decision early on that developers would not be compelled to use the CAB actually within their modules.  We were keen that developers would not have to learn the intricacies of the CAB, and in particular would not have to use the CAB’s rather clunky dependency injection in their code.

However, obviously we were using the CAB in our core framework.  This made it difficult to isolate our developers from the CAB completely:

  • As mentioned above we exposed functionality to the developers through CAB services.  However we gave them a simple service locator so they didn’t have to know anything about the CAB to use these services.
  • We also used some CAB events that developers would need to sink.  However, since this just involves decorating a public method with an attribute, we didn’t think it was too difficult (there’s a minimal sketch after this list).
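
To show how little CAB knowledge sinking an event actually requires, here is a minimal sketch.  The topic name is made up for this example; the attribute is the CAB event broker’s EventSubscription attribute, and the wiring happens when the object is added to a WorkItem.

    using System;
    using Microsoft.Practices.CompositeUI.EventBroker;

    public class PositionViewHandlers
    {
        // "topic://TradingApp/PositionsUpdated" is an invented topic name for illustration.
        // Decorating a public method like this is all the module developer needs to do;
        // the CAB event broker takes care of hooking the subscription up.
        [EventSubscription("topic://TradingApp/PositionsUpdated")]
        public void OnPositionsUpdated(object sender, EventArgs e)
        {
            // refresh the positions screen here
        }
    }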

As already mentioned, to facilitate this we wrote a ‘template’ module, and documentation on how to use it.  This was a very simple dummy module that showed how to do all the basics.  In particular it showed what code to write at startup (a couple of standard methods), how to get hold of a service, and how to set up a menu item and associated event.

Versioning

We realized after a few iterations of the system that we needed a reasonably sophisticated approach to versioning and loading of components.  As a result we wrote an assembly loader.  This:

  • Allows each module to keep its own assemblies in its own folder
  • Allows different modules to use different versions of the same assembly
  • Also allows different modules to explicitly share the same version of an assembly

Our default behaviour is that when loading an assembly that’s not in the root folder, the system checks all module folders for an assembly of that name and loads the latest version found.  This means teams can release interface assemblies without worrying about old versions in other folders.
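
The sketch below shows the core idea, much simplified (our real loader also handles the explicit-sharing case, caching and logging).  The folder layout and the ‘highest version wins’ policy match the default behaviour described above; the class and method names are invented for this example.

    using System;
    using System.IO;
    using System.Linq;
    using System.Reflection;

    public static class ModuleAssemblyLoader
    {
        public static void HookUp(string modulesRootFolder)
        {
            AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
            {
                string fileName = new AssemblyName(args.Name).Name + ".dll";

                // Look in every module folder and take the highest version found, so a
                // stale copy of an interface assembly in another folder never wins.
                var best = Directory.GetDirectories(modulesRootFolder)
                    .Select(dir => Path.Combine(dir, fileName))
                    .Where(File.Exists)
                    .OrderByDescending(path => AssemblyName.GetAssemblyName(path).Version)
                    .FirstOrDefault();

                return best == null ? null : Assembly.LoadFrom(best);
            };
        }
    }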

Versioning of Core Components

For core components clearly there’s some code that has to be used by everyone (e.g. the shell form itself, and menus).  This has to be backwards compatible at each release because we don’t want everyone to have to release simultaneously.  We achieve this through the standard CAB pattern of interface assemblies: module teams only access core code through interfaces that can be extended, but not changed.

However, as mentioned above, the core team also writes control assemblies that aren’t backwards compatible: teams include them in their own module, and can upgrade whenever they want without affecting anyone else.
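
To make the interface-assembly point concrete, the sketch below shows the additive pattern we follow, reusing the hypothetical IMenuService from the earlier sketch.  Existing members are never changed; new functionality arrives on a new interface that extends the old one, so modules compiled against the original interface keep working without a rebuild.

    // Version 1, shipped in the core interface assembly and never changed afterwards.
    public interface IMenuService
    {
        void ShowDockedWindow(System.Windows.Forms.UserControl control, string title);
    }

    // New functionality goes on a new interface that extends the old one.
    public interface IMenuService2 : IMenuService
    {
        void ShowFloatingWindow(System.Windows.Forms.UserControl control, string title);
    }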

User Interface Design

For the user interface design, after a couple of iterations we settled on simple docking in the style of Visual Studio.  For this we used Weifen Luo’s excellent docking manager, and wrote a wrapper for it that turned it into a CAB workspace.  For menuing we used the ribbon bars in the DevExpress suite.

The use of docking again keeps things simple for our developers.  We have a menu service with a method to be called that just displays a vanilla user control in a docked (or floating) window.

Deployment

In large organizations it’s not uncommon for the standard client deployment mechanisms to involve complex processes and technology.  Our organization has this problem.  Early on in this project it was mandated that we would use the standard deployment mechanisms.

We tried hard to wrap our corporate process in a way that made deployment as simple as possible.  To some extent we have succeeded, although we are (inevitably) very far from a simple process.

Configuration

For configuration (eventually) we used another team’s code that wrapped our centralized configuration system to allow our developers to store configuration data.  This gives us hierarchies of data in a centralized database.  It means you can easily change a setting for all users, groups of users, or an individual user, and can do this without the need for a code release.
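
A sketch may make the hierarchy clearer.  The wrapper below is purely illustrative (our real service sits on top of the other team’s code and the centralized database): a value set for the individual user wins over the value for their group, which in turn wins over the value for all users.

    using System.Collections.Generic;

    public class SettingsService
    {
        private readonly IDictionary<string, string> allUsers;
        private readonly IDictionary<string, string> userGroup;
        private readonly IDictionary<string, string> user;

        public SettingsService(IDictionary<string, string> allUsers,
                               IDictionary<string, string> userGroup,
                               IDictionary<string, string> user)
        {
            this.allUsers = allUsers;
            this.userGroup = userGroup;
            this.user = user;
        }

        public string Get(string key)
        {
            string value;
            if (user.TryGetValue(key, out value)) return value;        // most specific wins
            if (userGroup.TryGetValue(key, out value)) return value;
            allUsers.TryGetValue(key, out value);
            return value;                                              // null if not set anywhere
        }
    }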

Module Interaction

Clientside component interaction is achieved by using the standard CAB mechanisms.  If one team wants to call another team’s code they simply have to get hold of a service in the same way as they do for the core code, and make a method call on an interface.  This works well, and is one advantage of using the CAB.  Of course the service interface has to be versioned and backwards compatible, but this isn’t difficult.
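
A minimal sketch of what this looks like in practice is below.  ITradeBookingService and the team names are invented for this example; the WorkItem services collection is the standard CAB mechanism.

    using Microsoft.Practices.CompositeUI;

    // Published from the booking team's interface assembly, which other teams reference.
    public interface ITradeBookingService
    {
        void ShowBookingScreen(string tradeId);
    }

    // Code in the risk team's module: it references only the interface assembly,
    // never the booking team's module itself.
    public class RiskScreenHandlers
    {
        private readonly WorkItem _rootWorkItem;

        public RiskScreenHandlers(WorkItem rootWorkItem)
        {
            _rootWorkItem = rootWorkItem;
        }

        public void OpenTrade(string tradeId)
        {
            ITradeBookingService booking = _rootWorkItem.Services.Get<ITradeBookingService>();
            booking.ShowBookingScreen(tradeId);
        }
    }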

Security

For security we again wrapped our organization’s standard authentication and authorization systems so they could easily be used in our CAB application.  We extended the standard .Net Principal and Identity objects to allow authorization information to be directly accessed, and also allowed this information to be accessed via a security service.
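
The sketch below gives a flavour of the Principal/Identity extension; the permission model and the names are illustrative rather than our actual classes.

    using System.Security.Principal;

    public class TradingIdentity : GenericIdentity
    {
        private readonly string[] permissions;

        public TradingIdentity(string name, string[] permissions)
            : base(name, "CorporateSSO")           // authentication type is illustrative
        {
            this.permissions = permissions;
        }

        public bool HasPermission(string permission)
        {
            return System.Array.IndexOf(permissions, permission) >= 0;
        }
    }

    public class TradingPrincipal : GenericPrincipal
    {
        public TradingPrincipal(TradingIdentity identity, string[] roles)
            : base(identity, roles)
        {
        }

        // Expose the richer identity so authorization info can be accessed directly.
        public new TradingIdentity Identity
        {
            get { return (TradingIdentity)base.Identity; }
        }
    }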

One thing that we didn’t do so well here was the control of authorization permissions.  These have proliferated, and different teams have handled different aspects of this in different ways.  This was in spite of us setting up what we thought was a simple standard way of dealing with the issue.  The result of this is that it’s hard to understand the permissioning just by looking at our permissioning system.

Things We Didn’t Do So Well

As mentioned above, the things that didn’t go so well were largely the things we didn’t focus on in our original list of goals.

Most of these issues are about resource usage on the client.  This list is far from comprehensive: we do have other problems with what we’ve done, of course, but the issues highlighted here are the ones causing the most problems at the time of writing.

The problems included:

Threading

The Problem

We decided early on to allow each team to do threading in the way they thought was appropriate, and didn’t provide much guidance on threading.  This was a mistake, for a couple of reasons.

Threading and Exception Handling

The first problem we had with threading was the simple one of background threads throwing exceptions with no exception handler in place.  As I’m sure you know, this is pretty much guaranteed to crash the entire application messily (which in this case means bringing down 11 teams’ code).  Of course it’s easy to fix if you follow some simple guidelines whenever you spawn a background thread.  We have an exception handler that can be hooked up with one line of code and that can deal with appropriate logging and thread marshalling.  We put how to do this, and dire warnings about the consequences of not doing so, in our documentation, but to no avail.  In the end we had highly-paid core developers going through other teams’ code looking for anywhere they spawned a thread, and then complaining to their managers if they hadn’t put handlers in.
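
For illustration, the helper below captures the spirit of that one-line hook-up: it wraps the work in a try/catch so an unhandled exception on a background thread gets logged rather than killing the process.  It’s a simplified sketch, not our actual class (which also marshals a friendly message onto the UI thread).

    using System;
    using System.Threading;

    public static class SafeThreads
    {
        // Module code just calls: SafeThreads.Start(LoadMyData);
        public static Thread Start(ThreadStart work)
        {
            Thread thread = new Thread(() =>
            {
                try
                {
                    work();
                }
                catch (Exception ex)
                {
                    // The real handler logs this properly and tells the user on the
                    // UI thread; the point is that the exception never escapes.
                    Console.Error.WriteLine("Background thread failed: " + ex);
                }
            });
            thread.IsBackground = true;
            thread.Start();
            return thread;
        }
    }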

Complex Threading Models

Several of our teams were used to writing serverside code with complex threading models.  They replicated these clientside, even though most of our traders don’t have anything better than a dual core machine, so any complex threading model in a workstation client is likely to be counterproductive.

Some of these models tend to throw occasional threading exceptions that are unreproducible and close to undebuggable.

What We Should Have Done

In retrospect we should have:

  • Provided some clear guidance for the use of threading in the client.
  • Written some simple threading wrappers and insisted the teams use them, horrible though that is.
  • Insisted that ANY use of threading be checked by the core team (i.e. a developer that knew about user interface threading).  The wrappers would have made it easy for us to check where threads were being spawned incorrectly (and without handlers).

Start Up

The Basic Problem

We have a problem with the startup of the system as well: it’s very slow.

Our standard startup code (in our template module) is very close to the standard SCSF code.  This allows teams to set up services and menu items when the entire application starts and the module is loaded.

This means the module teams have a hook that lets them run code at startup.  The intention here is that you instantiate a class or two, and it should take almost no time.  We didn’t think that teams would start using it to load their data, or start heartbeats, or worse, to fire off a bunch of background threads to load their data.  However, we have all of this in the system.

Of course, the reason for this is that the code really belongs where a user clicks a menu item to load the team’s screen for the first time.  For heartbeats, it’s a little awkward to start and stop them as a screen opens and closes: it’s much easier to just start your heartbeats when the application starts.  For data loading, if it’s slow then that becomes very obvious when it happens at the point a user requests a screen.

However, the impact of this happening across 11 development teams’ code is that the system is incredibly slow to start, and very, very fragile at startup.  It will often spend a couple of minutes showing the splash screen and then keel over with an incomprehensible error message (or none).  As a result most traders keep the system open all the time (including overnight), and are very reluctant to restart, even if they have a problem that we know a restart will fix.  Also, all machines in our organization are rebooted at the weekend, so they have to sit through the application startup on a Monday morning in any case.

One further problem is that no individual team has any incentive to improve their startup speed: it’s just a big pool of slowness, and as a user you can’t tell whether module X is much slower than module Y.  If any one team moves to proper, lightweight service creation at startup it won’t have a huge overall effect.  We have 11 teams, and probably no one team contributes more than a couple of minutes to the overall startup: it’s the cumulative effect that’s the problem.

What We Should Have Done

This is one area where we should just have policed what was going on better, and been very firm about what is and is not allowed to run at startup.  At one stage I proposed fixing the problem by banning ANY module team’s code from running at startup, and I think if I were to build an application of this kind again that’s what I’d do.  However, a module clearly has to be able to set up its menu items at startup (or the user won’t be able to run anything), so for this to work we’d have to develop a way of declaring menu items via configuration, which would be ugly.
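
To make the intended pattern concrete, here’s a sketch of module startup that only registers a menu item and defers the expensive work until the screen is first opened.  The class names are invented, and Lazy<T> (from .NET 4) is used purely for brevity; the point is what runs at startup versus on first use.

    using System;

    public class CurveMarkingModule
    {
        private readonly Lazy<MarketDataCache> cache =
            new Lazy<MarketDataCache>(() => MarketDataCache.LoadFromServer());

        // Called at application startup: cheap, no I/O, no background threads.
        public void Load()
        {
            // register the 'Curve Marking' menu item here and nothing else
        }

        // Called when the user actually clicks the menu item.
        public void OnCurveMarkingMenuClicked()
        {
            MarketDataCache data = cache.Value;   // loaded on first use only
            // build and show the screen from 'data'
        }
    }

    public class MarketDataCache
    {
        public static MarketDataCache LoadFromServer()
        {
            // the slow serverside call goes here
            return new MarketDataCache();
        }
    }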

One other thing that would really help would be the ability to restart an individual module without restarting the entire system.

Memory Usage

The Problem

We effectively have 11 applications running in the same process.  So with memory usage we have similar problems to the startup problems: every team uses as much memory as they think they need, but when you add it all up we can end up with instances of the system using well over 1GB of memory.  On a heavily-loaded trader’s machine this is a disaster: we’ve even had to get another machine for some traders just to run our application.

To be honest, this would be a problem for any complex trading environment.  If we had 11 separate applications doing the same things as ours the problem would probably be worse.

However, as above there’s no incentive for any individual team to address the problem: it’s just a big pool that everyone uses and no-one can see that module X is using 600MB.

What We Should Have Done

Again here better policing would have helped: we should have carefully checked every module’s memory requirements and told teams caching large amounts of data not to do so.  However, in the end this is a problem that is very hard to avoid: I don’t think many teams are caching huge amounts of data, it’s just that there’s a lot of functionality in the client.

One thing that will help here is the move to 64-bit, which is finally happening in our organization.  All our traders have a ceiling of 4GB of memory at present (of which, as you know, over 1GB is used by Windows), so a 1GB application is a real problem.

Use of Other Dependency Injection Frameworks (Spring.Net)

The Problem

One unexpected effect of the decision not to compel teams to use the CAB was that a number of teams decided to use Spring.Net for dependency injection within their modules, rather than using the CAB dependency injection.  I have some sympathy with this decision, and we didn’t stop them.  However, Spring.Net isn’t well-designed for use in a framework of this kind and it did cause a number of problems.

  • The biggest of these is that Spring uses a number of process-wide singletons.  We had difficulties getting them to play nicely with our assembly loading.  This has resulted in everyone currently having to use the same (old) version of Spring.Net, and upgrading being a major exercise.
  • Handling application context across several modules written by different teams proved challenging.
  • If you use XML configuration in Spring.Net (which everyone does) then types in other assemblies are usually referenced using the simple assembly name only.  This invalidated some of our more ambitious assembly loading strategies.
  • Spring.Net’s already hard-to-decipher exception messages on initial configuration become even harder to unpick when several modules are configuring themselves at startup.

We also had some similar problems re singletons and versioning with the clientside components of our caching technology.  Some code isn’t really compatible with single-process composite applications.

What We Should Have Done

Again we should have policed this better: many of the problems described above are solvable, or could at least have been mitigated by laying down some guidelines early on.

What I’d Change If I Did This Again

The ‘what we should have done’ sections above indicate some of the things I’d change if I am ever responsible for building another framework of this kind.  However, there are two more fundamental (and very different) areas that I would change:

Code Reviews

In the ‘what we should have done’ sections above I’ve frequently mentioned that we should have monitored what was happening in the application more carefully.  The reasons we didn’t were partially due to resourcing, but also to some extent philosophical.  Most of our development teams are of high quality, so we didn’t feel we needed to be carefully monitoring them and telling them what to do.

As you can see from the problems we’ve had, this was a mistake.  We should have identified the issues above early, and then reviewed all code going into production to ensure that there weren’t threading, startup, memory or any other issues.

Multiple Processes

The second thing I’d change is technical.  I now think it’s essential in a project of this kind to have some way of running clientside code in separate processes.  As we’ve seen many of the problems we’ve had have arisen because everything is running in the same process:

  • Exceptions can bring the process down, or poorly-written code can hang it
  • It’s hard to identify how much each module is contributing to memory usage or startup time
  • There’s no way of shutting down and unloading a misbehaving module

I think I’d ideally design a framework that had multiple message loops and gave each team its own process in which they could display their own user interface.  This is tricky, but not impossible to do well.

Note that I’d still write the application as a framework.  I’d make sure the separate processes could communicate with each other easily, and that data could be cached and shared between the processes.

As an aside, a couple of alternatives to this are being explored in our organization at present.  The first is to simply break the application up into multiple simpler applications.  The problem with this is that it doesn’t really solve the memory usage or startup time problems, and in fact arguably makes them worse.  The second is to write a framework that has multiple processes but keeps the user interface for all development teams in the same process.  This is obviously easier to do technically than my suggestion above.  However, for many of our modules it would require quite a bit of refactoring: we would need to split out the user interface code cleanly and run it in a separate process from the rest of the module code.


Model-View-Presenter using the Smart Client Software Factory (Introduction To CAB/SCSF Part 25)

Introduction

Part 23 and part 24 of this series of articles described the Model-View-Presenter pattern.

This article explains how the Smart Client Software Factory supports this pattern by generating appropriate classes.

Guidance Automation Packages in the Smart Client Software Factory

We saw how we could use the Smart Client Application Guidance Automation Package to set up a Smart Client Application in part 18. We can also set up a Model-View-Presenter pattern in a Smart Client application using another of the Guidance Automation Packages.

This will only work in an existing Smart Client Application.

Running the Model-View-Presenter Package

To use the Guidance Automation Package we right-click in Solution Explorer on a project or folder where we want to run the package. It is intended that we do this in the Views folder in a business module. On the right-click menu we select ‘Smart Client Factory/Add View (with presenter)’. We get a configuration screen that lets us name our view, and also lets us put the classes that get created into a folder. For the purposes of this example we name our view ‘Test’, and check the checkbox that says we do want to create a folder for the view.

When we click ‘Finish’ we get three classes and a TestView folder as below:

[Screenshot: Solution Explorer showing the TestView folder and the three generated classes (mvpsolutionexplorer.jpg)]

Classes Created

  1. TestView
    This is (obviously) our View class. It is intended that this contain the auto-generated code to display the View. As discussed in the previous articles any complex view logic will not go into this class, but will go into the Presenter.
  2. TestViewPresenter
    This is our Presenter class. As discussed in previous articles this should contain logic to deal with user events. It should also contain any complex view logic, and should directly update the View with the results of any view logic calculations. It has access to the View class via an interface.
  3. ITestView
    This is the interface that the View implements. The Presenter can only update the View through this interface.

Diagram

In terms of the diagrams shown in parts 23 and 24 this looks as below. Remember that we may or may not have arrows between the Model and the View depending on whether we are using the active View or passive View version of Model-View-Presenter:

[Diagram: Model-View-Presenter relationships between the Model, TestView, ITestView and TestViewPresenter (mvpdiagram2.jpg)]

Where’s the Model?

The Guidance Automation package does not set up a Model class for us. As we have seen, the Model has no direct references to a View/Presenter pair (it raises events), and there may be multiple View/Presenter pairs for one Model. Further the Model would not usually be in the same folder, or even in the same component, as our View and Presenter.

For these reasons we are expected to set up our Model classes separately by hand.

Note that the Presenter (and the View as well if we are using the active View pattern) will have a direct reference to the Model. We will have to add these references manually.

Active and Passive View: a Quick Recap

Remember that in Model-View-Presenter the Presenter updates the View via an interface. We can set this up so only the Presenter is allowed to update the View. This is the ‘passive View’ pattern. We can also set this up so that the Presenter can update the View in complex cases, but the View can also update itself (in response to an event or user request) in simple cases. This is the ‘active View’ pattern.

Active and Passive View: Which Should We Use?

The pattern described in the SCSF documentation is the passive View: the documentation implies that all updates to the View should be done by the Presenter.

However there is nothing to stop us using the active View pattern with the classes generated by the Guidance Automation Package. We can add code to update the View wherever we like. In fact I would recommend using active View in simple cases: passive View should only be used where we are putting too much logic into the View class.

Should We Use Model-View-Presenter for Every Screen? A Personal View

Let me also reiterate a point made in part 24. It’s easy to get obsessive about the use of patterns and use them everywhere without thinking. My personal opinion is that we should only use the full Model-View-Presenter pattern where we have a complex screen that will benefit from the separation of the View and Presenter classes. For very basic screens the pattern is really too complex to give us benefit. In simple cases I think it is fine to put event handling and screen update logic directly behind the screen.

Note that I don’t think this applies to the use of the Model. We should always separate out the business logic from our screens into separate classes (this is what Martin Fowler calls ‘Separated Presentation’). However, we frequently have screens that don’t show any business logic or business data, so we may not need a Model class either.

For example an About screen that just shows the system name and version won’t need separate View and Presenter classes, and probably won’t need anything in a Model class either.

Equally a screen that shows a read-only grid of currencies used in a trading system probably doesn’t need separate View and Presenter classes. In this case the currencies themselves should be in a Model class so that other screens can access them.

Implementation Details: What We’d Expect

If we examine the diagram above, we expect the Presenter to have a data member with type of our ITestView interface that it will use to access the View. We expect the View to implement the ITestView interface to allow this. We further expect the View to have a direct reference to the Presenter class (a data member), which it will use to invoke code relating to user events. We’d probably expect both the View and the Presenter classes to be created the first time the View is needed.

Implementation Details: the Base Presenter Class

The actual details of the implementation of the Presenter are a little unusual.

If we look at the code generated by the Guidance Automation Package we see that the TestViewPresenter above has been given its core functionality by inheriting from an abstract Presenter<TView> class. Remember that the generic ‘TView’ simply lets us provide a type whenever we use the Presenter class. Here we inherit from Presenter, and provide the type when we inherit:

    public partial class TestViewPresenter : Presenter<ITestView>
    {
        // user-event handling and any complex view logic for TestView goes here
    }

This allows the base Presenter class to have a data member of type ITestView (which is what we expect), rather than it being directly in the TestViewPresenter class. Note that the base Presenter is in the Infrastructure.Interface project (which is one of the reasons why we have to use this pattern in a Smart Client application).

The base Presenter class exposes our ITestView data member publicly, contains a reference to our WorkItem, and has disposal code and a CloseView method. It also has virtual OnViewReady and OnViewSet methods. These get called when you’d expect from their names, and let us respond at the appropriate times by overriding the methods in our TestViewPresenter class.

All the above functionality in the base Presenter class means that the derived TestViewPresenter class is basically empty when it is created. It is up to us to put logic in there to handle user events and complex view logic.

The TestView class is a normal user control. It implements ITestView and contains a reference to the TestViewPresenter as we’d expect. It also calls OnViewReady as appropriate (in the OnLoad event of the user control). Again other than this TestView is also basically empty.
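
To summarize the wiring, the sketch below is a hand-rolled illustration of the same pattern. It is not the code the Guidance Automation Package generates (the generated classes also handle the WorkItem reference, disposal and CloseView), but it shows the relationships between the View, the interface and the Presenter.

    using System;
    using System.Windows.Forms;

    public interface ITestView
    {
        void ShowMessage(string text);
    }

    public abstract class Presenter<TView>
    {
        private TView view;

        public TView View
        {
            get { return view; }
            set { view = value; OnViewSet(); }
        }

        public virtual void OnViewSet() { }
        public virtual void OnViewReady() { }
    }

    public class TestViewPresenter : Presenter<ITestView>
    {
        public override void OnViewReady()
        {
            View.ShowMessage("View is ready");   // complex view logic would go here
        }
    }

    public class TestView : UserControl, ITestView
    {
        private readonly TestViewPresenter presenter = new TestViewPresenter();

        public TestView()
        {
            presenter.View = this;
        }

        public void ShowMessage(string text)
        {
            MessageBox.Show(text);
        }

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            presenter.OnViewReady();             // the generated View does the same in OnLoad
        }
    }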

Conclusion

This article has shown us how to set up Model-View-Presenter classes using the Smart Client Software Factory, and discussed some issues surrounding it.

Foundational Modules and Names as Constants (Intro to CAB/SCSF Part 21)

Introduction

Part 19 and part 20 of this series of articles looked at business modules in the Smart Client Software Factory.

This article looks at foundational modules briefly, and also discusses the pattern for handling names in the SCSF.

Foundational Modules

In part 18 of this series of articles we saw how we can use one of the Guidance Automation Packages to add a business module to our solution. There’s also a Guidance Automation package that lets us add a ‘foundational module’ to our solution. This is on the same submenu as the ‘Add Business Module (C#)’ option we saw above. So to add a foundational module to our solution right-click the solution folder in Solution Explorer, select Smart Client Factory/Add Foundational Module (C#) and then click through the two setup screens.

A foundational module is identical to a business module except that it does not have a root WorkItem. This is because it is intended to contain supporting functionality rather than core business functionality for our solution. It is not expected that we will create business objects and add them to the various WorkItem collections.

So we are expected to create fairly generic services and supporting code in foundational modules, rather than business code. Of course we could do this in the Infrastructure projects mentioned in part 18, but a foundational module allows us to separate out supporting code into its own CAB module.

Note that we can create an interface component, ‘Module.Interface’, for our foundational module in exactly the same way as for a business module. This allows other components in the solution to use the module’s functionality without referencing it directly, as described above.

Constants Folders

In our examples above we have seen several Constants folders being set up. The main Smart Client solution has a Constants folder in both the Infrastructure.Interface component and the Shell component. Both the foundational modules and the business modules have Constants folders in both their main Module components and their Module.Interface components.

The Constants folders all contain four classes: CommandNames, EventTopicNames, UIExtensionSiteNames, and WorkspaceNames. In the Constants folders mentioned above most of these are empty by default, although Infrastructure.Interface has some constants set up in its classes.

The important thing to notice here is that the individual classes with the same name are arranged in an inheritance hierarchy. So if we have a business module in our Smart Client solution (as in the code example we have already seen) then CommandNames in the Module itself inherits from CommandNames in Module.Interface, which in turn inherits from CommandNames in Infrastructure.Interface. CommandNames in Shell also inherits from CommandNames in Infrastructure.Interface.

The reason these classes exist is to allow us to use standard constants for names throughout our solution, rather than having to use strings. The inheritance hierarchy lets us define these constants at the correct level in the hierarchy, but then use any of them very simply by just accessing the class at the level our code sits at.

The reason we don’t want to use strings in our code as names is they are prone to error in entry (since we can’t use intellisense) and can’t be checked by the compiler at compile-time: if we enter a string name wrongly we will get a run-time error. If we use constants to represent these strings we avoid both of these problems.

This will be clearer in an example:

We might call a Workspace on our Shell form “LeftWorkspace” when we add it to the Workspaces collection of the root WorkItem. Elsewhere in the code we may want to retrieve that workspace and interact with it, for example to call the Workspace’s Show method to display a SmartPart. Normally to do this the syntax would be, for example:

_rootWorkItem.Workspaces["LeftWorkspace"].Show(control);

The obvious difficulty with this is that we are just entering the name “LeftWorkspace” as a string, which is prone to error and the compiler can’t check.

So we add the code below to the WorkspaceNames class in Infrastructure.Interface. We add it to the Infrastructure.Interface component because the Workspace is being defined in the Infrastructure part of the solution, but we want it to be available outside of that:

public const string LeftWorkspace = "LeftWorkspace";

Now suppose we want to use this Workspace name in code in a business module. The WorkspaceNames class in the business module inherits from the WorkspaceNames class in Infrastructure.Interface, and hence the constant is available in that class. All we need do is reference that class to access any Workspace name. So we just import the appropriate namespace:

using SmartClientDevelopmentSolution.Module1.Constants;

And then we can do:

_rootWorkItem.Workspaces[WorkspaceNames.LeftWorkspace].Show(control);

Now intellisense is available when we enter the ‘LeftWorkspace’ name, and the compiler can check that what we have entered is correct.

Note that if we have a Workspace name defined just for the module (say ‘LocalWorkspace’) we can still just do WorkspaceNames.LocalWorkspace to access it.

So these Constants folders provide us with an easy way of using named constants for items in the WorkItem hierarchy throughout our code.

SCSF Business Modules: Start Up and the ControlledWorkItem (Introduction to CAB/SCSF Part 20)

Introduction

Part 19 of this series of articles discussed business modules in a Smart Client solution generated using the Smart Client Software Factory. This article continues that discussion.

The Load Method of a Business Module

As discussed in the previous article, a business module has a class called ‘Module’ which inherits from class ModuleInit. We saw in part 1 of this series of articles that this means the Load method in that class will get called at start up, provided the module has been added to the ProfileCatalog file.

The Load method of Module generated by the Smart Client Software Factory is as below:

        public override void Load()
        {
            base.Load();
 
            ControlledWorkItem<ModuleController> workItem = _rootWorkItem.WorkItems.AddNew<ControlledWorkItem<ModuleController>>();
            workItem.Controller.Run();
        }

As we can see, it’s creating a ControlledWorkItem class instance and adding it to the WorkItems collection of the root WorkItem. It’s then calling the Run method on the Controller property of this WorkItem.

ControlledWorkItem

ControlledWorkItem is a class that inherits directly from WorkItem. So a ControlledWorkItem is a WorkItem. The ControlledWorkItem also adds additional functionality to the WorkItem, and, crucially, it is a sealed class (which means we can’t inherit from it).

The idea here is that each business module should have a ControlledWorkItem as a root for its functionality. This is what we are creating in the Load method. In the overall WorkItem hierarchy each business module ControlledWorkItem is immediately below the root WorkItem for the entire solution.

Inheriting WorkItem to add Functionality

The ControlledWorkItem has been created to clarify the situation with regard to adding code to WorkItems. When we start using the CAB we quickly find that we need our WorkItems to be extended in various ways. They are intended to control business use cases, after all. For example we may want specific services instantiated at start up and added to the Services collection. Doing this in the WorkItem itself may seem like a sensible thing to do. Clearly the main WorkItem class is a CAB framework class, but we can inherit from it to give it this additional behaviour.

The reference implementations of both the CAB and the SCSF do this: each WorkItem inherits from the base WorkItem class and extends it to give the use case functionality. If you look at the CustomerWorkItem in the Bank Teller Reference Implementation you’ll see this.

Why Inheriting from WorkItem has been Deprecated

The difficulty with this is that our WorkItem class is acting as both a container for all the various WorkItem collections, as we have discussed before, AND as a place where all the code for a business use case goes.

This breaks the Single Responsibility principle, which is that every class should have just one responsibility in a system to avoid confusion.

As a result the Patterns and Practices team have decided it’s not ideal to have developers inherit from WorkItem and add functionality to the derived class. Instead a second class is created to contain the new code, and that class is associated with the WorkItem class by composition.

How ControlledWorkItem Addresses the Problem

This is what the ControlledWorkItem is doing. The ControlledWorkItem class itself inherits from WorkItem, but also has a member variable that references another class. The type of this class is generic (so the developer provides it), and the class is instantiated when the ControlledWorkItem is created.

So in the line of code below we are creating the ControlledWorkItem and adding it to the root WorkItem’s WorkItems collection. However we are also telling the ControlledWorkItem that its member class should be of type ModuleController, and that class will get instantiated and set up as the member variable.

ControlledWorkItem<ModuleController> workItem = _rootWorkItem.WorkItems.AddNew<ControlledWorkItem<ModuleController>>();

We are not expected to inherit from ControlledWorkItem itself. In fact we can’t because it is sealed: the Patterns and Practices team have done this deliberately to indicate that the pattern has changed. Instead we add our additional functionality for the WorkItem to the ModuleController class.

ModuleController

We can access the ModuleController instance from the ControlledWorkItem using the Controller property. We can then call a Run method on that class. This is the standard pattern that is generated by the Guidance Automation Package: note that the final line in the Load method above is:

workItem.Controller.Run();

So we can add start up code for the WorkItem into the ModuleController class in the Run routine.

The SCSF gives us a default ModuleController whenever we set up a Module, as we have seen. This has a default Run method. There isn’t any code that actually does anything in this method, but four empty methods are set up in ModuleController to indicate to us the sort of things we should be doing:

    public class ModuleController : WorkItemController
    {
        public override void Run()
        {
            AddServices();
            ExtendMenu();
            ExtendToolStrip();
            AddViews();
        }
...

There are also comments in these routines to describe what we should be doing in them. To see this in more detail look in any of the ModuleController classes in the sample code.

WorkItemController Class

Note also above that our default ModuleController inherits from a class called WorkItemController, which is an abstract base class intended to be used just for these controllers. Inheriting from this ensures that we have a Run method in our derived class, as there is an abstract function of this name in the base class.

The base WorkItemController also gets a reference to the associated WorkItem using our usual dependency injection pattern. This can be accessed via the WorkItem property on the WorkItemController class.

Finally the WorkItemController class has two overloaded ShowViewInWorkspace methods, which can create and show a SmartPart in a named Workspace in the WorkItem.
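
For example, a ModuleController might use one of these methods in its AddViews routine roughly as below, using the TestView and WorkspaceNames examples from elsewhere in this series. Treat the exact signature as approximate: this is how I recall the generated WorkItemController, so it’s a sketch rather than a reference.

    private void AddViews()
    {
        // Creates a TestView in this module's WorkItem and shows it in the named Workspace.
        ShowViewInWorkspace<TestView>(WorkspaceNames.LeftWorkspace);
    }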

Obviously we don’t have to make our ModuleController inherit from WorkItemController. However, if we don’t this base class functionality will not be available.

Conclusion

This article has discussed the standard patterns generated by the Smart Client Software Factory for starting up business (and other) modules.

Part 21 of this series of articles will look briefly at foundational modules, and will also discuss the way names are handled in Smart Client Software Factory projects.

Business Modules and Interfaces in the SCSF Smart Client Solution (Introduction to CAB/SCSF Part 19)

Introduction

Part 18 gave a brief introduction to the Smart Client Software Factory. This article continues that discussion by looking at business modules, and also examining how the various modules in a Smart Client solution are expected to interact.

Recap on the Smart Client Application

In part 18 we saw that a ‘Guidance Automation’ package in the Smart Client Software Factory lets you create a base solution for a smart client program. It sets up four projects, three of which are infrastructure projects.

One of the projects is an empty ‘Infrastructure.Module’ project. Infrastructure.Module is a CAB module as described earlier in this series of articles: it isn’t directly referenced by the other projects in the solution but can be used to write infrastructural code for the solution without any tight-coupling with the rest of the solution. We’ll examine this in a little more detail below.

Business Modules

It isn’t intended that we put business logic into the Infrastructure projects discussed above. Instead we are meant to create ‘business modules’.

To create a business module we use another of the Guidance Automation packages: we right-click the solution in Solution Explorer, select Smart Client Factory/Add Business Module (C#), click ‘OK’ in the ‘Add New Project’ window and then click ‘Finish’ in the ‘Add Business Module’ window.

This gives us two new projects in the solution with default names Module1 and Module1.Interface as below:

[Screenshot: Solution Explorer showing the new Module1 and Module1.Interface projects (scsfprojectmodule.jpg)]

Once again here Module1 is a Composite Application Block module, and is not referenced by any other project in the solution. However, Module1.dll IS added to the ProfileCatalog (which is in Shell). This means that the Load method of a class inheriting ModuleInit in Module1 will get called by the CAB at start up, as described in part 1 of this series of articles. The class with the Load method in Module1 is called ‘Module’. We’ll look at what the Load method is doing in the next article in this series.

Note here that the Module and ModuleController classes are identical to those in Infrastructure.Module. Note also that there’s really no code at all in Module1.Interface: there are just some empty classes in a folder called Constants.

Business Module Interaction with the Rest of the Smart Client Project

As discussed in part 1 of this series, a ‘module’ is a standalone project to be used in a composite user interface. So our business module here is intended to be a slice of business functionality that can potentially be developed independently of the other modules in the application. Because the business module isn’t directly referenced by other modules a separate development team could potentially work on it and change it. It can then in theory be plugged into the containing framework without the need for code changes in the framework. The other projects’ libraries might not even need to be recompiled, since they don’t actually reference the business module directly.

Clearly in practice it’s likely that the business module will have to interact with the rest of the Smart Client solution on some level. There will be a need for:

  1. The business module to use the infrastructure components: for example it might need to put a toolstrip into the Shell form.
  2. Other components in the Smart Client solution to use some of the business module functionality. As a simple example we might have a business module that deals with customers and a back-end customer database. It might have screens to show customer data and allow updates. Another business module might want to display these screens in response to a request: an Orders module might allow a double-click on a customer name to show the customer.

We want to achieve the interaction described above in a way that’s as loosely-coupled as possible, so that we can change the system easily. To do this we make sure that all interaction is through the Interface projects.

We now examine each of these possible scenarios in more detail:

1. The Business Module Using Infrastructure Components

For this scenario in our example solution Module1 references Infrastructure.Interface directly. It is set up to do this by default when you add the business module to the solution. Note that Infrastructure.Interface is intended to (mainly) contain .NET interfaces: it is not meant to contain large amounts of code.

Note that Module1 does not reference Infrastructure.Module or Infrastructure.Library directly, nor should it under any circumstances. These projects may well be under the control of a separate development team from our business module team, and they may need to be updated independently of the business modules. So we reference the interface project, and that handles our interaction with the Infrastructure libraries.

This seems to be a concept that developers working on these projects have difficulty with: almost every member of my development team at work has added one of these libraries to a business module at some stage.

I think the confusion arises because it’s not necessarily obvious how we do this. If my module just references an interface how can I actually call any functionality using just the interface? The answer is that we are once again using the dependency inversion and dependency injection concepts described in part 3 and part 4 of this series of articles.

An example here may help.

Example

We’ll use the WorkspaceLocator service that the SCSF adds into the Infrastructure.Library component when we create a Smart Client solution. The WorkspaceLocator service lets you find the Workspace a SmartPart is being displayed in, although this isn’t relevant for this discussion: all we’re interested in is how to invoke the service from a business module.

There’s a class called WorkspaceLocatorService that actually does the work in SmartClientDevelopmentSolution.Infrastructure.Library.Services. There’s also an interface in Infrastructure.Interface as below:

namespace SmartClientDevelopmentSolution.Infrastructure.Interface.Services
{
    public interface IWorkspaceLocatorService
    {
        IWorkspace FindContainingWorkspace(WorkItem workItem, object smartPart);
    }
}

Note that Infrastructure.Library references Infrastructure.Interface and so WorkspaceLocatorService can implement this interface. Note too that our business module, Module1, references Infrastructure.Interface but NOT Infrastructure.Library. So it can’t see the WorkspaceLocatorService class directly, and thus can’t call FindContainingWorkspace on it directly. So how do we use the service?

The answer is that this is the standard CAB dependency inversion pattern using WorkItem containers to access objects.

At start up the solution creates an instance of the WorkspaceLocator service and adds it into the Services collection of the root WorkItem, referencing it by the type of the interface:

RootWorkItem.Services.AddNew<WorkspaceLocatorService, IWorkspaceLocatorService>();

This actually happens in the new SmartClientApplication class mentioned in part 18, but all we really need to know is that the service will be available on the root WorkItem.

Now, in our module we know we can get a reference to the root WorkItem by dependency injection in a class:

        private WorkItem _rootWorkItem;
 
        [InjectionConstructor]
        public Module([ServiceDependency] WorkItem rootWorkItem)
        {
            _rootWorkItem = rootWorkItem;
        }

Our module also knows about the IWorkspaceLocatorService interface since it references Infrastructure.Interface. So it can retrieve the WorkspaceLocator service object from the root WorkItem using the interface, and can then call the FindContainingWorkspace method on that object:

            IWorkspaceLocatorService locator = _rootWorkItem.Services.Get<IWorkspaceLocatorService>();
            IWorkspace wks = locator.FindContainingWorkspace(_rootWorkItem, control);
            MessageBox.Show("Workspace located: " + wks.ToString());

In summary, as long as our module knows the interface to the functionality it needs, and knows how to retrieve an object that implements that interface from a WorkItem collection of some kind, it doesn’t need to have direct access to the underlying class to use the object. This was explained in more detail in earlier articles in this series.

2. Other Components Using the Business Module Functionality

For other components to use our business module functionality we are expected to work out what functionality our business module should expose to the rest of the solution. We should then define interfaces that allow access to that functionality and put them into our Module1.Interface component.

Other components in the solution can then reference Module1.Interface and call the functionality. Note that to allow them to do this we need to ensure that the correct objects are available in a WorkItem, as described above. Once again other components should NOT reference Module1. We can then change Module1 without impacting the other components.
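
Continuing the customers/orders example from point 2 above, a minimal sketch of this might look as below. ICustomerScreensService is an invented name; the registration and retrieval go through the root WorkItem’s Services collection exactly as in the WorkspaceLocator example earlier.

    // In Module1.Interface, referenced by other modules:
    public interface ICustomerScreensService
    {
        void ShowCustomer(int customerId);
    }

    // In Module1 itself, at start up (for example in its ModuleController):
    //     _rootWorkItem.Services.AddNew<CustomerScreensService, ICustomerScreensService>();

    // In the Orders module, which references Module1.Interface but NOT Module1:
    public class OrdersGridHandlers
    {
        private readonly Microsoft.Practices.CompositeUI.WorkItem _rootWorkItem;

        public OrdersGridHandlers(Microsoft.Practices.CompositeUI.WorkItem rootWorkItem)
        {
            _rootWorkItem = rootWorkItem;
        }

        public void OnCustomerDoubleClicked(int customerId)
        {
            ICustomerScreensService customers =
                _rootWorkItem.Services.Get<ICustomerScreensService>();
            customers.ShowCustomer(customerId);
        }
    }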

We may of course need to change the interfaces. In this case it may be sensible to retain the old version of the interface component, so that not all other components have to upgrade immediately, and to add a new version containing the changed interfaces alongside it. The old interface component can then be retired once everyone has upgraded.

Conclusion

This article has examined modules in a Smart Client solution, and discussed how they should interact.

Part 20 of this series of articles will look in a little more detail at some of the new code structures in modules in a Smart Client solution.

Introduction to the Smart Client Software Factory (CAB/SCSF Part 18)

Introduction

So far in this series of articles the focus has been on the core functionality of the Composite Application Block (CAB). No mention has been made of the Smart Client Software Factory (SCSF), in spite of the fact that the series is entitled ‘An Introduction to the Composite Application Block and the Smart Client Software Factory’.

This article and the ones that follow will remedy that. They will discuss what the Smart Client Software Factory is, how it relates to the Composite Application Block, and how to use it.

Versions of the Composite Application Block and Smart Client Software Factory

The first version of the Composite Application Block was released in December 2005. This had all of the features that have been described in this series of articles so far. As was mentioned in part 1 of this series, the Composite Application Block on its own is quite difficult to understand and learn how to use. Furthermore, documentation and support in the initial version were somewhat lacking: many developers complained that they couldn’t understand how to use the new framework.

The Patterns and Practices team released a follow-up version in June 2006, and another in May 2007. However, they did not attempt to change the core Composite Application Block code. Instead they provided additional documentation and examples, as well as a way of automatically generating code for various useful patterns using the Composite Application Block. We’ll examine this in a little more detail below.

The idea behind these SCSF releases was to make it easier for developers both to learn and to use the Composite Application Block software (and other application blocks). However, as we shall see, the SCSF introduces new code and patterns on top of the CAB’s already complex structures, and the documentation could still be clearer. In many ways the Patterns and Practices team have added to the confusion with these releases rather than clarifying the situation.

Software Factories

The last two releases were branded as the ‘Smart Client Software Factory’. A ‘software factory’ is one of the latest computer industry buzzwords.

The idea behind software factories is that current software development practices depend on highly-skilled developers, who are similar to craftsmen in a pre-industrial age. The argument is that the need for highly-skilled craftsmen is the reason that many software projects fail: there are too few really good craftsmen, and they are usually hand-crafting from scratch in every new project. We need to ‘industrialize’ the software development process. Software should be created using ‘software factories’.

Like many computer industry buzzwords, what a ‘software factory’ is in practice is a little unclear. It tends to vary depending on the author, but in general a software factory should provide a means of producing code in a standard way. This may mean reusing and customizing existing code, simply following a set of guidance practices, or generating code automatically based on a model, which may be visual. It may mean a combination of all of these techniques. The phrase is also often used to describe model-driven development, possibly using domain-specific languages.

Smart Client Software Factory

Microsoft’s ‘software factory’ is slightly simpler than some of the usual definitions described above. The ‘Smart Client Software Factory’ comprises a small set of code generators plus some documentation and examples. The code generators are quite straightforward, producing code that follows set patterns (there’s no model to be maintained here).

The paragraph below is from the SCSF documentation:

‘The software factory provides a set of proven and integrated practices that are exposed through patterns, How-to topics, QuickStarts, a reference implementation, a Visual Studio Guidance Automation Toolkit package, and architecture documentation. The software factory guides projects through the development of smart client applications based on the architecture of the Composite User Interface Application Block.’

The Visual Studio Guidance Automation Toolkit Package

The Smart Client Software Factory includes a ‘Visual Studio Guidance Automation Toolkit Package’. This is an add-in to Visual Studio that automates certain software development tasks relating to the CAB. Some of these are added to the right-click context menu in Solution Explorer.

Probably the most useful Guidance Package, however, is the one that generates a new Smart Client Application project. This is also a good place to start when investigating what the Smart Client Software Factory can do.

Once the SCSF has been installed, this Guidance Package is available when you create a new project in the usual way inside Visual Studio. The New Project window has a project type of ‘Guidance Package’ available, and underneath that you can choose a new ‘Smart Client Application (C#)’:

newscsfproject.jpg

If you select this option the Guidance Package will show a screen that allows you to set some properties of your solution. In particular you need to tell the Package where the Composite Application Block libraries are, and to set some options. These include whether you want a separate module for layout of the Shell (in general you won’t want this). The Guidance Package will then set up a Smart Client solution as below:

scsfproject.jpg

The SmartClientDevelopmentSolution

This is a base solution for a smart client project using the Composite Application Block. It gives your code some basic structure. It also gives you a lot of code on top of the Composite Application Block code itself: the various projects contain about 3,000 lines of code in total.

So we have even more code to try to understand if we are to use the resulting project effectively.

As you can see we are given four basic projects:

1. Shell

This is the start-up project for the solution. It is very similar to the start-up projects in the sample applications we’ve already seen in this series of articles: it has a start-up class (ShellApplication) which (indirectly) inherits from FormShellApplication. It has a form, ShellForm, which is the containing window for the application. It has a ProfileCatalog which will contain the composite application modules to be loaded.

If you compare this with the Naïve Application sample code from part 1 of this series of articles you will see the similarities.

However, there are subtle differences as well, mainly in the form of extra code constructs. For example, ShellApplication actually inherits from a class called ‘SmartClientApplication’ which in turn inherits from FormShellApplication. The SmartClientApplication class simply sets up some SCSF services.

Additionally the ProfileCatalog now allows us to specify dependencies. We also have a Constants folder with a series of class files in it. We’ll examine these extra code constructs in later articles.
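To give a feel for this, the sketch below shows roughly what the generated start-up class looks like. This is a simplified sketch only: the exact code the SCSF generates differs in detail, and namespaces and error handling are omitted.

using System;
using Microsoft.Practices.CompositeUI;

public class ShellApplication : SmartClientApplication<WorkItem, ShellForm>
{
    [STAThread]
    static void Main()
    {
        // SmartClientApplication sets up the SCSF services and then behaves like the
        // FormShellApplication start-up classes we saw earlier in this series.
        new ShellApplication().Run();
    }
}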

2. Infrastructure.Interface

The remaining Infrastructure projects are intended to provide common functionality for all of the composite applications we add into our solution. However, we don’t want to directly reference the projects that contain the code. Instead we want to hide the code behind interfaces in a separate library and reference the interface library. Infrastructure.Interface is this interface library. It mainly contains interfaces, with some simple classes.
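As a purely hypothetical illustration of this split (IPricingService and PricingService below are invented names, not part of the SCSF), a shared service might be structured like this:

// In Infrastructure.Interface: the contract that other modules reference.
public interface IPricingService
{
    decimal GetPrice(string instrumentId);
}

// In Infrastructure.Library (or another Infrastructure project): the implementation,
// which other modules never reference directly.
public class PricingService : IPricingService
{
    public decimal GetPrice(string instrumentId)
    {
        // a real implementation would call a pricing engine or market data source here
        return 0m;
    }
}

Modules then depend only on Infrastructure.Interface, so the implementation can change without the modules that use it needing to be rebuilt.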

3. Infrastructure.Library

The Infrastructure.Library project contains the bulk of the new code that this SCSF solution gives us. In fact it contains about 2000 lines of code, including such things as the Action Catalog, and support for loading modules using sources other than ProfileCatalog.xml.

4. Infrastructure.Module

Infrastructure.Module is an empty module project. It is intended that any code that we as developers want to add to the infrastructure section of our solution will go in here. It is a CAB module, which we have seen before, and it contains a ModuleInit class as we’d expect (see part 1 of this series of articles). However, it also contains a ModuleController class, which inherits WorkItemController. This will be discussed further in the next article.
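The sketch below gives a rough idea of the two classes in this project. Again this is simplified: the SCSF-generated code contains rather more than this.

public class Module : ModuleInit
{
    // ModuleInit gives the CAB a hook that is called when the module is loaded,
    // as we saw in part 1 of this series.
}

public class ModuleController : WorkItemController
{
    public override void Run()
    {
        // the generated class splits this into helper methods that add
        // the module's services, menu items and views
    }
}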

Conclusion

This article has given us an overview of the Smart Client Software Factory, and has shown that one of its key features is the ability to generate a standard solution that can be used as the basis for a Composite Application Block application.

Part 19 of this series will discuss this solution a little more, and will look at SCSF Business Modules.

References

Discussion of software factories:
http://www.methodsandtools.com/archive/archive.php?id=64

Workspace Types (Introduction to the CAB/SCSF Part 17)

Introduction

Part 16 of this series of articles explained in general terms why Workspaces are useful. It also examined the methods that are available on a Workspace via the IWorkspace interface.

There are five Workspace types provided with the Composite Application Block framework: DeckWorkspace, ZoneWorkspace, TabWorkspace, MdiWorkspace and WindowWorkspace. This article looks in a little more detail at these various Workspace types and associated SmartPartInfo types, and gives some code examples.

The ToolBox

Some Workspaces can be added to the Visual Studio toolbox as shown below. To do this you (as usual) right-click the toolbox and select ‘Choose Items…’. Then click the ‘Browse…’ button and browse to your Microsoft.Practices.CompositeUI.WinForms.dll library. When you click ‘OK’ the Workspaces shown below should be added:

smartparttoolbox.jpg

DeckWorkspace

The DeckWorkspace is the first of the Workspace types provided with the CAB that we shall look at.

This is ‘deck’ as in ‘deck of cards’. When we show a SmartPart in a DeckWorkspace it fills the area of the Workspace completely. If we show another SmartPart it replaces the original SmartPart in the view completely. However, the old SmartPart is still there in the deck, immediately behind the new SmartPart. If we add a third SmartPart it gets displayed at the front of the deck, but the other two SmartParts are still there in order. If we close the third SmartPart the second one will be displayed.

A code example of use of a DeckWorkspace is available. The shell form for this CAB application has a DeckWorkspace on it. The project also contains two SmartParts, with red and blue backgrounds. A series of buttons allow the user to call methods on the DeckWorkspace, providing either the red or the blue SmartPart as an argument. The methods available are the IWorkspace methods discussed in part 16: Show, Hide, Activate, Close.
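In outline, the button handlers in the example simply make calls like those in the sketch below (deckWorkspace, redSmartPart and blueSmartPart are illustrative names for the Workspace and the two SmartPart user controls):

deckWorkspace.Show(redSmartPart);      // red fills the Workspace
deckWorkspace.Show(blueSmartPart);     // blue is now at the front; red sits behind it in the deck
deckWorkspace.Activate(redSmartPart);  // brings red back to the front
deckWorkspace.Hide(blueSmartPart);     // hides blue
deckWorkspace.Close(redSmartPart);     // removes red from the Workspace entirely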

deckworkspace.jpg

This behaves as described in part 16. If we try to Activate, Close or Hide a SmartPart that hasn’t been Shown we get an exception. However, if we Show the red SmartPart, then Show the blue SmartPart, and then Activate the red SmartPart, the red SmartPart is brought back to the front of the deck.

As shown above, the form also has a ‘Details’ button. This shows the current ActiveSmartPart, and the Items and Workspaces collections on the RootWorkItem. As a result it will show you which SmartParts are loaded at any given time.

To use a DeckWorkspace you can simply drag one onto a form and start coding. There is no specific SmartPartInfo class for the DeckWorkspace.

The DeckWorkspace is a useful type if you are building an Outlook-style interface. It can act as the main display area in the application, and gives us the behaviour we desire (which was outlined in part 16).

ZoneWorkspace

The ZoneWorkspace allows the user to define ‘zones’ or areas within the Workspace where SmartParts can be shown. As mentioned briefly in part 16, a SmartPartInfo object can be used to define which zone a SmartPart will be shown in.

The easiest way to set up a ZoneWorkspace is firstly to drag one onto your form or user control from the ToolBox. Then drag ordinary Windows Forms panels onto the ZoneWorkspace and position them to define your zones. If you look in their properties you will see that these panels have a ZoneName property where you should give your zones sensible names.

To show a SmartPart in a specific zone you need to tell it the zone via a SmartPartInfo object. The easiest way to do this is to drag a ZoneSmartPartInfo item from the Toolbox shown above onto your form or user control. You can make your ZoneSmartPartInfo reference a specific zone by setting its ZoneName property to the ZoneName of the zone. You can then show SmartParts in the zone by calling the Show method and passing the ZoneSmartPartInfo object as the second parameter, as in the sketch below.
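Here zoneWorkspace, redSmartPart and the zone name ‘LeftZone’ are illustrative; the ZoneSmartPartInfo could equally be the visual component dragged onto the form as described above.

ZoneSmartPartInfo leftZoneInfo = new ZoneSmartPartInfo();
leftZoneInfo.ZoneName = "LeftZone";   // must match the ZoneName of a panel in the ZoneWorkspace

zoneWorkspace.Show(redSmartPart, leftZoneInfo);   // shows the SmartPart in the left zone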

A code example of this is available. Once again this has two SmartParts, red and blue. It also has two zones, left and right, as shown. Further it has two visual ZoneSmartPartInfo objects as discussed above, one of which references the left zone and one of which references the right zone. Which one of these is used when you click on the buttons is controlled by the radio buttons shown below.

The buttons are as above, apart from the Apply buttons. The Apply buttons call ApplySmartPartInfo on the ZoneWorkspace with the appropriate ZoneSmartPartInfo object depending on which of the radio buttons is selected. They can thus be used to change which zone a SmartPart is displayed in: if the red SmartPart is shown in the left zone (as shown below) we can check the ‘Right zone’ radio button and hit ‘Apply’ for the red SmartPart, which will move it to the right zone.
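In code, moving a SmartPart that is already showing amounts to something like the sketch below (again the names are illustrative):

ZoneSmartPartInfo rightZoneInfo = new ZoneSmartPartInfo();
rightZoneInfo.ZoneName = "RightZone";

// moves the already-shown red SmartPart from its current zone into the right zone
zoneWorkspace.ApplySmartPartInfo(redSmartPart, rightZoneInfo);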

zoneworkspace.jpg

One thing to note about the ZoneWorkspace is that calling Show with a SmartPart does not necessarily bring that SmartPart to the front. The Show method adds the SmartPart to the appropriate collections, gives it the focus and makes it the ActiveSmartPart. In spite of this, however, the SmartPart can still be hidden behind another SmartPart.

For this reason DeckWorkspaces are often put into zones of ZoneWorkspaces, since a SmartPart will be brought to the front when Show is called on a DeckWorkspace.

TabWorkspace

The TabWorkspace has the standard Windows Forms TabControl as a base class, and acts like a CAB-aware version of that control. Every SmartPart that you add to the TabWorkspace gets shown in its own tab.

There’s a TabSmartPartInfo class we can use to set details on our SmartParts and tabs. This has a Title property which can be used to set the title of the TabPage the SmartPart is being displayed in. It also has a Position property. This can be set to TabPosition.Beginning or TabPosition.End, and affects where the new tab appears in relation to existing tabs.

There’s a code example that shows this, with the same buttons and red and blue SmartParts as in the previous examples. This has no tabs showing at start up. When we click the Show button for a SmartPart it calls the Show method for the TabWorkspace, passing the appropriate SmartPart as a parameter.

The Show method creates a new tab with the SmartPart displayed on it.

tabworkspace.jpg

In addition the example has visual TabSmartPartInfo components that are passed to the Show methods, one for the red SmartPart and one for the blue. These set the titles on the tab pages, and control where they are added.

This example also has ‘Apply’ buttons that let us change the titles of the tab pages from a value entered in a TextBox. The buttons do this by setting the Title property on the appropriate TabSmartPartInfo and then calling the ApplySmartPartInfo method.
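Stripped of the button plumbing, the example’s use of TabSmartPartInfo boils down to something like the sketch below (tabWorkspace and redSmartPart are illustrative names):

TabSmartPartInfo tabInfo = new TabSmartPartInfo();
tabInfo.Title = "Red";
tabInfo.Position = TabPosition.Beginning;   // put the new tab before any existing tabs

tabWorkspace.Show(redSmartPart, tabInfo);   // creates a tab titled 'Red' showing the SmartPart

tabInfo.Title = "Red (renamed)";
tabWorkspace.ApplySmartPartInfo(redSmartPart, tabInfo);   // changes the title of the existing tab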

Note that the second parameter of the Show method is typed as ISmartPartInfo, so any SmartPartInfo class that implements this interface can be passed; the basic SmartPartInfo class simply implements ISmartPartInfo and has no additional members.

MdiWorkspace

The MdiWorkspace allows each SmartPart to be displayed in a separate child window inside a parent Form. This allows us to build MDI applications using the Composite Application Block.

One thing to note about the MdiWorkspace is that, as with other Workspaces, our child SmartParts have to be User Controls. We can’t display child Forms inside an MdiWorkspace with the code as it stands. This is in spite of the fact that our children look and behave like Forms: they have title bars and maximize and minimize buttons, for example.

MdiWorkspaces can use the WindowSmartPartInfo type to give additional information about SmartParts that are being displayed. As usual, this has a Title property. It also has properties to determine whether our MDI child is modal, whether it displays minimize, maximize and/or control boxes, and its location.
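In code this looks something like the sketch below (mdiWorkspace and redSmartPart are illustrative names; the properties used are those described above):

WindowSmartPartInfo windowInfo = new WindowSmartPartInfo();
windowInfo.Title = "Red SmartPart";
windowInfo.ControlBox = true;                            // show a control box on the child window
windowInfo.MinimizeBox = false;                          // no minimize button
windowInfo.MaximizeBox = false;                          // no maximize button
windowInfo.Location = new System.Drawing.Point(10, 10);  // position within the MDI parent

mdiWorkspace.Show(redSmartPart, windowInfo);             // displays the SmartPart as an MDI child window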

Once again a code example is available, and this behaves in the same way as the examples above. This includes allowing us to change the titles by entering some text in a TextBox and clicking ‘Apply’, which calls ApplySmartPartInfo as for the TabWorkspace above.

mdiworkspace.jpg

WindowWorkspace

A WindowWorkspace lets us display our SmartParts in floating windows, each SmartPart in its own separate window. Once again it uses the WindowSmartPartInfo type to give additional information about the SmartParts that are being displayed.

A code example is available. This is extremely similar to the MdiWorkspace example above, except that obviously it deals with a WindowWorkspace and not an MdiWorkspace.

windowworkspace.jpg

Conclusion

In this article we have reviewed the various types of Workspace that are available in the CAB framework, and have given some code examples.

So far in this series of articles we have only looked at the Composite Application Block. Part 18 of the series will rectify this by taking a first look at the Smart Client Software Factory.