Mondo Visione Worldwide Financial Markets Intelligence

Web Services: Digital Data Feeds For The Web, and the Financial Enterprise

Date: 07/06/2004

By Pete Harris, Lighthouse Partners

If you remember financial information services like Quotron and Telerate, then you’ll also remember the early days of digital data feeds for the trading room, and all that discussion about how green screens were all very well, but digital feeds were more powerful because the data they carried could be manipulated.  Now the Web is going through the same kind of evolution.  The Web as we all know it has suited us humans very well, but it has not been an appropriate technology for computer-to-computer communication.  That’s where Web Services come in.

Let’s Start With Some Basics

One of the big problems with the old digital data feeds was that they were all different.  Remember Telerate SOP – for Standard Output Protocol?  It was about as standard as the plugs on the back of an Apple Mac.  But just as Apple is now fitting its toys (iPod – got to love it) with FireWire connectors, so Web Services have adopted standards based on the eXtensible Markup Language (XML).

Of course, in the past, several mainstream IT attempts have been made to develop so-called ‘object models’ to allow distributed computers to interact.  Microsoft developed COM+, the Unix world backed CORBA, and the Java guys went for, well, Java.  The result was ‘islands’ of connectivity, where the CORBA systems couldn’t talk to the COM systems, and vice versa.

But Web Services are based on a ‘stack’ of protocols that, as well as being XML-based, are administered by standards bodies, not by the computer giants (although they do get to have a say).  Thus, new Web Services-compliant platforms, like Microsoft’s .NET, Sun’s Java Enterprise System, IBM WebSphere and BEA WebLogic, can all use Web Services to communicate with one another in a standard way.

Some things never change, of course.  Web Services have begotten a bundle of new geek words, such as SOAP, WSDL and UDDI.  These are the elements that make up the Web Services stack, and if you don’t know what they are all about, then you might as well be using a green screen.  So here’s a quick Web Services 101, with a short code sketch after the list:

- Data elements that are transmitted by Web Services are coded in XML.  Possibly the XML will conform to one of the standard financial dialects, like FIXML or MDDL.

- These XML data payloads are wrapped into a message structure that conforms to the Simple Object Access Protocol (SOAP) standard.

- A computer that offers a Web Service describes the functions it provides to other computers using the Web Services Description Language (WSDL).   Other computers retrieve this WSDL description to find out how to interact with the provider of the service.

- A computer that offers a Web Service ‘advertises’ itself to the world by registering itself in directories, using the Universal Description, Discovery and Integration (UDDI) standard.
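To make the stack a little more concrete, here is a minimal sketch (in Python, using only the standard library) of a SOAP message being built around an XML payload.  The SOAP envelope namespace is part of the standard; the service namespace and the GetQuote/Symbol element names are invented purely for illustration.

```python
# A minimal sketch (not production code) of building a SOAP 1.1 envelope
# around an XML payload using Python's standard library.  The SOAP
# envelope namespace is standard; the service namespace and the
# GetQuote/Symbol element names are hypothetical.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"  # standard SOAP 1.1 namespace
SVC_NS = "http://example.com/quotes"                    # hypothetical service namespace

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")

# The application payload: a request for a quote on a given symbol
request = ET.SubElement(body, f"{{{SVC_NS}}}GetQuote")
ET.SubElement(request, f"{{{SVC_NS}}}Symbol").text = "IBM"

print(ET.tostring(envelope, encoding="unicode"))
```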

Here’s an example of how Web Services might work in practice in our world:

- A brokerage firm offers various data, analytics and transactional services to its clients.  It creates all of these services as Web Services and registers them all in its own UDDI directory.  It might provide clients with some software to connect to these Web Services, or they might write their own, or use a third party package.

- Clients of the brokerage are provided with the URL of the UDDI server, and connect to it using HTTP.  They discover what services are available using UDDI, and then their software uses WSDL to figure out how to interact with those services.

- Finally, application level SOAP messages, with XML data payloads, are exchanged to deliver the actual functionality.
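As a rough illustration of that last step, here is a sketch of a client posting a SOAP request over plain HTTP and reading back the response.  The endpoint URL, the SOAPAction header value and the payload element names are all hypothetical; in practice they would be taken from the WSDL retrieved in the previous step.

```python
# A sketch of the last step above: posting a SOAP request over plain HTTP
# and reading the response.  The endpoint URL, SOAPAction value and payload
# are hypothetical; in practice they would come from the service's WSDL.
import urllib.request

ENDPOINT = "http://broker.example.com/services/quotes"  # hypothetical endpoint

soap_request = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://example.com/quotes">
      <Symbol>IBM</Symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    ENDPOINT,
    data=soap_request.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://example.com/quotes/GetQuote",  # hypothetical action
    },
)

with urllib.request.urlopen(req) as response:
    print(response.read().decode("utf-8"))  # the SOAP response envelope
```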

If this sounds a bit complex, remember that once a service provider or a client is set up to work with Web Services, it is very simple to use them to add new services, or to connect to new ones.  So integration – once done the first time – becomes pretty easy.

But It’s Not All Plain Sailing

The two key standards organizations involved in the development of Web Services are the World Wide Web Consortium (W3C) and OASIS – the Organization for the Advancement of Structured Information Standards.  While there is some overlap between the activities of these groups, there is also increasing cooperation between them.  And that bodes well for the formulation of a single set of standards on which to build Web Services architectures.

Thus far, the W3C has taken the lead on defining the lower levels of the Web Services stack – XML, SOAP and WSDL.  And many vendors have made very credible attempts to develop real products that leverage them.  But as is the case with many standards, there are shortcomings in the details that lead to different assumptions at implementation time.  Fortunately, groups like the Web Services Interoperability Organization have emerged to encourage vendors to achieve conformance of implementations.

In the real world, however, there are other issues to deal with besides non-conformance.  For example, in the wholesale financial markets, the concept of UDDI in its broadest sense – as a Yellow Pages of services to be subscribed to on-demand – is pretty much science fiction.  Financial firms are used to dealing with a fairly small set of partners, where relationships built up over many years are all important.  And increasing compliance requirements mean it’s important for financial firms to know who they are dealing with.

All this means that Web Services are likely to be used for internal integration projects long before they are exposed outside the firewall.   Here, the recent downward pressure on IT spending has hindered the rollout of Web Services, although the TCO savings that they can deliver will probably win through now that budgets are freeing up.  Expect legacy middleware players like IBM with MQSeries, TIBCO Software and the JMS crowd like Sonic Software and SpiritSoft to increasingly embrace Web Services – partly as a defensive move to ensure that firms don’t rip out their legacy plumbing in favor of purely HTTP transports.

Another issue with exposing Web Services to business partners is security – encryption, authentication, and the like.  Until Web Services are safe to deliver confidential information, and can be trusted to transact high value business, they’ll go nowhere.  Once again, it’s IBM and Microsoft (together with VeriSign) that are making the running, by developing the WS-Security standard.  As with UDDI, ownership of the specification now rests with OASIS, which oversees work on other security standards, such as SAML (Security Assertion Markup Language) and XrML for rights management.
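For a feel of what WS-Security adds in practice, here is a rough sketch of a SOAP envelope carrying a Security header with a UsernameToken, one of the simplest profiles.  A real deployment would use XML signatures or encrypted tokens rather than a plain-text password, and the credentials here are obviously made up.

```python
# A rough sketch of a WS-Security UsernameToken header inside a SOAP
# envelope, built with Python's standard library.  Real deployments would
# use XML signatures or encrypted tokens; the credentials are made up.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
WSSE_NS = ("http://docs.oasis-open.org/wss/2004/01/"
           "oasis-200401-wss-wssecurity-secext-1.0.xsd")

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
header = ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")

# The WS-Security header carrying a simple username/password token
security = ET.SubElement(header, f"{{{WSSE_NS}}}Security")
token = ET.SubElement(security, f"{{{WSSE_NS}}}UsernameToken")
ET.SubElement(token, f"{{{WSSE_NS}}}Username").text = "client42"
ET.SubElement(token, f"{{{WSSE_NS}}}Password").text = "not-a-real-password"

ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")  # application payload would go here
print(ET.tostring(envelope, encoding="unicode"))
```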

Even within the enterprise, implementing complex, distributed, Web Services-based systems is not without issues.  Management of distributed Web Services processes – to monitor for resource constraints, fault conditions, and exceptions – will be essential if they are to be used for mission-critical applications.   Early work in the management space has been headed by startups like Infravio and Corporate Oxygen, although it’s clear that the big boys of the IT management world – IBM with Tivoli, HP with OpenView and Computer Associates with Unicenter – will be increasingly focused on Web Services management.

Integrating the Business

With core standards – XML, SOAP, WSDL, UDDI – now largely set, the focus has moved on to address higher levels of functionality that underpin business processes and transactions.

For the financial markets world, making sense of these higher level standards will be essential if Web Services are to play a role in applications, such as electronic trading, STP and order routing.

In order to link Web Services into existing business processes, a mechanism is needed for defining complete processes that include them – enter choreography and orchestration.  While some use the terms interchangeably, strictly speaking choreography describes how Web Services interact with one another, while orchestration takes that concept further by having a controlling process invoke them in a defined order, i.e. a workflow.
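As a toy illustration of the orchestration side of that distinction, here is a sketch in which one controlling process calls two services in a fixed order.  The check_credit and route_order functions are hypothetical stand-ins for Web Service invocations; in BPEL the same ordering would be expressed declaratively in XML rather than in code.

```python
# A toy illustration of orchestration: one controlling process invokes two
# (hypothetical) Web Services in a fixed order.  In BPEL this ordering would
# be expressed declaratively in XML rather than in Python code.

def check_credit(client_id: str) -> bool:
    """Stand-in for a call to a credit-check Web Service."""
    return True

def route_order(client_id: str, symbol: str, qty: int) -> str:
    """Stand-in for a call to an order-routing Web Service."""
    return f"order accepted: {qty} x {symbol} for {client_id}"

def place_order(client_id: str, symbol: str, qty: int) -> str:
    # The orchestration: the credit check must succeed before the order
    # is routed - a two-step workflow with a defined order.
    if not check_credit(client_id):
        return "order rejected: credit check failed"
    return route_order(client_id, symbol, qty)

print(place_order("client42", "IBM", 100))
```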

Work in this area has been pushed forward by both the W3C and OASIS. In practice, the W3C has taken a broader, more academic, approach to creating and evolving standards, whereas OASIS has a more pragmatic, vendor-friendly, and faster-track approach.

The W3C kicked off its Web Services Choreography group (WS-Chor) in January 2003, basing its efforts on work previously submitted by BEA, Intalio, Sun and SAP.   In doing so it bypassed efforts on orchestration by IBM, Microsoft and BEA to develop BPEL4WS – Business Process Execution Language for Web Services – which was taken up under the OASIS banner as the Web Services Business Process Execution Language (WS-BPEL).  Given the heavy hitters on the side of WS-BPEL, it’s hard to bet against it emerging as the de facto standard.

Making Sense of Information

The ability of Web Services to integrate disparate computing processes – and the information that is associated with them – is not only a way forward for transactional applications, like trading systems, but also for information and analytic services.

Everyone seems to be going wild about Google these days, as it apparently offers the best search engine.  That might be so, but Google, Yahoo and the rest of the search crowd all suffer from the same basic problem – lack of context in searches.  Example: Type in “FIX” and the search results throw back Web sites for companies repairing Apple Macs, the official Stevie Nicks fan club (she rocks!), and the FIX Protocol Organization – which was what I was really looking for (honest).

The problem is that the Web as it exists today is really still view-only, with tantalizing data – just like those old green screens that were mentioned at the beginning of this article.  Search engines can match words, and do a good enough job with fuzzy terms, but they have no idea what the words actually mean, hence it’s pot luck whether they find the information that one’s really looking for.

That’s why there is a move to evolve the Web into a resource that does understand meaning, to create what’s been termed the Semantic Web.  Leading this charge is none other than (Sir) Tim Berners-Lee, the original creator of the Web, and now director of the W3C.  To quote from the great man himself: "The Semantic Web is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation."

So how will this be achieved?  Both XML and Web Services will be key.  A specific XML-based technology called the Resource Description Framework (RDF) will be used to insert ‘tags’ into Web-based data that can be read and understood by computers.  These tags and their content will be controlled through the use of data dictionaries – or ontologies – that define precisely what tags exist and what they can be used for.  So, on a Moneyline page, they might identify a ‘price’ as being a ‘tradeable’ price, from a ‘source’ that is called ‘Brokertec’.  No room for confusion there.
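Here is a sketch of that idea: the same price expressed as a handful of machine-readable statements (RDF-style triples) rather than as a number on a screen.  The ontology namespace and property names are invented purely for illustration; a real system would draw them from an agreed ontology.

```python
# A sketch of the RDF idea: the same price expressed as machine-readable
# statements (triples) rather than as a number on a screen.  The ontology
# namespace and property names are invented for illustration.
MD = "http://example.com/ontology/marketdata#"  # hypothetical ontology namespace

quote = f"{MD}quote/12345"  # an identifier for this particular quote

triples = [
    (quote, f"{MD}price",     "101.25"),
    (quote, f"{MD}priceType", f"{MD}Tradeable"),   # a tradeable price...
    (quote, f"{MD}source",    f"{MD}Brokertec"),   # ...from a source called Brokertec
]

for subject, predicate, obj in triples:
    print(subject, predicate, obj)
```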

Does all this sound like good old elementized digital feeds?  Well, it should, since the concept is not a whole lot different.  What’s different is the standards-based approach that the Web has had as its philosophy from the outset.

And leveraging Web Services will allow for interoperability – one day the Semantic Web could even solve the securities symbology problem – although no doubt the debate will then move to what the ontology fields should be called.  I mean, a world without the FISD symbology working group would surely be too fanciful to consider.

For sure, the Semantic Web is still in its infancy, being pushed to date largely by academics working within the W3C framework.  But already commercial work involving Semantic Web concepts is underway, with startups like Enigmatec, Ontoprise and Imorph doing useful work under the covers (and NDAs).

Take the whole concept of meaningful data, processing interoperability, and business process definition a stage further and one comes to a Star Trek-like world, where computers talk to other computers and do tasks automatically, without requiring any human intervention.

This is the basis for so-called Autonomic Computing – where computer systems – possibly in-vogue Grids of processors running as Blade Servers – determine their own status and move to resolve technical issues (like memory use, disk use and load) before they become a problem.  Computers that don’t go wrong.  Now that would be a result!
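For a very small taste of the autonomic idea, here is a sketch of a process that watches its own disk usage and acts before a limit is hit, rather than waiting for an operator.  The 90% threshold and the ‘corrective action’ are placeholders; a real system might purge caches, archive logs or request more capacity from a grid scheduler.

```python
# A very small taste of the autonomic idea: a process that checks its own
# disk usage and acts before a limit is hit.  The threshold and the
# 'corrective action' are placeholders.
import shutil

def check_disk(path: str = "/", threshold: float = 0.90) -> None:
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    if used_fraction > threshold:
        # A real system might purge caches, archive logs, or request more
        # capacity from a grid scheduler at this point.
        print(f"disk {used_fraction:.0%} full - taking corrective action")
    else:
        print(f"disk {used_fraction:.0%} full - no action needed")

check_disk()
```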

Pete Harris has worked as a technologist in the financial markets for more than 25 years as a software developer, technical project director, journalist and marketing executive. He is also actively engaged in e-Commerce ventures in the independent music community.