
Liquidity and latency: Nanoseconds anyone?

Date: 20 August 2007

Jeff Wells
VP Product Management, Exegy Inc.

Four years ago at the World Financial Information Conference, I pointed out that we were faced with a data deluge or tsunami. This was especially worrying because the growth rate of market data was faster than the likely speed improvement of the commodity Intel microchip. And therefore the question was: how would the markets of the world keep up?

It turns out that the markets have kept up, and then some. The players are different, of course. The old-style membership-club exchanges are going the way of the dodo. The main exchanges are now for-profit, publicly traded, and taking over other venues. They pursue consolidation not only to capture liquidity but also to leverage their capital to the maximum and reinvest in the best possible technology. And the technological impulse is to reach the fastest possible, lowest-latency solution.

In the United States, the NYSE and Archipelago union is exactly about liquidity and latency. Archipelago was well known for its exemplary technology as well as its excellence in the NASDAQ market space. At the same time, NASDAQ is building on the methods and expertise of Instinet/Island technology and is beginning to take on the Big Board in the listed market. The two combinations are almost mirror images of each other.

As the US exchange superpowers battle in the homeland, they also know that they need to outflank their opponents overseas. It will be very interesting to see the cross-fertilisation of technology across the Atlantic.

The traders’ requirement for speed isn’t new, of course. But the absolute numbers and the measurement of the most precious commodity – time – have changed enormously. The ‘latency’ of information has moved from weeks to days, to hours, to seconds, to milliseconds. We are just at the moment in history when microseconds are becoming significant.

When the Persians battled the Athenians in 490 BC, runners carried the news across to the Hellenic city states. Herodotus did not report on market prices in his famous history, but one can only imagine that the price of foodstuffs, metals and loans must have fluctuated enormously, and if you knew the news and best prices first, there were profits to be made.

In 1850 the founder of Reuters was using carrier pigeons to carry news and stock prices between Brussels and Aachen in Germany six hours faster than the railroad. Of course the pigeons were only a stopgap until the telegraph line between the two cities was completed.

In the 1990s, traders in the pits of Chicago and London were the focal point of a long line of pre-trade analysis, and orders were executed as fast as possible. The placement of booths relative to the floor was especially important so that orders could be passed swiftly from order clerk to trader, signalled by hand or by shouting. In retrospect, this too was just an interim step until computers running trading algorithms were linked in a new, speed-of-light web of electronic exchanges and execution venues.

There are still important exchanges with significant human interaction of course, but so many quotes and order books today are held and distributed electronically.

Trading and quoting by computer has brought many benefits in terms of speed of execution, tightness of spreads and transparency. But the counterpoint has been the enormous wave of data pushed out by the bourses as they vie for supremacy as pools of liquidity.

Fortunately, we have been able to ride one continuous wave of computing improvements since around 1990, when the Intel chip and its alliance with Windows brought the benefits of commodity computing to everyone, including the household.

In fact, we are at another inflexion point for the computer, since the enormous volumes of data are well beyond the capability of single machines. Data rates are indeed growing much faster than CPU speeds. The simple solution has been to buy racks of servers and run them in parallel, but this has become increasingly expensive. More importantly, there is a latency toll to pay when the data is spread across so many machines. I’ll explain later how the latest breakthroughs in computing will mitigate this problem.

The quest for zero latency

Nowadays the questions are: what is the latency between machines and data centres? How large are your pipes? How many hops? What is the throughput of your computers? What kind of messaging do you use? What type of routers? And a more fundamental question: where is your data centre? Each of these areas is a specialism even within the information technology field, and very few individuals can successfully navigate all of them. In fact there is now a Chicago-based firm, the Securities Technology Analysis Center, that caters to the tricky business of measuring latency and capacity. (And it is not easy to measure anything denominated in microseconds.) But banks, exchanges and traders do need to know which technologies are fastest and in what combinations they work best. After all, in markets, getting the message to the trading venue second just doesn’t work.

The explosion of data centres and electronic trading itself has been a much more pronounced phenomenon in America where there are many exchanges and alternative trading systems in direct competition with each other under the National Market System. It is highly likely that Europe won’t be far behind, especially given the introduction of the EU’s Markets in Financial Instruments Directive (MiFID). Regardless of where they are, traders quickly figure out how the system works, and always look to profit from their ability to process data faster.

Up until 2002/3 latency was more or less seen as the flip side to capacity. In other words, if data flow went up, then latency was expected to suffer. At that time latency was also still measured in seconds and sometimes in milliseconds. It was a milestone of sorts when Reuters introduced millisecond time stamps on their US stock displays.

Figure 1

An unfortunate consequence of higher volumes is a well-established tradeoff between throughput and latency. This matters because data latency has a huge impact on the overall speed with which a trading firm can execute a transaction in response to new information.
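To make the tradeoff concrete, here is a minimal sketch using a textbook M/M/1 queueing model. The 200,000 messages-per-second service capacity and the load levels are illustrative assumptions, not measurements of any real feed handler; the point is simply that mean latency blows up as arrival rates approach capacity.

    # Illustrative sketch: M/M/1 queueing model of the throughput/latency tradeoff.
    # The service rate and load levels are assumed figures, not measurements.

    def mm1_latency_us(arrival_rate, service_rate):
        """Mean time in system for an M/M/1 queue, in microseconds."""
        if arrival_rate >= service_rate:
            return float("inf")  # queue grows without bound
        return 1_000_000 / (service_rate - arrival_rate)

    SERVICE_RATE = 200_000  # assumed capacity, messages per second
    for load in (0.50, 0.80, 0.95, 0.99):
        arrivals = load * SERVICE_RATE
        print(f"utilisation {load:.0%}: mean latency "
              f"{mm1_latency_us(arrivals, SERVICE_RATE):,.0f} microseconds")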

North American venues still produce the most traffic, but many observers expect MiFID to stimulate a sharp increase in European traffic as the number of trade-reporting venues proliferates. Behind the scenes and on top of this, large sell-side institutions often generate enormous amounts of real-time data internally, which they pump onto their internal market data system. The traffic from internal content sometimes exceeds that of information coming in from external sources.

Since firms can profit from as little as one millisecond of advantage over competitors, they are driven to find sub-millisecond optimisations of the systems fueling their trades. In fact, having got down to ultra low latency levels, some leading firms are reportedly just as focused on reducing the dispersion of latency – that is, increasing the predictability of latency – in order to provide their algorithms with a more stable playing field.
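The dispersion point is easy to illustrate with a toy example: two hypothetical feeds with the same mean latency but very different jitter. The samples below are invented purely for illustration.

    # Two invented latency samples (microseconds) with equal means but unequal jitter.
    import statistics

    feed_a_us = [95, 98, 100, 102, 105]   # tight distribution: predictable
    feed_b_us = [40, 60, 100, 140, 160]   # same mean, wide spread: unpredictable

    for name, samples in (("feed A", feed_a_us), ("feed B", feed_b_us)):
        print(f"{name}: mean {statistics.mean(samples):.0f} us, "
              f"stdev {statistics.pstdev(samples):.0f} us")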

Tsunami or asteroid?

As stated, back in 2003 I likened the data deluge to a tsunami, but with the benefit of hindsight we can see that this was the wrong metaphor. Firstly, the more we look ahead, the more it seems that the wave won’t actually crest. The growth rate of peaks in market data looks likely to remain in excess of 100% per annum for the foreseeable future. Indeed, in 2004 my industry colleague Danny Moore of Wombat Financial Systems was prescient in pointing out that message peaks would likely continue to double every year and that I had been too conservative. And as far as I can tell, Danny Moore’s Law prevails these days. (Gordon Moore’s somewhat more famous pronouncement is, coincidentally, expected to hit a wall imposed by the laws of physics.)

My fear that the computer would not be able to keep up has also proven to be incorrect. Up to this point IT departments have been able to buy large sets of blade servers and room at various data centres around the world, not to mention vast amounts of bandwidth. Cisco, Solace and Tervela have all pioneered important hardware innovations around routing time-sensitive messages more effectively, and 29West has enhanced and re-engineered the messaging itself.

And within the computer itself, electrical engineers and computer scientists have conjured up various novel ways around the CPU bottleneck by developing parallel computing; in the case of FPGA co-processors, the approach is known as ‘massively parallel processing’. My own firm, Exegy, was able to demonstrate a single machine coping with 1 million messages per second at under 150 microseconds of latency at the SIA Management and Technology Conference in June 2006.

So, nearly four years later a lot has changed. People still complain about overall market data rates but there is also a widespread discussion of latency. The data rates are now in the realm of 200,000 messages per second (from the United States). Latency is increasingly expressed in terms of milliseconds and microseconds.

It is easier to see now that once the automation genie was out of the bottle it became inevitable that competition would be based on ultra-low latency. So it is now pretty clear that tsunami was the wrong metaphor, since it implies a return to a previous state after the wave has passed. In fact, as we look at the bigger picture, my guess is that we are right at the take-off point of the information age in the microcosm known as the market data industry. Computers will continue to stimulate the creation of data in mind-boggling volumes, and there just won’t be any let-up in this revolution.

Craving capacity

Just to put this revolution in perspective from a capacity point of view, ten years ago in the US we saw peaks of 900 messages per second from the listed, Nasdaq and equity options markets in total. These peaks generally occurred at the market open. In 2006 we saw high peaks of around 178,000 messages per second. This is a 19,700% increase. But we will shortly look upon 2006 data rates as mundane.

Assuming the now standard 100% annual growth rate continues, in 2007 we’ll see 356,000 messages per second at a US market open. And then, assuming 100% again, we should see 1,424,000 messages per second in 2009 and roughly 5.7 million messages per second in 2011. At this rate we would see some 91 million messages per second in 2015.
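A back-of-the-envelope sketch of that projection, assuming the 2006 peak of 178,000 messages per second simply doubles every year:

    # Project US peak message rates forward from the 2006 figure, assuming
    # 100% annual growth ("Danny Moore's Law"). Purely a compounding exercise.

    peak = 178_000  # observed 2006 peak, messages per second
    for year in range(2007, 2016):
        peak *= 2
        print(f"{year}: ~{peak:,} messages per second at the US open")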

Normally we think of these message rates primarily as a bandwidth problem, but they are also a storage problem. On 20 October 2005 the OPRA (Options Price Reporting Authority) markets pushed out over a billion messages over the course of a trading day for the first time ever, consisting mostly of quotes. At the time of writing, OPRA has said that it will be able to send out up to 6 billion messages over the course of a day in January 2008.

Underlying markets drive derivatives, but the derivatives tail is huge

The underlying US stock markets have also been growing very fast, although the absolute numbers look tame alongside OPRA. Nasdaq and the NYSE/listed markets have been steadily increasing their offerings to meet the challenge of competition in the Reg NMS world. The Consolidated Quotation System, for example, pumps out 10,000 messages per second, NYSE Archipelago 30,000 and NASDAQ’s TotalView 35,000. The NYSE Hybrid system probably has a lot more kick in it yet as the NYSE progressively automates and presumably adopts some of Archipelago’s technology.

In the field of bandwidth itself, OPRA tends to set the bar. Because of this the options market unfairly gets all the headlines for market data output in absolute terms. In fact, its rate of growth has been in lockstep with the underlying market for the last couple of years.

It is worth explaining why the options markets are so voluminous. The US equity options market, at the time of writing, has about 766,000 extant contracts (counting listings on each exchange separately). So it is hardly surprising that when the underlying market opens afresh each day there is a very large blip of quote updates. In January 2008 OPRA will be capable of producing 716,000 messages per second. Direct feed recipients will need to provide for 449 megabits per second, including a 10% quota for retransmissions. An OC-12 data pipe will be required to receive all the data.

Figure 2

Source: Financial Information Forum
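A rough sanity check of those OPRA sizing figures: the implied average message size below is simply backed out from the published numbers, and the 622 Mbit/s OC-12 line rate is the nominal figure, so treat the arithmetic as illustrative rather than an OPRA specification.

    # Back out the implied average message size from the OPRA capacity figures
    # quoted above, and check the allocation against a nominal OC-12 line rate.

    PEAK_RATE = 716_000       # messages per second (January 2008 capacity)
    ALLOCATED_MBPS = 449      # megabits per second, including retransmissions
    RETRANS_QUOTA = 0.10      # 10% allowance for retransmissions
    OC12_MBPS = 622           # nominal OC-12 line rate, megabits per second

    base_mbps = ALLOCATED_MBPS / (1 + RETRANS_QUOTA)
    bytes_per_msg = base_mbps * 1_000_000 / 8 / PEAK_RATE
    print(f"implied average message size: ~{bytes_per_msg:.0f} bytes")
    print(f"headroom on an OC-12: ~{OC12_MBPS - ALLOCATED_MBPS} Mbit/s")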

And where is your data centre, sir?

With such large bandwidth demands, these are good times for the likes of Savvis and BT Radianz as well as players such as Yipes and Colt. For Savvis and BT Radianz, and perhaps Orange and Deutsche Telekom, the fact that their data centres either house or sit in physical proximity to the big execution venues has become an important selling point. Indeed, behind the scenes there has already been a gold rush for data centre rack space. And nirvana for algorithmic trading engines is the place with the fewest router hops to the best venues.

In this regard, both Europe and Asia are very different from the US. In America the clustering is very much in the New York area, although Chicago is also significant. But in the old world and the ‘new east’, liquidity centres are far apart, and hyper-pools and low-latency data zones have not yet formed.

It didn’t happen overnight in the US. When Nasdaq first brought the stock trading world into the electronic era by allowing remote market making, the market makers were spread across the United States. Firms like Montgomery in San Francisco developed strong relationships with start-ups in Silicon Valley, brought them to IPO and naturally made markets in them. The screens were bulletin boards, driven by actual traders changing their positions, mostly manually. They telephoned each other to deal. But today the speed of light is a barrier to trading firms based in San Francisco. It is more difficult to be effective when you are not first, so the machinery driving the US stock markets is now clustered more tightly around New York. It is 2,900 miles from New York to San Francisco, and light travels at roughly 186,000 miles per second, so it takes at least 15 milliseconds to get an order or quote across the continent. And that is too long.
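The arithmetic behind that claim, with the distance converted to kilometres and an assumed slowdown factor for light travelling through optical fibre rather than a vacuum:

    # One-way New York to San Francisco latency at the speed of light, plus a
    # rough figure for optical fibre. Distance and slowdown factor are assumptions.

    SPEED_OF_LIGHT_KM_S = 299_792   # kilometres per second, in a vacuum
    NY_TO_SF_KM = 2_900 * 1.609     # ~2,900 miles converted to kilometres
    FIBRE_SLOWDOWN = 1.5            # light in fibre travels at roughly c / 1.5

    one_way_vacuum_ms = NY_TO_SF_KM / SPEED_OF_LIGHT_KM_S * 1_000
    one_way_fibre_ms = one_way_vacuum_ms * FIBRE_SLOWDOWN
    print(f"one way at c:     ~{one_way_vacuum_ms:.1f} ms")
    print(f"one way in fibre: ~{one_way_fibre_ms:.1f} ms")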

Not to be left behind, the European regulators have created the ideal conditions for the creation of even more data by introducing MiFID. Traders and financial institutions will be publishing much more than hitherto as OTC trading comes into the public domain. They are also obliged to store all contextual data such as email, instant messaging and even voice interactions with clients in certain cases. Regulators in the Netherlands, the UK and Scandinavia will quite likely take a dim view of financial institutions unable to archive and effectively retrieve the relevant records.

High performance computing comes to Wall Street

Because of the enormous demands, not to mention the deeper pockets of the successful market participants, the latest computer platform technology is already entering the mainstream. Hedge funds and proprietary trading desks are the fastest adopters, but exchanges and vendors are not far behind.

Hardware engineers are currently working on multi-core processors, parallel processing, horizontal scaling and the like. As they do so, there are new demands on software to take advantage of these architectural shifts. This in turn presents some tough questions for CTOs in the exchange, data vendor and trading communities. They will have to choose technologies that are robust, economical and likely to enjoy a reasonable shelf life. And they have to make those decisions fast, lest they be overtaken by the competition.

Interestingly enough, these new technologies are already available, because advanced computer science research anticipated just this sort of problem a few years back. Indeed, with the right engineering skills, core business logic can be embedded at the hardware level to take full advantage of the latest chip sets. Furthermore, vast amounts of data can be accessed for very high-level analytics on a single appliance taking in millions of messages per second. So you can have your cake and eat it too.

The idea is that instead of feeding each computer instruction in sequence into a single processor, instructions can be executed simultaneously, in parallel. Computer scientists and engineers use the term ‘massive parallelism’ to describe the approach. So, effectively, you have supercomputing power in one machine.
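As a toy software analogy (and only an analogy: it illustrates data parallelism on ordinary server cores, not the FPGA approach itself), the sketch below normalises a batch of invented quote messages once sequentially and once across eight worker processes, producing identical results:

    # Toy analogy for parallel processing: normalise the same batch of invented
    # quote strings sequentially and in parallel, and check the results match.

    from multiprocessing import Pool

    def normalise(raw):
        """Pretend feed-handler work: split a raw quote string into a dict."""
        symbol, bid, ask = raw.split(",")
        return {"symbol": symbol, "bid": float(bid), "ask": float(ask)}

    if __name__ == "__main__":
        messages = [f"SYM{i % 500},{100 + i % 7}.25,{100 + i % 7}.27"
                    for i in range(200_000)]

        sequential = [normalise(m) for m in messages]     # one instruction stream

        with Pool(processes=8) as pool:                   # eight streams at once
            parallel = pool.map(normalise, messages, chunksize=5_000)

        assert sequential == parallel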

It sounds like a godsend, of course, but there are important caveats. Computer code has normally been written in C-based languages; certainly the overwhelming body of work in the financial world is in C and its variants. Unfortunately the C language presupposes a sequential execution model, and therefore existing code bases running to many millions of lines cannot simply be ported over to run faster on the new machines.

There are firms such as DRC Computer that specialise in plug-in co-processors, in this case to support the AMD Opteron chip. It seems highly likely that these will become very popular. And XtremeData Inc. announced on 17 April 2007 a similar add-on for Intel chips. Both co-processors will likely offer speed enhancements to existing code but do not take full advantage of potential parallelism straight out of the box.

At present there are no automated tools that allow for the examination of code and its recompilation to take full advantage of the latest hardware. But there are companies with the market domain knowledge and hardware and software skills. These firms will likely focus on parts of the trade cycle that will yield profit to those who need to be first and who can’t afford to be second.

In conclusion, it seems that with Reg NMS, MiFID and penny pricing of options all hitting the ground within a 12-month period, CEOs and CTOs should probably be thinking in terms of asteroids, not tsunamis. Yes, there is a data deluge, but there are also entirely new ways of dealing with the data that will render the old methods of architecting for this problem obsolete. Indeed, the technology and business leadership will have to be bold in order to take full advantage; but the good news is that the choices are available.

In the meantime, please accept my apologies for leading anyone down the wrong path with the tsunami concept. And don’t forget to look out for that asteroid.

In addition to his role with Exegy Inc, Jeff Wells is co-chair of the FIF Market Data Capacity Working Group in St Louis.