Wednesday, 16 November 2016

Crazy CAT approved by SEC

Another crazy cat (source)
On the 15th of November, yesterday, the SEC finalised and released its nine-hundred-and-seventy-nine-page Order approving the Consolidated Audit Trail (CAT) Plan. Important context: the CAT Plan was "inspired" somewhat by the so-called Flash Crash of May 6, 2010. You no doubt know the old, almost-true adage: if you can't measure, you can't manage. The CAT Plan is about addressing the measuring, as otherwise the regulators can't manage. Sounds reasonable.

Here is the press release from the SEC, and here is the final report considering the comments and meetings around the SEC's RFC. The Financial Times covered the CAT Plan Order with brief comments from Larry Tabb and Dave Lauer.

My thoughts? In summary, it's absurd.

Let's meander through why someone of my lack of stature may misinterpret such good intentions.

At a high level, would it allow a proper and thorough analysis of the May 2010 Flash Crash? No.

Why not? It doesn't cover the necessary financial instruments of concern that day. Even if the CAT did cover the necessary instruments, the plan does not have enough determinism in its event ordering to be able to reason about a system where a million events a second is not uncommon.

Generously you may think, well, at least they are trying and it is at least a start? That may be a nice, homely sentiment but the fact is that the predicted costs are kind of OUTRAGEOUS for the lack of return. Back in 2014 Scott Patterson and Bradley Hope, then of the WSJ, reported,
"The project is expected to cost between $350 million and $1 billion, according to people familiar with the details."
From the CAT web site, apart from a decidedly disappointing lack of cat pictures, there is an indicative plan for the annual revenue to be charged to venues and broker/dealers, which I summarise in the following table:


Venue tier         # Firms   Min fee   Max fee   Extended min   Extended max
Equity venue   T1       14     140.0     240.0        1,960.0        3,360.0
               T2       39     100.0     170.0        3,900.0        6,630.0
Option venue   T1        9     180.0     290.0        1,620.0        2,610.0
               T2        3     130.0     220.0          390.0          660.0
Broker/dealer  T1       55     176.0     300.0        9,680.0       16,500.0
               T2       55      90.0     150.0        4,950.0        8,250.0
               T3       90      40.0      70.0        3,600.0        6,300.0
               T4      125      30.0      50.0        3,750.0        6,250.0
               T5      300       5.0       8.0        1,500.0        2,400.0
               T6      360       1.0       2.0          360.0          720.0
               T7      815       0.2       0.3          163.0          244.5
Total annual fee range                             $ 31,873.0     $ 53,924.5
*all $ figures in thousands
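The arithmetic is simple enough to check in a few lines of Python: each extended figure is just the number of firms multiplied by the per-firm fee, and the totals sum the extended columns. A minimal sketch, with the tier data transcribed from the table above:

```python
# Sanity check of the fee table arithmetic: extended = (# firms x fee),
# totals sum the extended columns. Figures in thousands of dollars,
# as per the table footnote.
tiers = [
    # (label,            firms, min_fee, max_fee)
    ("Equity venue T1",     14,   140.0,  240.0),
    ("Equity venue T2",     39,   100.0,  170.0),
    ("Option venue T1",      9,   180.0,  290.0),
    ("Option venue T2",      3,   130.0,  220.0),
    ("Broker/dealer T1",    55,   176.0,  300.0),
    ("Broker/dealer T2",    55,    90.0,  150.0),
    ("Broker/dealer T3",    90,    40.0,   70.0),
    ("Broker/dealer T4",   125,    30.0,   50.0),
    ("Broker/dealer T5",   300,     5.0,    8.0),
    ("Broker/dealer T6",   360,     1.0,    2.0),
    ("Broker/dealer T7",   815,     0.2,    0.3),
]
total_min = sum(firms * lo for _, firms, lo, _ in tiers)
total_max = sum(firms * hi for _, firms, _, hi in tiers)
print(f"${total_min:,.1f}k - ${total_max:,.1f}k per year")
# -> $31,873.0k - $53,924.5k, i.e. roughly $32M-$54M annually
```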






So, for $300M to $1,000M in up-front costs, plus recurring costs of $30-60M a year, you get a decrepit, one-eyed king with a drunk stagger, stagger, roll in the land of blind regulators. Sadly, it is your typical racially-neutral, non-specifically gendered elephant designed by a committee which was distractedly thinking about bike sheds. Considering the stupidity and the scale of the proposed plan, the costs are actually not that outrageous. A plan that might actually work and yield beneficial outcomes comes out, on the back of my envelope, at around $50M up front with around $5M a year in recurring costs. I guess that is why vendors love committees. Meet the spec and bank the money.

Quite a nice bike shed. (source)
I'm not quite sure who at the SEC quoted these costs to the FT in the aforementioned article,
"The SEC estimates the system will cost $2.4bn initially and then $1.7bn a year to run."
$1.7B a year? That seems a little crazy, like a CAT. Surely not?

Most of the burden falls on market participants rather than being direct costs to the Plan operator or SEC. From the KCG comment letter,
source: KCG comment letter
So, the SEC thinks $4.1 billion for set-up and the first year of operation. Crazy CAT.

So what is wrong with the plan? I'm not sure I'm qualified to address all the bits and pieces, as I haven't thoroughly read the thousands of pages of proposals and submissions, but hey, this is the Internet. Here are three bits I don't much like, which the SEC did indeed consider in some detail.

Coverage


Early in the morning of the Flash Crash on the 6th of May 2010, there was pressure coming into the market from many fronts. The pressure in the foreign exchange markets, particularly between the USD and JPY, was noted by many, including the SEC & CFTC, along with premium spikes in CDS, Greek sovereign risk in particular. An execution of 75,000 E-minis on CME pinned on Waddell & Reed's volume participation algo, plus spoofing by the likes of Navinder Singh Sarao, copped some curious post-hoc blame from authorities.

Are you with me? Yeah, you got it. Not one of those instruments is covered by the NMS CAT Plan. CDS, bonds, FX, futures... none of them. Certainly chaos in equity and option markets ensued, but the pressures originated elsewhere. This is yet another reason why the CFTC and SEC should be combined.

The SEC's final Nov 15 report covered coverage from page 341 onward, after a Primary Market Transaction segue,
"Rule 613 and the CAT NMS Plan do not require the reporting of audit trail data on the trading of futures. One commenter, noting that the CAT NMS Plan does not require any information about stock index futures or options on index futures, stated that incorporating futures data into CAT would “create a more comprehensive audit trail, which would further enhance the SROs’ and Commission’s surveillance programs.”
As noted above, the Participants, within six months of the CAT NMS Plan’s approval by the Commission, will provide the Discussion Document that will include a discussion of how additional securities and transactions could be incorporated into CAT. In their response, the Participants recognized that “the reporting of additional asset classes and types of transactions is important for cross-market surveillance.” Further, the Participants stated their belief that the Commission also recognizes “the importance of gradually expanding the scope of the CAT,” and cited the Adopting Release, wherein the Commission directed the Commission Staff “to work with the SROs, the CFTC staff, and other regulators and market participants to determine how other asset classes, such as futures, might be added to the consolidated audit trail.” Accordingly, the Participants stated that they intend to assess whether it would be appropriate to expand the scope of the CAT to include futures, at a later date."

The SEC concluded, importantly, on page 342,
"The Commission believes that the omission of futures data from the CAT NMS Plan is reasonable, particularly in light of limitations on the Commission’s jurisdiction."
Ugh.

Appropriately time-stamped CFTC venues, for products such as listed futures, swaps, and options, should be captured, along with a significant portion of OTC deals, including bonds, to get an appropriate global market vista. A regulator should have the power either to tap the same sources of data as market data vendors, or to regulate that time-stamped data be provided to the Plan for non-commercial purposes.

It's not rocket science. Crazy CAT.

Error rate


The SEC decided an error rate of 5% was a good enough target. If you were having a coffee with your data person and she reported that only 5% of your data was corrupt, you'd be in the unfortunate position of paying for the dry cleaning of her shirt that you just spat your coffee on.

From the press release,
"The CAT NMS plan would set an initial maximum error rate of five percent for data reported to the central repository, subject to quality assurance testing, adjustments at each initial launch date for CAT reporters and periodic review by the operating committee.  The CAT NMS plan also discusses a phased approach to lowering the maximum error rate for data reported to the central repository."
The mind boggles. The insomniacs amongst you can read some of the discussion around this from page 342 of the SEC Order, such as page 347's,
"The Commission believes that the proposed 5% initial maximum Error Rate is reasonable and strikes an appropriate balance between: (1) ensuring that the initial submissions to the Central Repository by CAT Reporters are sufficiently accurate for regulatory use; and (2) providing CAT Reporters with time to adjust to the new more comprehensive regulatory reporting mechanism. The Commission understands that the Participants considered relevant historical information related to OATS reporting error rates, particularly when new reporting requirements were introduced, and believes this is a reasonable basis for setting the initial maximum Error Rates for CAT Data."
Put your hand up if you think a maximum 5% error rate is sufficient for regulatory use? Yeah, neither do I. The market ticks less than 5% of the time, so an error on every single market tick would still be within budget under the CAT Plan.
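To put the budget in concrete terms, here is a trivial back-of-the-envelope calculation in Python; the event rate and session length are my illustrative assumptions, not figures from the Plan.

```python
# Back-of-the-envelope: what does a 5% error budget permit on a busy day?
# Assumed figures for illustration only.
events_per_second = 1_000_000          # a busy, but not unusual, second
trading_seconds   = 6.5 * 3600         # a regular NMS session
max_error_rate    = 0.05               # the CAT's initial maximum

events = events_per_second * trading_seconds
print(f"{max_error_rate * events:,.0f} erroneous events may pass muster")
# -> 1,170,000,000 erroneous events in a single day would still comply
```

Crazy CAT.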


Determinism through timing



One of the objectives of the CAT is to provide some analytical capability to examine significant NMS events. This implies causal relationships being searched for. To determine causes, as such, you would prefer causal ordering, otherwise known as virtual synchrony, across the system, which is impossible at this scale. So you start from the point of view that you cannot have systematic determinism, but you'd like to get as close to it as is viable, with reasonable cost effectiveness in mind.

One way of doing that is to have accurate time-stamps across the entire system so you can put everything in the right order. That is messier than it seems. Given you might have 1-10M events in a not so unusual second, you may think you can order things with a corresponding 1 or 0.1 microsecond time-stamp. That is not really the case. Micro-bursts are a big feature of markets, where low latency switches queue up competing packets that arrive in the same nanosecond or multi-nanosecond time-slice. What you do know, however, is the processing order the matching engines deal with, and this gives you the determinism you need if you have a valid order processing model for an exchange.
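A minimal sketch of the point, using illustrative field names rather than anything from the CAT spec: three events landing in the same time-slice cannot be ordered by timestamp alone, but a venue's matching-engine sequence number makes the ordering deterministic.

```python
# Why timestamps alone cannot order a micro-burst: three orders arrive
# within the same time-slice, so sorting on the timestamp is ambiguous,
# while the venue sequence number is not. Field names are illustrative.
events = [
    {"ts_ns": 1_478_972_000_000_000_100, "seq": 3, "order": "C"},
    {"ts_ns": 1_478_972_000_000_000_100, "seq": 1, "order": "A"},
    {"ts_ns": 1_478_972_000_000_000_100, "seq": 2, "order": "B"},
]

by_time  = sorted(events, key=lambda e: e["ts_ns"])              # tie: undefined
by_venue = sorted(events, key=lambda e: (e["ts_ns"], e["seq"]))  # deterministic

print([e["order"] for e in by_time])   # stable sort keeps input order: C, A, B
print([e["order"] for e in by_venue])  # always A, B, C
```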

It is important to consider accuracy and precision. You must draw on the distinction between the two concepts and note that neither is the same as significant digits or granularity in a representation. The precision within a venue will allow you to order events better at that venue. At a little tech start-up I used to run, the team developed a timing precision of around 1-20 picoseconds, that is, under 0.00002 microseconds, with relatively inexpensive off-the-shelf FPGA hardware. Part of that was the Picosecond Over Ethernet Timing (POET) project, which was demo'd on 1G Ethernet. It was fun stuff where we could pour cold water over an optical fibre and reliably record the corresponding physical shrinkage via timing differences. This is similar in many ways to the independent CERN White Rabbit project. The point being, it is not so expensive to do extremely precise timing anymore, though the CAT doesn't need it. A relatively inexpensive off-the-shelf solution lets you timestamp many comms links with 1 nanosecond precision and only a five nanosecond bump in the wire. Five nanoseconds is not too much to impose in overhead. Technology for precision has moved on, and it is not outrageously expensive.
Accuracy and precision

Accuracy is a harder proposition. The specification for GPS is 100ns of accuracy. That is a bound, with reality being a lot better in practice. Even sub-nanosecond accuracy is possible with careful statistical interpretation of GPS. Such statistical interpretation is the kind of algorithmic approach used for measuring precise distances in infrastructure movement surveys, such as dam walls. Xiaoguang Luo's excellent PhD research, "GPS Stochastic Modelling: Signal Quality Measures and ARMA Processes", uses ARMA modelling to effectively handle site-specific multipath effects, satellite geometry, and variable atmospheric conditions, and is also a good reference for many existing state-of-the-art GPS tuning adjustments - well worth a read.
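As a toy illustration of the statistical angle, and only that: averaging many independent GPS pulse-per-second offset readings shrinks the error roughly as one over the square root of the sample count. The real gains in work like Luo's come from modelling multipath and atmospherics, not naive averaging, and the numbers below are invented for the sketch.

```python
# Toy illustration: averaging N independent pulse-per-second offset
# samples shrinks the error roughly as 1/sqrt(N). Figures are invented.
import random

true_offset_ns = 12.0     # hypothetical fixed clock offset
sigma_ns = 30.0           # single-shot GPS jitter, illustrative
samples = [random.gauss(true_offset_ns, sigma_ns) for _ in range(10_000)]
estimate = sum(samples) / len(samples)
print(f"single shot ~ +/-{sigma_ns:.0f}ns, averaged estimate {estimate:.1f}ns")
# ~0.3ns standard error after 10,000 seconds of PPS samples
```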

EndRun CDMA timer - accuracy ~10 microseconds
If you can't use GPS directly, then you can pilfer a GPS measurement embedded in a mobile phone network's Code Division Multiple Access (CDMA) signals in various countries, such as the US, Japan, and Korea. The CDMA phone protocol uses an embedded GPS time signal for its timing magic. EndRun Technologies has a device, which I have used in a few countries, to get accurate enough time stamps. No, you don't need to pay to subscribe to the network. The GPS numbers are floating about in the Ether for you. The CDMA standard says things should be good to seven microseconds of accuracy,
IS-95 requires that the frequency and epoch are synchronous (epoch usually derived by simply counting down the frequency), and that the epoch be no more than 7 microseconds in error relative to UTC time (sometimes GPS time is specified)
In Korea, one microsecond was achieved for me, and it was around six microseconds in Canada, from my hazy memory. This is very convenient if your mobile phone works in a data centre, especially if roof or window access for GPS is impossible, or prohibitively expensive, for the additional accuracy.

The point of this meandering about timing is that it is pretty easy to time-stamp packets to 10 microseconds of accuracy in the US. If you mandate GPS and hardware, then 100ns of accuracy is trivial and not outrageously expensive. You can go below 1ns of accuracy, but that is not trivial.

MetaMux 48 - 1ns precision, 5ns tap bump in the wire latency
Precision-wise, with hardware, 20 nanoseconds of precision in time-stamping is normal, with one nanosecond of precision being good (such as with a Metamako MetaConnect/Mux/App), and less than 0.02 nanoseconds of precision is not so hard, and a lot of fun, if you want to set yourself a stretch goal.

For the CAT, the SEC talks somewhat imprecisely about timing with three numbers: 100 microseconds of accuracy for the CAT Participants, think stock and option venues; 50,000 microseconds for brokers and dealers; and 1,000,000 microseconds, yeah, a whole second, for manually entered orders. The SEC refers to the precision of Participants' time-stamps being at least 1,000 microseconds, or better if it is convenient to co-opt better internal precision. That is kind of arse-about relative to accuracy: 100,000 nanoseconds of accuracy paired with 1,000,000 nanoseconds of precision. It shows an unfortunate lack of understanding of the problem domain. You should probably do better if you're thinking of spending a billion dollars.
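The arse-about nature of those numbers is easy to demonstrate: a hypothetical event stamped by a clock accurate to 100 microseconds, but reported at 1 millisecond granularity, picks up quantisation error up to ten times larger than the clock error. The figures below are illustrative only.

```python
# Why reporting precision coarser than clock accuracy is arse-about:
# a clock good to 100 microseconds, reported at 1 millisecond granularity,
# throws away a decimal digit of the information you paid for.
accuracy_us = 100          # participant clock sync requirement
report_us   = 1_000        # minimum reporting granularity (1ms)

t_true_us = 34_123_456_789                  # event time in us, illustrative
t_reported = (t_true_us // report_us) * report_us
print(f"quantisation error: {t_true_us - t_reported}us "
      f"vs clock error <= {accuracy_us}us")
# The reporting step alone can add up to 999us of error -- an order of
# magnitude worse than the mandated clock accuracy.
```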

If your impressive stamina has led you to still be reading, you can find the details of the SEC's thinking from page 360 onward,
"The Participants, however, represented that they all currently operate pursuant to a clock synchronization standard that is within 100 microseconds of the time maintained by NIST, at least with respect to their electronic systems. Accordingly, the Participants recommended that the Commission amend the Plan to require that Participants adhere to the 100 microsecond standard of clock synchronization with regard to their electronic systems, but not their manual systems, such as the manual systems operated on the trading floor, manual order entry devices, and certain other systems."
100 microseconds of accuracy is not unreasonable but a little disappointing for stock and option exchanges, given that 100 nanoseconds of accuracy is reasonably trivial. As the famous clip from Grace Hopper below shows, microseconds and nanoseconds are a little different to each other. Even an Admiral or General could understand the difference with a little grace from Grace. So should an SEC decision maker.



However, continuing on to page 365,
"For the initial implementation of the CAT, however, the Commission believes a 50millisecond clock synchronization standard for Industry Members is reasonable at this time. While the Commission believes that regulators’ ability to sequence orders accurately in certain cases could improve if the clock synchronization for Industry Members were finer, the Commission is sensitive to the costs associated with requiring a finer clock synchronization for Industry Members at this time, and believes that a standard of 50 milliseconds for Industry Members will allow regulators to sequence orders and events with a level of accuracy that is acceptable for the initial phases of CAT reporting."
That is, for Industry Members, 50 milliseconds, or 50,000,000 nanoseconds, is OK. Such a long time seems a cruel joke. A message can go from Boston to LA and back in less time. Good luck with your NMS determinism now. Fifty milliseconds may represent on the order of a million reportable events for you to untangle. The SEC's confidence in sequence ordering seems a little misplaced. That said, large swathes of events will be reasonably ordered, as they not only fall under the 100 microseconds of accuracy banner, but the precision and expected venue-oriented monotonic symbol ordering should provide some descrambling comfort, especially with respect to exchanges that use Nasdaq systems, which tend to have nanosecond precision on their time-stamps. So, it is pretty bad, though not quite as diabolical as it seems. Just a bit diabolical.
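For a sense of scale, a couple of lines of arithmetic; the distance and event-rate figures are rough, illustrative assumptions:

```python
# Rough context for a 50ms synchronisation window. Figures approximate.
fibre_km_per_ms = 200          # light in fibre: ~200,000 km/s
boston_la_km = 4_200           # approximate great-circle distance
round_trip_ms = 2 * boston_la_km / fibre_km_per_ms
print(f"Boston-LA round trip in fibre: ~{round_trip_ms:.0f}ms")  # ~42ms

events_per_ms = 20_000         # a busy market moment, illustrative
print(f"events inside a 50ms window: ~{50 * events_per_ms:,}")   # ~1,000,000
```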

The other point is that those manual orders with one-second accuracy should also be caught as electronic transactions for the pitiless, pit-less communities. So, that is not so bad either, if they can be linked up.

The SEC Plan refers to something approximating, but not quite, precision as "Timestamp Granularity" from page 369,
"Specifically, the Plan requires CAT Reporters to record and report the time of each Reportable Event using timestamps reflecting current industry standards (which must be at least to the millisecond) or, if a CAT Reporter uses timestamps in increments finer than milliseconds, such provides that such events must be recorded in increments up to and including one second, provided that CAT Reporters record and report the time the event is captured electronically in an order handling and execution system (“Electronic Capture Time”) in milliseconds (“Manual Order Event Approach”)." 
Resolution of representation and precision are being confused here. Precision of one millisecond, or 1,000,000 nanoseconds, is a bit of a joke, though some exchanges, such as Nasdaq, will be obliged to report improved time-stamps - except when they might not have to, if they can argue an undue burden by waving their hands in the direction of their order handling and execution systems. Here is the clarification from page 374,
"In response to the commenters that stated it would be costly for CAT Reporters to report using timestamps to the same granularity they use in their normal practice, the Commission believes it is appropriate to make a clarifying change to the Plan. The CAT NMS Plan provides that to the extent that any CAT Reporter utilizes timestamps in increments finer than one millisecond such CAT Reporter must utilize such finer increment when reporting CAT Data to the Central Repository. Rule 613(d)(3), however, required that a finer increment must be used only to the extent that “the relevant order handling and execution systems of any CAT Reporter utilizes timestamps finer that a millisecond.” Accordingly, the Commission is
amending Section 6.8(b) of the Plan to limit the circumstances in which a CAT Reporter must report using an increment finer than a millisecond to when a CAT Reporter utilizes a finer increment for its order handling and execution systems. The Commission finds that, this modification is appropriate in light of the increased burdens placed on CAT Reporters by the additional systems changes that would otherwise be required in order to report in finer increments. With this modification, reporting in a finer increment than a millisecond would not be a costly undertaking, and the Commission therefore believes that this approach will improve the accuracy of order event records, particularly those occurring rapidly across multiple markets, without imposing undue burdens on market participants."
This is all a bit silly. It would not have been unreasonable to stick with UTC time-stamps at nanosecond granularity and a precision of 20 nanoseconds, with venue monotonic ordering guarantees, for at least up to Tier #2, though it should be at least Tier #4. The accuracy for such venues or institutions should be GPS-like, of the order of 100ns, which is relatively trivial. Just put a hardware time-stamper with GPS sync in your processing stream on the network and post-process it into the correctly CAT Plan formatted file for overnight transfer.

It would have also been useful to add embedded timing traces to the inter-party protocols, over time, so that ordering between sources and destinations could be better managed, leading to a much improved picture of determinism. In that regard, the CAT should at least capture one-second-oriented timing bounces between parties, including the CAT data centre, so that time-stamp reconciliations and error corrections are possible, along with clock accuracy and precision audits.
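Such a bounce is just the classic NTP-style exchange of four timestamps. The CAT defines no such protocol; this sketch shows what the reconciliation arithmetic might look like, with invented example numbers.

```python
# NTP-style timing bounce: each side stamps send and receive times with
# its own clock, which yields both the clock offset and the round trip.
def bounce(t1, t2, t3, t4):
    """t1: client send, t2: server receive, t3: server send,
    t4: client receive (all ns, each party's own clock)."""
    offset_ns = ((t2 - t1) + (t3 - t4)) / 2   # server clock minus client clock
    rtt_ns = (t4 - t1) - (t3 - t2)            # network round trip
    return offset_ns, rtt_ns

# Example: client clock 250us behind the CAT data centre, 100us each way.
offset, rtt = bounce(t1=1_000_000, t2=1_350_000, t3=1_400_000, t4=1_250_000)
print(f"offset {offset / 1e3:.0f}us, rtt {rtt / 1e3:.0f}us")  # 250us, 200us
```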

For the CAT Plan, the focus should have been on determinism in the NMS processes to the best possible extent. Much of that may have been solved by just adding timing at the trade matching pools or processes, rather than bothering brokers or traders with additional costs. Embedded sequencing on client messages would quite simply tie client events together for NMS purposes. That is, I'm unconvinced that adding timing to all the NMS clients is indeed useful for NMS purposes. It is undoubtedly useful for broker or institutional client handling audits, but the two different audit concepts should not be confounded.

There is so much improvement that could have been achieved with a more thoughtful approach. If you have deterministic ordering at a venue and appropriate modelling of the order handling, then the outputs of the models, the quotes and transactions flying out, should be reproducible. Only excellent exchanges do this, so many exchanges will not have such determinism baked into their systems, but it should be a goal for the SEC to drive that modelling through as much as possible for the best possible outcome. Perhaps the SEC should even mandate that a simple algorithmic model for new order types be available publicly? That would be a real boon to the industry, especially in understanding the hundreds of obscure order types and their intricate rules. Personally, I'd prefer it if they'd ban everything except simple limit orders ;-) Mortal human beings have a hard time understanding the couple of hundred order types that exist across all trading venues. If you think this article is long and boring, go read all the order type specifications and I'll visit you in your padded market structure institutional cell.
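To make the reproducibility point concrete, here is a toy price-time priority matcher. It is a sketch, not any venue's actual rules: given the same venue-sequenced input, the output tape is identical every time, which is exactly the property a CAT replay would want - and it only handles the simple limit orders I'd prefer venues were restricted to.

```python
# Toy price-time priority matching model: deterministic by construction.
# Same venue-sequenced input always yields the same trade tape.
def match(orders):
    """orders: list of (side, price, qty) in venue sequence order."""
    book = {"B": [], "S": []}   # resting (price, qty) orders per side
    tape = []
    for side, price, qty in orders:
        opp = "S" if side == "B" else "B"
        crosses = (lambda p, q: p >= q) if side == "B" else (lambda p, q: p <= q)
        while qty and book[opp]:
            # Best opposite order first: lowest ask / highest bid. The
            # stable sort keeps arrival order within a price level.
            book[opp].sort(key=lambda o: o[0], reverse=(opp == "B"))
            best_price, best_qty = book[opp][0]
            if not crosses(price, best_price):
                break
            fill = min(qty, best_qty)
            tape.append((best_price, fill))
            qty -= fill
            if fill == best_qty:
                book[opp].pop(0)
            else:
                book[opp][0] = (best_price, best_qty - fill)
        if qty:
            book[side].append((price, qty))
    return tape

orders = [("B", 100.0, 5), ("S", 101.0, 5), ("S", 100.0, 3), ("B", 101.0, 7)]
assert match(orders) == match(orders)   # reproducible, always
print(match(orders))                    # [(100.0, 3), (101.0, 5)]
```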

Crazy CAT.

Conclusion


The bottom line is that the SEC seems to have lacked the expertise to push through a proper mandate to build a truly useful CAT Plan. I certainly feel they have been bushwhacked a little by an industry committed to making the CAT contain as little value as it can get away with. The CAT Plan could have been much cheaper, simpler, and better. As it stands, it would have helped a little, but not much, with the 2010 Flash Crash, and it is of questionable value to the industry given its expected costs.

As an HFT type, I'm glad there will be less understanding, as that means profitable opportunities are more likely to persist for longer. Understanding comes to those who pay attention to the details. A lack of understanding in a crisis is the unfortunate, cynical reality of this CAT Plan. For the industry, and the good of the USA, it seems that a great opportunity for NMS insight and improvement has been missed.

Happy trading,

--Matt.
