A lot of energy has been put into finding novel data sets that might give investors an edge. But a sometimes overlooked area is the use of asset managers’ own data sets to reduce trading costs. This has become a source of alpha for some firms.
There have been significant recent advances in the sophistication of data analysis techniques; applied to trading data in a systematic, scientific way, they often identify substantial cost-saving opportunities. If an asset manager is not collecting all of its trading and execution data across all asset classes, it is doing itself a huge disservice. Every time a request-for-quote is submitted to a dealer, and a piece of data comes back that isn’t recorded, the firm has lost valuable information.
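As an illustration of capturing every dealer response to a request-for-quote, here is a minimal Python sketch. The record fields and the flat-file store are assumptions for illustration only; a production system would write to a proper database.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import csv

# Illustrative record of a single dealer response to a request-for-quote.
# The fields are assumptions; the point is to persist every response,
# including quotes that were not traded on.
@dataclass
class RfqResponse:
    rfq_id: str
    instrument: str
    dealer: str
    side: str              # "buy" or "sell", from the asset manager's side
    quantity: float
    quoted_price: float
    responded_at: str      # ISO-8601 timestamp
    traded: bool           # False for passed/losing quotes -- still valuable data

def record_response(resp: RfqResponse, path: str = "rfq_log.csv") -> None:
    """Append one dealer response to a flat-file log (a stand-in for a real store)."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(resp)))
        if f.tell() == 0:  # empty file: write the header first
            writer.writeheader()
        writer.writerow(asdict(resp))

record_response(RfqResponse(
    rfq_id="RFQ-1001", instrument="XS0000000000", dealer="Dealer A",
    side="buy", quantity=5_000_000, quoted_price=99.42,
    responded_at=datetime.now(timezone.utc).isoformat(), traded=False,
))
```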
Is this only for ‘quant funds’? No. What was once the purview of the most sophisticated quants is now the baseline. Today, nearly all asset managers have the opportunity to drill into their trading data and identify material cost savings.
But quantitative analysis is essential. It is the only way to understand the increasingly complex and opaque order routing systems used today.
Aren’t firms doing this anyway as part of their MiFID II obligations (in the EU) and ‘best execution’ (in the US)? Many firms across both markets do the bare minimum – they have a transaction cost analysis (TCA) product for equities trading, and they produce quarterly reports to ensure they’re meeting ‘best execution’ standards. These new regulations have certainly been the catalyst for many firms to start looking deeper into their execution practices. In the EU, MiFID II places a specific obligation on firms to check the fairness of prices proposed to clients when executing orders, and in the US, best execution is one of the top priorities of the Securities and Exchange Commission (SEC).
What savings can be expected? It depends on the current level of sophistication and on the asset class.
For a firm trading equities that is already implementing some of the analysis basics, a conservative estimate would be a reduction in execution costs of around five to eight basis points (0.05–0.08%). For firms just starting this journey, savings of 10–20 basis points are not an unreasonable estimate.
Savings, measured in basis points, are usually somewhat smaller in fixed income. But trading values in this asset class are typically higher than in equities, so the actual value of the savings can be much greater.
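A back-of-the-envelope calculation makes the point (all figures here are assumed for illustration): one basis point is 0.01%, so the dollar value of a saving scales directly with traded notional.

```python
# Back-of-the-envelope value of a basis-point saving (all figures assumed).
# One basis point is 0.01%, so the dollar value scales with traded notional.
def bps_value(annual_notional: float, bps_saved: float) -> float:
    return annual_notional * bps_saved / 10_000

# An equities desk trading $10bn a year and saving 5 bps:
print(f"${bps_value(10e9, 5):,.0f}")   # $5,000,000
# A fixed income desk saving only 2 bps, but on $50bn of flow:
print(f"${bps_value(50e9, 2):,.0f}")   # $10,000,000
```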
Is there a best asset class to start with? As before, it depends on what you’re currently doing and where your trading activity is concentrated. Equities has the most data available and the best off-the-shelf offerings, but if you’re already looking at quarterly TCA data, you may have already made some easy changes.
Fixed income is an area where there is often ‘low-hanging fruit’. Increasing amounts of good quality data are available, which can be analysed to find some easy-to-implement savings. And there is generally less market efficiency in non-equities asset classes, so there are more opportunities.
Longer term, there are even bigger opportunities in asset classes such as ‘harder to price’ derivatives or thinly traded corporate bonds. While traditionally there has not been much data for corporate bonds, new offerings in the market have started to address this. Firms that collect and analyse all of their trading data in these asset classes, combined with newly available data products, can achieve significant cost savings.
What is a practical first step? Benchmark current trading costs against best practice.
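One widely used starting benchmark is implementation shortfall: the difference between the average fill price and the price at the moment the order was decided (the ‘arrival’ price), expressed in basis points. A minimal sketch:

```python
# Implementation shortfall in basis points: average fill price versus the
# "arrival" price when the order decision was made. Signed so that a
# positive number is always a cost, for buys and sells alike.
def implementation_shortfall_bps(side: str, arrival_price: float,
                                 fills: list[tuple[float, float]]) -> float:
    """fills is a list of (price, quantity) pairs."""
    total_qty = sum(q for _, q in fills)
    avg_price = sum(p * q for p, q in fills) / total_qty
    sign = 1 if side == "buy" else -1   # paying up hurts buys; selling down hurts sells
    return sign * (avg_price - arrival_price) / arrival_price * 10_000

# Buying 10,000 shares in two fills after arriving at $50.00:
cost = implementation_shortfall_bps("buy", 50.00, [(50.02, 6_000), (50.05, 4_000)])
print(f"{cost:.1f} bps")   # 6.4 bps
```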
Find a specialist in the field – someone with experience who understands not only how to analyse these costs quantitatively, but also how to build a close relationship with the trading desk, because there are a lot of nuances behind the numbers.
About the expert
David Lauer is managing partner at Mile 59, a finance and tech consultancy firm. He is a specialist in the technical design of trading systems and exchanges. He advises asset managers on best execution and the evaluation of algorithmic trading strategies; has testified before the Senate Banking Committee, the SEC and the CFTC in the US; is an independent director of the Aequitas NEO exchange in Canada; and is chairman and co-founder of the Healthy Markets Association – a non-profit coalition of asset managers working to promote data-driven reforms to market structure.
Let’s say you look at the costs of broker A versus broker B, and broker A is always more expensive. If the analysis is done in isolation, the conclusion might be that all trades should be routed to broker B. But it may be precisely because broker A is so good that the trading desk has been sending it the most difficult-to-fill orders, with broker B only being asked to execute the easier ones – so rerouting everything to broker B would make costs worse, not better.
A quant needs to be able to factor these complexities in when building a historical baseline of costs.
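A minimal illustration of that trap, with assumed numbers and column names: bucketing orders by a difficulty proxy, such as order size as a fraction of average daily volume (ADV), before comparing brokers avoids penalising the broker that gets the hardest flow.

```python
import pandas as pd

# Assumed toy data: per-order cost in bps and order size as a fraction of
# average daily volume (ADV). Column names are illustrative.
orders = pd.DataFrame({
    "broker":   ["A", "A", "A", "B", "B", "B"],
    "pct_adv":  [0.15, 0.20, 0.18, 0.010, 0.020, 0.015],
    "cost_bps": [12.0, 15.0, 13.0, 2.0, 2.5, 1.8],
})
orders["difficulty"] = pd.cut(orders["pct_adv"],
                              bins=[0.0, 0.05, 0.25],
                              labels=["easy", "hard"])

# Raw averages would damn broker A; within-bucket averages show that broker A
# only ever sees hard orders here, so the raw comparison says nothing.
print(orders.groupby("broker")["cost_bps"].mean())
print(orders.groupby(["difficulty", "broker"], observed=True)["cost_bps"].mean())
```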
Once that is done, it is best to collapse all of the information into a small set of meaningful charts and metrics, monitor those over time, and start making small behavioural changes to routing decisions that reduce costs.
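A sketch of what that ongoing monitoring might look like, assuming executions are logged with trade date, broker and cost columns (the file name and column names are illustrative):

```python
import pandas as pd

# Assumes executions are logged with (at least) trade_date, broker and
# cost_bps columns; the file name and columns are illustrative.
trades = pd.read_csv("executions.csv", parse_dates=["trade_date"])
daily = (trades.groupby(["trade_date", "broker"])["cost_bps"]
               .mean()
               .unstack("broker"))
# A 20-day rolling average per broker -- one chart, watched over time.
rolling = daily.rolling(window=20, min_periods=5).mean()
print(rolling.tail())
```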
What do asset managers typically end up changing? The way they manage broker relationships and the algorithms they use to execute trades.
We encourage firms to develop broker scorecards to capture the holistic relationship, where execution cost is a major element but not the only one. The scorecard will show which brokers are best at which types of trades, and also the value of other services that might be indirectly paid for through broker commissions, such as research. With MiFID II, this is less important in the EU now, but it is still important in other countries. It’s also a good way of identifying whether ‘value added’ broker services are really worth it, and of negotiating better terms.
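A toy version of such a scorecard is sketched below; the metrics, scores and weights are entirely illustrative assumptions, and each desk would set its own.

```python
import pandas as pd

# Illustrative inputs: execution cost plus desk-assigned 0-10 scores for
# research and service. Weights are assumptions each firm would set itself.
metrics = pd.DataFrame({
    "broker":       ["A", "B", "C"],
    "avg_cost_bps": [6.0, 8.5, 7.0],   # lower is better
    "research":     [9, 5, 7],
    "service":      [8, 6, 9],
}).set_index("broker")

cost = metrics["avg_cost_bps"]
scored = pd.DataFrame({
    # Normalise to 0-1, inverting cost so that higher is better everywhere.
    "cost":     1 - (cost - cost.min()) / (cost.max() - cost.min()),
    "research": metrics["research"] / 10,
    "service":  metrics["service"] / 10,
})
weights = {"cost": 0.6, "research": 0.25, "service": 0.15}
scored["total"] = sum(scored[k] * w for k, w in weights.items())
print(scored.sort_values("total", ascending=False))
```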
You shouldn’t discount the value of letting the broker know that execution costs are being looked at in detail. It’s not an exaggeration to say that performance improves as soon as the broker is asked for data. I don’t know how that magically happens, but it happens regularly enough.
When it comes to equities algorithms themselves, firms typically eliminate a few underperforming algorithms. But we don’t really find that there are good algorithms and bad algorithms; it’s more a case of some algorithms being good at some things and worse at others, or performing well under certain conditions and poorly under others.
The value comes from knowing which algorithms to use under what conditions, and when not to use algorithms at all.
For example, if there is a large trade to be executed, an asset manager needs to know whether it is best to pick up the phone to a broker for a ‘full service experience’, with a capital commitment to clear a block trade, or to split the trade up and take lots of smaller fills using an algorithm.
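A toy decision rule capturing that choice is sketched below; the thresholds are illustrative assumptions, not recommendations, and in practice they come out of the historical baseline analysis.

```python
# Toy decision rule for the block-versus-algorithm choice; the thresholds
# are illustrative assumptions, not recommendations.
def execution_route(order_qty: float, adv: float, spread_bps: float) -> str:
    pct_adv = order_qty / adv
    if pct_adv > 0.20:
        return "call a broker for a capital commitment / block trade"
    if pct_adv > 0.05 and spread_bps > 10:
        return "patient, schedule-based algorithm (e.g. VWAP over the day)"
    return "liquidity-seeking algorithm, many smaller fills"

# A 2m-share order against 8m shares of ADV is 25% of the day's volume:
print(execution_route(order_qty=2_000_000, adv=8_000_000, spread_bps=12))
```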
In other asset classes, there are other factors to focus on – such as counterparty liquidity or toxicity. As I said before, the first step is collecting the right kind of data. Once you have a statistically significant dataset, you can start to examine broker relationships across asset classes, and may decide that some don’t make sense for certain asset classes or trade types.
Is it mostly a ‘one-time’ saving, or do improvements continue over the long term? Once the first steps have been completed and initial savings have been captured, there is another whole level of detail that can be explored.
Some of the more in-depth statistical and machine learning studies we’ve done end up as substantial research projects. But they can save so many basis points that most firms find them well worth it. We’ve also built experimentation frameworks that allow trading desks to measure the impact of routing changes scientifically, which is far more precise and granular than the rough initial analysis.
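A minimal sketch of such an experiment: randomly assign orders to the current route or a candidate route, then test whether the realised cost difference is statistically significant. The data below is simulated purely for illustration.

```python
import random
from scipy.stats import ttest_ind

# Simulated costs in bps for two routing policies (data is illustrative).
random.seed(7)
control   = [random.gauss(8.0, 3.0) for _ in range(400)]  # current routing
treatment = [random.gauss(7.2, 3.0) for _ in range(400)]  # candidate routing

stat, p_value = ttest_ind(treatment, control, equal_var=False)  # Welch's t-test
saving = sum(control) / len(control) - sum(treatment) / len(treatment)
print(f"estimated saving: {saving:.2f} bps (p = {p_value:.3f})")
```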
Look out for our special report on algorithmic trading's impact on flash crashes in the upcoming Q3 2018 print edition of The Review.