Machine Learning in Investment Management

Albert Einstein was right if he actually said, “The only source of knowledge is experience.” The quote sums up the frustration of every ambitious young analyst. The lesson? Gray hair will prevail. There simply is no substitute for experience in the investment business. Right? Well, perhaps not anymore. A new technology might be about to give gray hair a pay cut. It’s one of the hottest threads of artificial intelligence (AI) research and it is known as Continual Learning (CL). CL enables machines to accumulate knowledge over time and then learn how to apply that knowledge to make better decisions in the future. It may prove to be the single most disruptive technology for investment management. But how does this new technology compare to the old? Is it mature enough to use in a live investment process? And who is behind the innovation?


Biased Recollections

Human analysts and traditional quants — yes, quants — suffer from many behavioral biases. Perhaps the most fundamental are those that affect our own knowledge: what knowledge to accumulate and how to use that knowledge to best guide future investment decisions. Judgment, in other words. As the English mentalist, illusionist, and writer Derren Brown observed, “We are, each of us, a product of stories we tell ourselves […] allowing us to arrange complicated reality into a tidy parcel”. Our own stories subjectively drive our investment decisions, whether the “tidy parcel” is based on the subjectivity of Eugene Fama and Kenneth French and results in another highly stylized factor model, or comprises an investment narrative distorted by the “groupthink” of an investment committee meeting.

There has to be a more objective approach to building persistent knowledge that can be applied when the past rhymes with the present. AI might now offer a solution.


Stock Picker: Human or Machine?

Recent AI research challenges the primacy of the most important pool of knowledge in the investment business: human experience. This AI research area, CL, objectively accumulates investment knowledge, perhaps better than humans do. Persistent synthetic knowledge could thus outlast corporate succession or provide a body of objective experience for all, thereby disrupting the businesses of traditional passive and active investment managers alike. We will get into the details of CL, but first it’s helpful to demonstrate how a well-conceived AI investment strategy should work in practice.

Machine Learning in Asset Management

All fundamental investment methodologies should approach each investment decision from multiple perspectives, adapt and evolve as realities change over time, and provide understandable explanations for each decision. Both traditional (i.e., human-analyst-driven) and AI-driven fundamental investment strategies should meet these criteria. But perhaps the chief advantages AI ought to have over traditional fundamental methods are objectivity and consistency. The graphic below illustrates where AI-driven investment strategies should exceed their traditional fundamental investing counterparts (in blue). CL now extends these advantages to the objective accumulation and use of knowledge itself.

Which type of fundamental analysis: artificial intelligence (AI) or human?


1990s tech plods on.

While experienced investment managers may have subjective memories of past events, the very best of them overcome these through discipline and the learned application of that knowledge — that is, good judgment. However, quant strategies, almost all of which rely on equity factor models, tend to suffer the worst of all worlds. These quant models have no explicit memory, and those who deploy them frequently dismiss exogenous causation, owing to their own confirmation biases. As soon as a market event leaves the sliding window used to train one of these models, it is forgotten forever. If we have nothing to learn from past crises and missed opportunities, we should stick with the traditional factor quant solutions of the 1990s. But this hardly makes sense in a world of vastly more and better data, where AI offers a potential means to analyze that data for objective inferences.
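The sliding-window amnesia described above is easy to make concrete. Below is a minimal, hypothetical sketch (simulated data, illustrative parameters — not any particular firm's model) of a factor model refit on a fixed trailing window: observations that scroll out of the window have exactly zero influence on the fit, so a crisis outside the window is forgotten entirely.

```python
import numpy as np

# Hypothetical sketch: a factor model refit on a fixed sliding window.
# Any event older than `window` days has zero weight in the fit,
# so a crisis that scrolls out of the window is "forgotten" forever.

rng = np.random.default_rng(0)
n_days, n_factors, window = 1000, 3, 250  # window ~ one trading year

X = rng.normal(size=(n_days, n_factors))              # factor exposures
beta_true = np.array([0.5, -0.2, 0.1])                # true loadings
y = X @ beta_true + rng.normal(scale=0.1, size=n_days)  # asset returns

def rolling_fit(t: int) -> np.ndarray:
    """OLS factor loadings estimated from only the trailing `window` days."""
    Xw, yw = X[t - window:t], y[t - window:t]
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta

# The model fit at day 1000 is blind to anything before day 750.
beta_now = rolling_fit(n_days)
```

The point of the sketch is the indexing: `X[t - window:t]` is the model's entire world, no matter what happened before it.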


2019: lifelong machine learning in the markets

So how does CL work? At December’s Neural Information Processing Systems Conference (NeurIPS), top AI researchers presented cutting-edge innovations, and CL’s application to finance was an important part of this. In the past, scholars have generally explored theoretical methods of building synthetic knowledge. This year, our team[1] from City, University of London, presented a system that empowers machines to exercise synthetic judgment by acquiring knowledge and then applying it to guide investment decisions. Called continual learning augmentation (CLA), it is a new methodology in the field and the first application of CL to financial markets. A senior member of the team, and leading AI researcher, Professor Artur d’Avila Garcez commented, “CL has been partially achieved in more sterile environments but we believe this is the first time it has been successfully applied to the noisy, non-stationary real world of financial time-series”. The system manages synthetic knowledge by learning which events are worth remembering (or ignoring) and which are less useful and best forgotten. At the same time, this knowledge is selectively recalled to enhance stock selection decisions in the present. The architecture of this system is simplified in the illustration below.


Learning to remember: artificial knowledge


The memories are human readable (not a black box) and tend to correspond to important financial events. When the last decade or more of financial history was replayed, many important memories formed in the CLA system. The most interesting were the lead-up to the subprime crisis, the “quant quake,” the post-quantitative easing (QE) era, and the (first) euro-zone crisis. Models that appeared to best identify good (and bad) investments during these periods were stored as memories that could be recalled when current events seemed to echo past ones. For example, the approach recalled the QE-driven recovery of 2009 and identified this knowledge as the most pertinent to apply to stock-selection decisions during another stimulus-driven stock market rally, this time in China in 2017.
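The store-and-recall behavior described above can be sketched in a few lines. This is an illustrative toy, not the authors' CLA implementation: the memory names, three-dimensional "market state" fingerprints, and cosine-similarity matching are all assumptions made for the example. The idea it demonstrates is that each stored memory carries a fingerprint of the conditions under which its model worked, and the memory whose fingerprint best matches today's conditions is recalled.

```python
import numpy as np

# Hypothetical sketch of memory recall (names and fingerprints are
# invented for illustration): each past "memory" is keyed by a market
# state vector; the closest match to the current state is recalled.

memories = {
    "subprime_leadup": np.array([0.9, 0.2, 0.1]),
    "quant_quake":     np.array([0.3, 0.9, 0.2]),
    "post_QE_rally":   np.array([0.1, 0.2, 0.9]),
}

def recall(current_state: np.ndarray) -> str:
    """Return the stored memory whose state fingerprint is most similar
    to the current market state (cosine similarity)."""
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(memories, key=lambda k: cos(memories[k], current_state))

# A stimulus-driven rally echoing the post-QE recovery recalls that memory.
best = recall(np.array([0.15, 0.25, 0.85]))  # → "post_QE_rally"
```

In the real system the recalled item would be a stored stock-selection model rather than a label, but the matching step is the heart of "remembering when the past rhymes with the present."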


Continual Learning: A Short and Intense History

Where did these ideas originate? Knowledge accumulation is vital to general intelligence and is a new and major focus of advanced AI research, with the ultimate aim of enabling lifelong learning. CL differs from deep learning and other forms of AI, which tend to focus on isolated snapshots of information, say, identifying faces on Facebook. CL can be directed at a continuous stream of information from which it extracts knowledge over time. Typically in machine learning, once time steps on and a new model is learned, the old model is forgotten. Deep learning it may be, but intelligent it is not.

According to CL pioneer Danny Silver, research into CL commenced in the 1980s out of a desire to construct knowledge-accumulating machines. By the late 1990s, “gated” approaches, such as Sepp Hochreiter and Jurgen Schmidhuber’s long short-term memory (LSTM), were introduced to learn sequences — words in a passage of text, for example. Following the renaissance in neural computing in the latter years of the last decade, the development of the impractical but sophisticated differentiable neural computer (DNC) was a big step forward. Engineered by Alex Graves and his team at DeepMind, DNC overcame the “catastrophic forgetting” that undermined simpler methods. Yet DNC had its drawbacks: It mostly dealt with pet problems, learning to navigate the London Underground, for example, as well as more complex but stylized machine-learning tasks. DNC was too unwieldy to be easily applied, so researchers sought to refine it or looked for simpler solutions, some with neurological imperatives.

For example, one way to synthetically form long-term memories is through elastic weight consolidation (EWC). EWC attempts to replicate the hypothesized plasticity of synaptic connections in the mammalian brain. An analogy is how a child learns to ride a bike: wobbly at first, but as skills develop with practice, neural pathways are slowly stamped into the brain. Once learned, this knowledge is difficult to forget and can be augmented if the child graduates to mountain biking, say, or transferred if they opt for a unicycle. Simulating this effect with technology has proven challenging. Fortunately, applications in finance offer a far simpler (and more parsimonious) approach. Which brings us to the current state of the science. Today, CL is moving so quickly that research must be checked on a weekly basis to keep abreast of developments.
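The EWC idea mentioned above has a compact mathematical core, which the following minimal sketch illustrates (the numbers and function names are invented for the example, not taken from the original EWC paper's experiments): when learning a new task B, a quadratic penalty anchors each parameter to its old task-A value, weighted by how important that parameter was to task A (its Fisher information). Important parameters are held firm — the "stamped-in" neural pathways — while unimportant ones remain free to adapt.

```python
import numpy as np

# Minimal sketch of the elastic weight consolidation (EWC) penalty:
#   L(theta) = L_B(theta) + (lam / 2) * sum_i F_i * (theta_i - theta_A_i)^2
# F_i (Fisher information) measures how important parameter i was to
# the old task A; large F_i means the parameter is strongly anchored.

def ewc_loss(loss_b: float, theta: np.ndarray, theta_a: np.ndarray,
             fisher: np.ndarray, lam: float = 1.0) -> float:
    """Task-B loss plus the EWC consolidation penalty."""
    penalty = 0.5 * lam * np.sum(fisher * (theta - theta_a) ** 2)
    return float(loss_b + penalty)

theta_a = np.array([1.0, -2.0])   # parameters learned on task A
fisher  = np.array([10.0, 0.01])  # importance of each parameter to task A
theta   = np.array([1.5, 0.0])    # candidate parameters while learning task B

# Moving the important first parameter is costly; the second is nearly free.
total = ewc_loss(loss_b=0.2, theta=theta, theta_a=theta_a, fisher=fisher)
```

The asymmetry in `fisher` is the whole trick: it lets one set of weights serve two tasks without the new task bulldozing the old one.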


AI comes of age.

Building investment knowledge over time used to be an exclusively human capability. No longer. While we are still a long, long way from a general artificial intelligence singularity, AI as a driver of fundamental investing has come of age. Few industries are riper for disruption than equities investment management in 2019. Crowded 1990s-era factor quant models are still in demand, while the recent explosion in high-quality data, coupled with the technology to make sense of it, has opened up new vistas. Things are changing fast, and the next generation of tech-fluent professionals coming into finance is poised to displace the gray hair and the outdated. Einstein may have been correct when he (supposedly) equated knowledge with experience. But did he anticipate his comments would apply to the machines of the future? The future is now.


[1] Daniel Philps, Tillman Weyde, Artur d’Avila Garcez, Roy Batchelor
