Trading is a competitive business. You need great people and great technology, of course, but also trading strategies that make money. Where do those strategies come from? In this post we’ll discuss how the interplay of data, math and technology informs how we develop and run strategies.

Machine learning (ML) at Jane Street begins, unsurprisingly, with data. We collect and store around 2.3TB of market data every day, which over the years adds up to petabytes. Hidden in all that data are the relationships and statistical regularities that inform the models inside our strategies. But it’s not just about awesome models: ML work in a production environment like Jane Street’s involves many interconnected pieces:

  • Get the data: Building the infrastructure to gather, store, index and retrieve this amount of data efficiently and with microsecond-level accuracy is itself an interesting job. We have a whole team dedicated to this important work. If we fail to log data at any point then that data is gone, never to return.

  • Clean the data: The raw data we receive is frequently missing, corrupted, or misaligned, among other defects. Before we can apply any modeling techniques, we need to sanitize it. This part of the process is crucially important and unavoidable, if admittedly tedious. 1

  • Explore the data: It’s hard to know what techniques to throw at a problem before we understand what the data looks like, or even which data to use in the first place. Spending the time to visualize and understand the structure of the problem helps us pick the right modeling tools for the job. Plus, pretty plots are catnip to traders and researchers!

  • Leverage domain expertise: The more you know about the problem you are trying to solve, the better your ability to build good models. This comes up in many ways throughout the process: the choice of objective function, the approximations that are reasonable to make, and the algorithm used to solve the resulting problem. Image models often have translation invariance, for example, while financial models often have low signal-to-noise ratios and strong game-theoretic priors. Expertise like this is hard-won, the product of many previous successful and unsuccessful efforts. 2

  • Build a model: This is the part of ML that gets everyone excited. However, we’ve found that standard techniques almost never work out of the box. The more you understand about what makes an algorithm work or fail, the more likely you are to find effective ways to modify it for the problem at hand, or to come up with something entirely new!

  • Validate the model: There is no shortage of ways to fool yourself when building ML systems, especially in a competitive world like trading. Some of the most exciting parts of the process come when a new ML system shows, with high probability, that it’s better than what we had in the past. That’s how we know we’re making real progress.

  • Deploy the model: Deploying a new model involves a lot of interesting work, the kind that makes the difference between a cool idea and one that actually makes money. It’s important to run the model efficiently and reliably, of course, but also to ensure that its mistakes aren’t catastrophic. What’s more, once you start trading, the market will adapt to your strategy, making your model less effective over time. More confusingly, if you’re not careful you may enter a bad feedback loop in which the next model you build treats your own current trades as evidence that “the market agrees that I should trade here”! Issues like these make applying ML to trading a very challenging problem.
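
To make the “clean the data” step above concrete, here is a minimal sketch, in Python, of the kind of sanitization it describes. The tick layout, field names, market hours, and the 10% jump threshold are all invented for illustration; a real pipeline is far more involved.

```python
from datetime import datetime, time

# Illustrative market hours; real sessions vary by venue and asset class.
MARKET_OPEN, MARKET_CLOSE = time(9, 30), time(16, 0)

def clean_ticks(ticks, symbol, max_jump=0.10):
    """Drop ticks that are missing a price, belong to the wrong symbol,
    arrive outside market hours, or jump implausibly far from the last
    good price (a crude stand-in for real outlier detection)."""
    cleaned, last_price = [], None
    for ts, sym, price in ticks:
        if price is None or sym != symbol:
            continue  # missing data, or data for the wrong stock
        if not (MARKET_OPEN <= ts.time() <= MARKET_CLOSE):
            continue  # valid-looking data while the market is closed
        if last_price is not None and abs(price / last_price - 1) > max_jump:
            continue  # implausible jump: likely corrupted
        cleaned.append((ts, sym, price))
        last_price = price
    return cleaned
```

Each filter corresponds to a failure mode we actually see in raw feeds (see footnote 1), though the specific thresholds here are made up.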

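One simple way to show “with high probability” that a new system beats an old one, as in the validation step above, is to bootstrap the difference in held-out errors. Everything below is synthetic, and the procedure is a sketch of one standard approach, not a description of our actual validation process.

```python
import random

random.seed(0)
# Synthetic per-sample errors for an old model and a slightly better new one.
old_errors = [abs(random.gauss(0, 1.0)) for _ in range(500)]
new_errors = [0.9 * e + abs(random.gauss(0, 0.05)) for e in old_errors]

# Positive differences mean the new model did better on that sample.
diffs = [o - n for o, n in zip(old_errors, new_errors)]

def prob_improvement(diffs, n_resamples=2000):
    """Fraction of bootstrap resamples whose mean improvement is positive."""
    wins = 0
    for _ in range(n_resamples):
        sample = random.choices(diffs, k=len(diffs))
        if sum(sample) > 0:
            wins += 1
    return wins / n_resamples

print(f"P(new model is better) ≈ {prob_improvement(diffs):.3f}")
```

If that probability isn’t convincingly close to one, the “improvement” may just be noise, which in a low signal-to-noise domain is the default assumption.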
Over the years we’ve used a variety of ML techniques: Gaussian processes, random forests, adaptive regression splines, and genetic algorithms, among others. Lately our use of deep learning ideas has been growing. These ideas (very-high-parameter models, backprop-based stochastic gradient descent, etc.) have taken the world by storm in the last five years, and rightly so given the impressive results achieved across a wide variety of domains. Particularly interesting is that, with a few corner-case exceptions, the world doesn’t yet understand why these techniques generalize as well as they do. That open question makes deep learning exciting to think about, and our work in this area has led to some strategies that we currently use in production. Deep learning is a large, fast-moving, and occasionally confusing corner of ML, and we’re optimistic about what we’ll be able to learn and invent there.
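
As a toy illustration of the backprop-based stochastic gradient descent mentioned above, here is the SGD update rule applied to the smallest possible model: fitting y = w·x + b one sample at a time. The data, learning rate, and everything else here are invented purely for illustration.

```python
import random

random.seed(1)
# Synthetic data from a known linear relationship, plus a little noise.
true_w, true_b = 2.0, -0.5
xs = [random.uniform(-1, 1) for _ in range(1000)]
data = [(x, true_w * x + true_b + random.gauss(0, 0.01)) for x in xs]

w, b, lr = 0.0, 0.0, 0.1
for x, y in data:
    pred = w * x + b
    grad = 2 * (pred - y)  # derivative of squared error w.r.t. the prediction
    w -= lr * grad * x     # chain rule: d(pred)/dw = x
    b -= lr * grad         # chain rule: d(pred)/db = 1

print(f"learned w ≈ {w:.2f}, b ≈ {b:.2f}")
```

A deep network replaces these two parameters with millions and computes the gradients by backpropagation, but the per-sample update is the same idea.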

Nevertheless, the world of ML is much larger and richer than deep learning. If there’s anything Kaggle competitions have taught the world, it’s that the best solutions combine a variety of approaches in often ad-hoc and messy ways. The financial world doesn’t present clean problems: the human world is complex and ever-changing. That’s why Jane Street is committed to seeking out, inventing, developing, and using the best possible tools in our trading. We believe that if we are not continually pushing on the boundaries of what is technically and intellectually possible, we will very quickly stop being competitive. The excitement is in chasing new ideas and putting them into action in this competitive environment. This is a big part of what makes Jane Street such an interesting place to work.

Over the next few months on this blog, we’ll be examining interesting problems and examples of ML as it pertains to trading. So stay tuned!


  1. Among the things we’ve seen: missing data, seemingly valid market data appearing when markets are closed, data for the wrong stock, frozen/delayed/intermittent data, etc.
  2. An example: modeling the difference in price between an ETF and its basket as a Gaussian would be a mistake, since there are well-defined arbitrage bounds on this difference.