Summer has a way of getting away from you. That is as true for blog writing as it is for life. Nonetheless, before summer ends we wanted to dust off our series on regime prediction and close the loop on the remaining techniques we had yet to investigate. That is, in our last post we introduced a relatively simple rolling method to retrain the model on more near-term (and perhaps more relevant) data.
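To make that rolling idea concrete, here is a minimal sketch, assuming `returns` is a pandas Series of daily GDX returns; the 252-day window and three-state Gaussian mixture are illustrative choices rather than the exact settings from the post.

```python
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

def rolling_regimes(returns: pd.Series, window: int = 252, n_states: int = 3) -> pd.Series:
    """Refit the model on a trailing window and label only the next observation."""
    labels = {}
    for end in range(window, len(returns)):
        train = returns.iloc[end - window:end].values.reshape(-1, 1)
        gmm = GaussianMixture(n_components=n_states, random_state=42).fit(train)
        nxt = returns.iloc[[end]].values.reshape(-1, 1)
        labels[returns.index[end]] = int(gmm.predict(nxt)[0])
    return pd.Series(labels)
```

Note that component labels can switch between refits, so a production version would need to map regimes consistently across windows (for example, by sorting components on their mean return).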
Our last post finished up examining the three different methods used to predict market regimes in the Gold Miners ETF, GDX – namely, clustering, Gaussian Mixture Models (GMMs), and Hidden Markov Models (HMMs). We found GMMs performed best as a proof of concept. But there was still a lot of work to do to go from backtest to viable trading strategy.
In the next few posts, we’ll look at some of the ways we can improve our backtests.
We conclude our discussion of market regime detection by examining Hidden Markov Models (HMMs). Recall this series was inspired by a post from PyQuant News that highlighted a longer article from the London Stock Exchange Group (LSEG).
Those who took the CFA exams have probably forgotten encountering HMMs in the quant section. Whatever the case, the intuition behind them is clever. HMMs use observable data to infer non-observable data, or hidden states.
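As a rough illustration of that intuition, here is a minimal sketch using the hmmlearn package, assuming `returns` is a pandas Series of daily GDX returns; three states and the other settings are our assumptions, not the article's.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

X = returns.values.reshape(-1, 1)                     # the observable data
hmm = GaussianHMM(n_components=3, covariance_type="full",
                  n_iter=200, random_state=42).fit(X)
hidden_states = hmm.predict(X)                        # inferred hidden regimes
print(hmm.means_.ravel())                             # average return in each regime
```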
Our previous post used hierarchical clustering to identify market regimes in the gold miners ETF, GDX. This was inspired by a post from PyQuant News that highlighted a longer article from the London Stock Exchange Group (LSEG). In this post, we’ll continue looking at identifying market regimes and using those predictions as signals for a simple trading strategy.
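For readers who didn't see that post, here is a minimal sketch of the clustering-plus-signal idea, assuming `prices` is a pandas Series of GDX closes; the rolling features, three clusters, and the toy signal rule are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import AgglomerativeClustering

rets = np.log(prices).diff()
feats = pd.DataFrame({"ret": rets.rolling(21).mean(),
                      "vol": rets.rolling(21).std()}).dropna()

# Hierarchical (agglomerative) clustering on the rolling features
regimes = pd.Series(AgglomerativeClustering(n_clusters=3).fit_predict(feats.values),
                    index=feats.index, name="regime")

# Toy signal: long only in the regime with the highest average next-day return
fwd = rets.shift(-1).reindex(regimes.index)
best = fwd.groupby(regimes).mean().idxmax()
signal = (regimes == best).astype(int)
```

This naive version picks the "best" regime in-sample, so a real backtest would need to make that choice on a walk-forward basis.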
As noted, the LSEG article showed three different machine learning methods to segregate regimes: clustering, Gaussian Mixture Models (GMMs), and Hidden Markov Models (HMMs).
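For reference, the GMM version of the same idea can be sketched in a few lines, again assuming `returns` is a pandas Series of daily GDX returns and that three components is a reasonable (but arbitrary) choice.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

X = returns.values.reshape(-1, 1)
gmm = GaussianMixture(n_components=3, random_state=42).fit(X)
regime = gmm.predict(X)          # hard regime labels
probs = gmm.predict_proba(X)     # soft probabilities, handy for scaling a signal
```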
We recently saw a post from PyQuant News that piqued our interest, compelling us to dust off the old blog files and get back into the saddle. The post highlights a longer article from the London Stock Exchange Group (LSEG) on how to use different machine learning models to identify and forecast market regimes. That article uses Refinitiv, a market data service like Bloomberg, which we don’t have access to.
We’re taking a short break from neural networks to return to portfolio optimization. Our last posts in the portfolio series discussed risk-constrained optimization. Before that we examined satisficing vs. mean-variance optimization (MVO). In our last post on that topic, we simulated 1,000 60-month (5-year) return series using the 1987-1991 period for our four assets: stocks, bonds, commodities (gold), and real estate. We then iterated through the samples, using weights derived on the previous sample from the naive portfolio, the satisficing algorithm, and the maximum Sharpe ratio portfolio to form portfolios on the next sample.
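A minimal sketch of that simulation loop follows, assuming `rets` is a monthly return DataFrame for the four assets over 1987-1991; the plain bootstrap and the max Sharpe solver are illustrative, and the satisficing step is omitted here for brevity.

```python
import numpy as np
import pandas as pd
from scipy.optimize import minimize

rng = np.random.default_rng(42)

def simulate(rets: pd.DataFrame, n_sims: int = 1000, horizon: int = 60):
    # Draw months with replacement to build 60-month (5-year) samples
    return [rets.iloc[rng.integers(0, len(rets), horizon)] for _ in range(n_sims)]

def max_sharpe_weights(sample: pd.DataFrame) -> np.ndarray:
    mu, cov = sample.mean().values, sample.cov().values
    n = len(mu)
    res = minimize(lambda w: -(w @ mu) / np.sqrt(w @ cov @ w),
                   np.full(n, 1 / n), bounds=[(0, 1)] * n,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
    return res.x

samples = simulate(rets)
naive = np.full(rets.shape[1], 1 / rets.shape[1])
# Weights fit on the previous sample are applied to the next one
max_sharpe_cum = [(samples[i + 1] @ max_sharpe_weights(samples[i])).add(1).prod() - 1
                  for i in range(len(samples) - 1)]
naive_cum = [(s @ naive).add(1).prod() - 1 for s in samples[1:]]
```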
For fundamental equity investors, the financial statement is the launchpad for the search for value. True, quants use financial statements too. But they spend less time on what the numbers mean than on what they are. To produce a financial statement that adequately captures the economic (not GAAP or IFRS) position of a company is no mean feat and draws upon accounting, domain knowledge, and artistry. Data scientists and machine learning engineers are acutely aware of the chore of data processing and cleaning.
It’s been over a month since our last post and for that we must apologize. We endeavor to be more prolific, but sometimes work and life get in the way. On the work front, let’s just say we won’t have to spend as much time selling encyclopedias door-to-door, which should free up more time to dedicate to writing value-added blog posts. On the life front, we had the chance to hike several canyons in southern Utah, USA.
Our last post examined the correspondence between a logistic regression and a simple neural network using a sigmoid activation function. The downside with such models is that they only produce binary outcomes. We argued (not very forcefully) that if investing is about assessing the probability of achieving an attractive risk-adjusted return, then it makes sense to model investment decisions as probability functions. Moreover, most practitioners would probably prefer to know whether next month’s return is likely to be positive and how confident they should be in that prediction.
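To see the correspondence in code, here is a minimal sketch on toy data (the data-generating process, learning rate, and epoch count are all our own illustrative choices): a single sigmoid neuron trained by gradient descent on the log loss should land on roughly the same parameters as a logistic regression.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (X @ np.array([1.5, -2.0]) + 0.5 + rng.normal(scale=1.0, size=1000) > 0).astype(float)

# Logistic regression benchmark (large C ~ effectively no regularization)
logit = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)

# One "neuron": linear combination passed through a sigmoid, trained on log loss
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= lr * X.T @ (p - y) / len(y)
    b -= lr * (p - y).mean()

print(logit.coef_, logit.intercept_)   # should be close to...
print(w, b)                            # ...the neuron's parameters
```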
In our last post, we introduced neural networks and formulated some of the questions we want to explore over this series. We explained the underlying architecture, the basics of the algorithm, and showed how a simple neural network could approximate the results and parameters of a linear regression. In this post, we’ll show how a neural network can also approximate a logistic regression and extend our toy example.
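For completeness, the linear regression version from that earlier post can be sketched the same way: a single linear neuron (no activation) trained on mean squared error converges to roughly the OLS parameters. The toy data and training settings below are again our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = 0.3 + X @ np.array([0.8, -0.5]) + rng.normal(scale=0.1, size=500)

# OLS benchmark via least squares
beta_ols = np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), y, rcond=None)[0]

# One linear neuron trained by gradient descent on mean squared error
w, b, lr = np.zeros(2), 0.0, 0.05
for _ in range(3000):
    err = X @ w + b - y
    w -= lr * X.T @ err / len(y)
    b -= lr * err.mean()

print(beta_ols)   # intercept and slopes from OLS
print(b, w)       # nearly identical parameters from the neuron
```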
What’s the motivation behind showing the link with logistic regression?