Quantitative Investing

Hidden miners

We conclude our discussion of market regime detection by examining Hidden Markov Models (HMMs). Recall that this series was inspired by a post from PyQuant News that highlighted a longer article from the London Stock Exchange Group (LSEG). Those who took the CFA exams have probably forgotten encountering HMMs in the quant section. Whatever the case, the intuition behind them is clever: HMMs use observable data to infer non-observable data, or hidden states.
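
As a rough illustration of that intuition, here is a minimal sketch, assuming the hmmlearn package and a one-column array of returns; the two-state choice and the simulated data are placeholders, not the specification used in the post.

import numpy as np
from hmmlearn.hmm import GaussianHMM

# Placeholder data: daily returns reshaped to (n_samples, 1) as hmmlearn expects
rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.01, size=1000).reshape(-1, 1)

# Fit a two-state Gaussian HMM: the hidden states stand in for market regimes
model = GaussianHMM(n_components=2, covariance_type="full", n_iter=100, random_state=42)
model.fit(returns)

# Most likely hidden state (regime) for each observation
states = model.predict(returns)
print(model.means_.ravel())   # mean return implied by each regime
print(np.bincount(states))    # how many observations fall in each regime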

Gaussian gold

Our previous post used hierarchical clustering to identify market regimes in the gold miners ETF, GDX. That effort was inspired by a post from PyQuant News that highlighted a longer article from the London Stock Exchange Group (LSEG). In this post, we'll continue identifying market regimes and using those predictions as signals for a simple trading strategy. As noted, the LSEG article showed three different machine learning methods to separate regimes: clustering, Gaussian Mixture Models (GMMs), and Hidden Markov Models (HMMs).
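
For a flavor of the second method, the following sketch (assuming scikit-learn and a one-column array standing in for GDX returns; the two-component choice is our assumption, not the post's final specification) fits a GMM and turns the regime labels into a crude long/flat signal.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
returns = rng.normal(0.0003, 0.015, size=1000).reshape(-1, 1)  # placeholder daily returns

# Fit a two-regime GMM and label each day with its most likely regime
gmm = GaussianMixture(n_components=2, random_state=0).fit(returns)
labels = gmm.predict(returns)

# Crude signal: stay long only in the regime with the higher mean return
bull = np.argmax(gmm.means_.ravel())
signal = (labels == bull).astype(int)
strategy = signal[:-1] * returns[1:, 0]     # trade the next day's return on today's label
print(strategy.sum(), returns[1:, 0].sum()) # strategy vs. buy-and-hold cumulative return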

Golden clusters

We recently saw a post from PyQuant News that piqued our interest, compelling us to dust off the old blog files and get back in the saddle. The post highlights a longer article from the London Stock Exchange Group (LSEG) on how to use different machine learning models to identify and forecast market regimes. That article uses Refinitiv, a market data service like Bloomberg, which we don't have access to.

One-N against the world!

We’re taking a short break from neural networks to return to portfolio optimization. Our last posts in the portfolio series discussed risk-constrained optimization. Before that, we examined satisficing vs. mean-variance optimization (MVO). In our last post on that topic, we simulated 1,000 60-month (5-year) return series using the 1987-1991 period for our four assets: stocks, bonds, commodities (gold), and real estate. We then iterated through the samples, using weights derived from the naive portfolio, the satisficing algorithm, and the maximum Sharpe ratio portfolio on the previous sample to build portfolios on the next sample.
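
A stripped-down version of that loop might look like the sketch below. It uses a hypothetical four-asset return distribution rather than the actual 1987-1991 data, and compares only the naive (equal-weight) and max Sharpe portfolios; the satisficing step is omitted.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
mu = np.array([0.008, 0.004, 0.003, 0.006])    # placeholder monthly means: stocks, bonds, gold, real estate
cov = np.diag([0.04, 0.01, 0.05, 0.03]) ** 2   # placeholder monthly covariance (diagonal for simplicity)

def max_sharpe_weights(rets):
    # Weights maximizing the in-sample Sharpe ratio, long-only, fully invested
    m, c = rets.mean(axis=0), np.cov(rets, rowvar=False)
    neg_sharpe = lambda w: -(w @ m) / np.sqrt(w @ c @ w)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
    return minimize(neg_sharpe, np.full(4, 0.25), bounds=[(0, 1)] * 4, constraints=cons).x

naive, sharpe = [], []
prev = rng.multivariate_normal(mu, cov, 60)             # "previous" 60-month sample
for _ in range(1000):
    nxt = rng.multivariate_normal(mu, cov, 60)          # next 60-month sample
    naive.append((nxt @ np.full(4, 0.25)).mean())       # equal weights need no estimation
    sharpe.append((nxt @ max_sharpe_weights(prev)).mean())  # weights fit on the prior sample
    prev = nxt

print(np.mean(naive), np.mean(sharpe))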

Not so soft softmax

Our last post examined the correspondence between a logistic regression and a simple neural network using a sigmoid activation function. The downside of such models is that they only produce binary outcomes. We argued (not very forcefully) that if investing is about assessing the probability of achieving an attractive risk-adjusted return, then it makes sense to model investment decisions as probability functions. Moreover, most practitioners would probably prefer to know not just whether next month’s return is likely to be positive, but also how confident they should be in that prediction.
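
The softmax generalizes the sigmoid to more than two outcomes. A toy example (numpy only; the three classes and the logits are made up for illustration) shows how raw scores become a probability distribution over, say, down/flat/up for next month's return.

import numpy as np

def softmax(z):
    # Subtract the max for numerical stability before exponentiating
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([0.5, 0.1, 1.2])   # hypothetical scores for down / flat / up
probs = softmax(logits)
print(probs, probs.sum())            # a distribution summing to 1, with the most weight on "up"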

Nothing but (neural) net

We start a new series on neural networks and deep learning. Neural networks and their use in finance are not new, but they still account for only a fraction of the research output. A recent Google Scholar search found that only 6% of articles on stock price forecasting discussed neural networks. Artificial neural networks, as they were first called, have been around since the 1940s, but development was slow until at least the 1990s, when computing power increased rapidly.

Risk-constrained optimization

Our last post parsed portfolio optimization outputs and examined some of the nuances around the efficient frontier. We noted that when you start building portfolios with a large number of assets, brute-force simulation can miss the optimal weighting scheme for a given return or risk profile. While optimization finds those weights (as it should!), the output can lead to infinitesimal contributions from many assets, which is impractical or silly. Placing a minimum on the weights helps a bit.
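
A minimal version of that constraint (scipy only, a placeholder covariance matrix, and a 5% floor chosen purely for illustration) is to hand the optimizer bounds that keep every weight at or above the floor while minimizing portfolio variance.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
cov = np.cov(rng.normal(size=(120, 4)), rowvar=False)   # placeholder covariance for four assets

port_var = lambda w: w @ cov @ w                         # objective: portfolio variance
cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)   # fully invested
bounds = [(0.05, 1.0)] * 4                               # minimum 5% weight per asset

res = minimize(port_var, np.full(4, 0.25), bounds=bounds, constraints=cons)
print(res.x.round(3))                                    # no asset falls below the floor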

More factors, more variance...explained

Risk factor models are at the core of quantitative investing. We’ve been exploring their application within our portfolio series to see if we could create such a model to quantify risk better than a simplistic volatility measure does. That is, given our four portfolios (Satisfactory, Naive, Max Sharpe, and Max Return), can we identify a set of factors that explains each portfolio’s variance relatively well? In our first investigation, we used the classic Fama-French (F-F) three factor model plus momentum.
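
In code, that first investigation amounts to regressing a portfolio's excess returns on the factor series. Here is a sketch using statsmodels; the factor DataFrame, the column names (Mkt-RF, SMB, HML, Mom), and the simulated data are stand-ins for the real series.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
factors = pd.DataFrame(rng.normal(0, 0.03, (120, 4)), columns=["Mkt-RF", "SMB", "HML", "Mom"])
port = factors.values @ np.array([0.9, 0.2, 0.1, 0.05]) + rng.normal(0, 0.01, 120)  # placeholder portfolio excess returns

X = sm.add_constant(factors)        # add an alpha term
fit = sm.OLS(port, X).fit()
print(fit.params.round(3))          # factor loadings (betas) and alpha
print(round(fit.rsquared, 3))       # share of variance explained by the four factors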

Macro variance

In our last post, we looked at using a risk factor model to identify potential sources of variance for our 30,000 portfolio simulations. We introduced the process with the ultimate aim of constructing a model that could help quantify, and thus mitigate, sources of risk beyond a simplistic volatility measure. In this post, we’ll look at building a factor model based on macroeconomic variables to see if such a model does what it says on the tin.
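
The test is essentially the same regression with different right-hand-side variables. A short sketch (scikit-learn, with made-up stand-ins for macro series such as changes in industrial production, inflation, and the term spread) reports the R-squared, i.e. how much of the portfolio's variance the macro factors explain.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
macro = rng.normal(0, 1, (120, 3))                           # placeholder macro factor series
port = macro @ [0.4, -0.2, 0.1] + rng.normal(0, 0.5, 120)    # placeholder portfolio returns

model = LinearRegression().fit(macro, port)
print(model.coef_.round(2))                 # sensitivities to each macro factor
print(round(model.score(macro, port), 3))   # R^2: variance explained by the macro model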

Explaining variance

We’re returning to our portfolio discussion after detours into topics on the put-write index and non-linear correlations. We’ll be investigating alternative methods to analyze, quantify, and mitigate risk, including risk-constrained optimization, a topic that looms large in factor research. The main idea is that there are certain risks one wants to bear and others one doesn’t. Do you want to be compensated for exposure to common risk factors, or do you want to find and exploit unknown factors?