Our previous post used hierarchical clustering to identify market regimes in the gold miners ETF, GDX. This was inspired by a post from PyQuant News that highlighted a longer article from the London Stock Exchange Group (LSEG). In this post, we’ll continue looking at identifying market regimes and using those predictions as signals for a simple trading strategy.
As noted, the LSEG article showed three different machine learning methods to segregate regimes: clustering, Gaussian Mixture Models (GMMs), and Hidden Markov Models (HMMs).
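To make the three approaches concrete, here is a rough sketch — not the LSEG article’s code or the code from our previous post — that labels regimes in a GDX-like return series with all three methods and turns the cluster labels into a naive long/flat signal. The price series below is a synthetic placeholder standing in for actual GDX data, and scikit-learn plus hmmlearn are assumed to be installed.

```python
# A rough sketch, not the article's or the previous post's code.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering
from sklearn.mixture import GaussianMixture
from hmmlearn.hmm import GaussianHMM

# Placeholder price series standing in for actual GDX data
rng = np.random.default_rng(42)
gdx_prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0002, 0.015, 2500))))

returns = gdx_prices.pct_change().dropna()
features = pd.DataFrame({
    "ret": returns,
    "vol": returns.rolling(21).std(),   # one-month rolling volatility
}).dropna()
X = StandardScaler().fit_transform(features)

# Three ways to label regimes
hc_labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
gmm_labels = GaussianMixture(n_components=2, random_state=42).fit_predict(X)
hmm_labels = GaussianHMM(n_components=2, covariance_type="full",
                         n_iter=200, random_state=42).fit(X).predict(X)

# Naive signal from the clustering labels: long only in the regime with the
# higher average return, lagged one day to avoid acting on same-day information.
# (Fitting on the full sample still peeks ahead; a real backtest would refit
# on a rolling or expanding window.)
features["regime"] = hc_labels
calm = features.groupby("regime")["ret"].mean().idxmax()
signal = (features["regime"] == calm).astype(int).shift(1).fillna(0)
strategy_returns = signal * features["ret"]
```

The GMM treats regimes as mixture components with their own means and covariances; the HMM adds a transition matrix that links regimes through time, which is why it tends to produce more persistent labels.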
We recently saw a post from PyQuant News that piqued our interest, compelling us to dust off the old blog files and get back into the saddle. The post highlights a longer article from the London Stock Exchange Group (LSEG) on how to use different machine learning models to identify and forecast market regimes. That article uses Refinitiv, a market data service like Bloomberg, which we don’t have access to.
We’re taking a short break from neural networks to return to portfolio optimization. Our last posts in the portfolio series discussed risk-constrained optimization. Before that we examined satisficing vs. mean-variance optimization (MVO). In our last post on that topic, we simulated 1,000 60-month (5-year) return series using the 1987-1991 period for our four assets: stocks, bonds, commodities (gold), and real estate. We then iterated through the samples, using weights derived on the previous sample from the naive portfolio, the satisficing algorithm1, and the maximum Sharpe ratio portfolio to form portfolios on the next sample.
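For readers who want the shape of that exercise in code, the sketch below mimics the simulation loop with placeholder inputs; it is not the original code. The satisficing algorithm is specific to this blog, so only the naive and maximum Sharpe ratio portfolios appear, and the placeholder means and covariance stand in for the 1987-1991 estimates of the four assets.

```python
# A minimal sketch of the simulation loop, not the original post's code.
import numpy as np
import pandas as pd
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n_assets, n_sims, n_months = 4, 1000, 60
mu = np.array([0.008, 0.005, 0.004, 0.006])                 # placeholder monthly means
vols = np.array([0.045, 0.015, 0.050, 0.035])               # placeholder monthly vols
cov = np.outer(vols, vols) * 0.2 + np.diag(vols**2) * 0.8   # placeholder covariance (corr ~ 0.2)

samples = [rng.multivariate_normal(mu, cov, n_months) for _ in range(n_sims)]

def max_sharpe_weights(rets):
    """Long-only weights that maximize the in-sample Sharpe ratio."""
    m, c = rets.mean(axis=0), np.cov(rets, rowvar=False)
    neg_sharpe = lambda w: -(w @ m) / np.sqrt(w @ c @ w)
    cons = {"type": "eq", "fun": lambda w: w.sum() - 1}
    x0 = np.full(len(m), 1 / len(m))
    return minimize(neg_sharpe, x0, bounds=[(0, 1)] * len(m), constraints=cons).x

naive = np.full(n_assets, 1 / n_assets)
results = []
for prev, nxt in zip(samples[:-1], samples[1:]):
    w = max_sharpe_weights(prev)                  # weights estimated on the prior sample
    results.append({"naive": (nxt @ naive).mean() * 12,
                    "max_sharpe": (nxt @ w).mean() * 12})

print(pd.DataFrame(results).mean())               # average annualized out-of-sample returns
```

The key design point is the lag: weights are always estimated on one sample and evaluated on the next, so every portfolio is scored out of sample.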
For fundamental equity investors, the financial statement is the launchpad for the search for value. True, quants use financial statements too. But they spend less time on what the numbers mean than on what they are. To produce a financial statement that adequately captures the economic (not GAAP or IFRS) position of a company is no mean feat and draws upon accounting, domain knowledge, and artistry. Data scientists and machine learning engineers are acutely aware of the chore of data processing and cleaning.
We’re returning to our portfolio discussion after detours into topics on the put-write index and non-linear correlations. We’ll be investigating alternative methods to analyze, quantify, and mitigate risk, including risk-constrained optimization, a topic that looms large in factor research.
The main idea is that there are certain risks one wants to bear and others one doesn’t. Do you want to be compensated for exposure to common risk factors, or do you want to find and exploit unknown ones?
In our last post, we took our analysis of rolling average pairwise correlations on the constituents of the XLI ETF one step further by applying kernel regressions to the data and comparing those results with linear regressions. Using a cross-validation approach to analyze prediction error and overfitting potential, we found that the kernel regressions saw average error increase between training and validation sets, while the linear models saw it decrease. We reasoned that the decrease was due to the idiosyncrasies of the time series data: the models were trained on volatile markets but validated on less choppy ones.
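As an illustration of the machinery rather than the post’s actual code, the sketch below compares training and validation error for a kernel-based model and OLS under a time-series cross-validation split. An RBF kernel ridge regression stands in for the kernel regression, and the data are random placeholders, so the numbers will not reproduce the post’s findings.

```python
# A hedged sketch of the train-vs-validation comparison, not the original code.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import TimeSeriesSplit, cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1))                         # placeholder for the rolling correlation feature
y = 0.3 * X[:, 0] + rng.normal(scale=0.5, size=500)   # placeholder forward returns

cv = TimeSeriesSplit(n_splits=5)                      # respects time ordering: train on past, validate on future
for name, model in [("kernel", KernelRidge(kernel="rbf", alpha=1.0)),
                    ("linear", LinearRegression())]:
    scores = cross_validate(model, X, y, cv=cv,
                            scoring="neg_root_mean_squared_error",
                            return_train_score=True)
    print(name,
          "train RMSE:", round(-scores["train_score"].mean(), 4),
          "validation RMSE:", round(-scores["test_score"].mean(), 4))
```

Comparing the two RMSE columns is what flags overfitting: a model whose validation error is materially worse than its training error is fitting noise in the training window.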
In our last post, we ran simulations on our 1,000 randomly generated return scenarios to compare the average and risk-adjusted return for satisfactory, naive, and mean-variance optimized (MVO) maximum return and maximum Sharpe ratio portfolios.1 We found that you can shoot for high returns or high risk-adjusted returns, but rarely both. Assuming no major change in the underlying average returns and risk, choosing the efficient high return or high risk-adjusted return portfolio leads to similar performance a majority of the time in out-of-sample simulations.
Over the past few weeks, we’ve examined the three major methods used to set return expectations as part of the portfolio allocation process. Those methods were historical averages, discounted cash flow models, and risk premia models. Today, we’ll bring all these models together to compare and contrast their accuracy.
Before we make these comparisons, we want to remind readers that we’re now including a Python version of the code we use to produce our analyses and graphs.
In our last post, we applied machine learning to the Capital Asset Pricing Model (CAPM) to try to predict future returns for the S&P 500. This analysis was part of our overall project to analyze the various methods to set return expectations when seeking to build a satisfactory portfolio. Others include the historical averages and discounted cash flow models we discussed in prior posts. Our provisional analysis suggested that the CAPM wasn’t a great forecasting model.
Over the last few posts, we’ve discussed methods to set return expectations to construct a satisfactory portfolio. These methods are historical averages, discounted cash flow models, and risk premia. Our last post focused on the third method: risk premia. Using the Capital Asset Pricing Model (CAPM), one can derive the required return for a particular asset based on the market price of risk, the asset’s risk, and the asset’s correlation with the market.
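In symbols, this is just the standard CAPM decomposition (nothing specific to our post):

$$
E[R_i] - R_f = \beta_i \left(E[R_m] - R_f\right), \qquad
\beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)} = \rho_{i,m}\,\frac{\sigma_i}{\sigma_m},
$$

so the required excess return factors into the three pieces named above:

$$
E[R_i] - R_f = \underbrace{\frac{E[R_m] - R_f}{\sigma_m}}_{\text{market price of risk}} \times \underbrace{\sigma_i}_{\text{asset's risk}} \times \underbrace{\rho_{i,m}}_{\text{correlation with the market}}.
$$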