Quantitative Investing

Explaining variance

We’re returning to our portfolio discussion after detours into the put-write index and non-linear correlations. We’ll be investigating alternative methods to analyze, quantify, and mitigate risk, including risk-constrained optimization, a topic that looms large in factor research. The main idea is that there are certain risks one wants to bear and others one doesn’t. Do you want to be compensated for exposure to common risk factors, or do you want to find and exploit unknown factors?
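
To make “risks one wants to bear” a bit more concrete, a common starting point is to split portfolio variance into a piece explained by common factors and an idiosyncratic remainder. Below is a minimal sketch with made-up loadings and covariances, purely for illustration rather than a result from the series:

```python
import numpy as np

# Hypothetical setup: 4 assets, 2 common risk factors
weights = np.array([0.25, 0.25, 0.25, 0.25])      # portfolio weights
betas = np.array([[1.0, 0.2],                     # factor loadings (4 assets x 2 factors)
                  [0.8, -0.1],
                  [1.2, 0.5],
                  [0.9, 0.3]])
factor_cov = np.array([[0.04, 0.01],              # factor covariance matrix
                       [0.01, 0.02]])
idio_var = np.array([0.02, 0.03, 0.025, 0.015])   # idiosyncratic variances

# Asset covariance = systematic piece + idiosyncratic piece
cov = betas @ factor_cov @ betas.T + np.diag(idio_var)

total_var = weights @ cov @ weights
factor_var = weights @ betas @ factor_cov @ betas.T @ weights
print(f"Variance explained by common factors: {factor_var / total_var:.1%}")
```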

Round about the kernel

In our last post, we took our analysis of rolling average pairwise correlations among the constituents of the XLI ETF one step further by applying kernel regressions to the data and comparing those results with linear regressions. Using a cross-validation approach to analyze prediction error and the potential for overfitting, we found that the kernel regressions’ average error increased between the training and validation sets, while the linear models’ error decreased. We reasoned that the decrease was due to the idiosyncrasies of the time series data: the models were trained on volatile markets but validated on calmer ones.
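
For the curious, the mechanics of that comparison look roughly like the sketch below. It pits statsmodels’ Nadaraya-Watson-style KernelReg against an OLS fit on synthetic data; the actual post ran the comparison on the rolling-correlation series, so treat this as an illustration of the method rather than the result.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from statsmodels.nonparametric.kernel_regression import KernelReg

np.random.seed(0)

# Synthetic stand-in for the correlation/return data: a noisy non-linear relationship
x = np.random.uniform(0, 1, 400)
y = np.sin(3 * x) + np.random.normal(0, 0.3, 400)

# Time-ordered split: first 70% train, last 30% validation
cut = int(len(x) * 0.7)
x_tr, y_tr, x_va, y_va = x[:cut], y[:cut], x[cut:], y[cut:]

rmse = lambda actual, pred: np.sqrt(np.mean((actual - pred) ** 2))

# Linear regression
lin = LinearRegression().fit(x_tr.reshape(-1, 1), y_tr)
lin_tr = rmse(y_tr, lin.predict(x_tr.reshape(-1, 1)))
lin_va = rmse(y_va, lin.predict(x_va.reshape(-1, 1)))

# Kernel (Nadaraya-Watson) regression
kr = KernelReg(endog=y_tr, exog=x_tr, var_type='c', reg_type='lc')
kr_tr = rmse(y_tr, kr.fit(x_tr)[0])
kr_va = rmse(y_va, kr.fit(x_va)[0])

print(f"Linear RMSE: train {lin_tr:.3f} vs. validation {lin_va:.3f}")
print(f"Kernel RMSE: train {kr_tr:.3f} vs. validation {kr_va:.3f}")
```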

Kernel of error

In our last post, we looked at a rolling average of pairwise correlations among the constituents of XLI, an ETF that tracks the industrials sector of the S&P 500. We found that spikes in the three-month average coincided with declines in the underlying index. There was some graphical evidence of a correlation between the three-month average and forward three-month returns. However, a linear model didn’t do a great job of explaining the relationship, given its relatively high error rate and unstable variance.
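
In code, that linear model is just an OLS regression of forward three-month returns on the trailing three-month average correlation. Here is a bare-bones version on stand-in data (the real inputs were the XLI series), showing where the fit and error statistics come from:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Stand-in data; the actual post used the XLI constituents' rolling average
# pairwise correlation and the ETF's forward three-month return.
rng = np.random.default_rng(7)
dates = pd.bdate_range("2015-01-01", periods=500)
corr_3m = pd.Series(np.clip(rng.normal(0.4, 0.15, 500), 0, 1), index=dates, name="avg_corr")
fwd_ret_3m = pd.Series(0.05 * corr_3m + rng.normal(0, 0.06, 500), index=dates, name="fwd_ret")

model = sm.OLS(fwd_ret_3m, sm.add_constant(corr_3m)).fit()
print(model.params)                                        # intercept and slope
print(f"R-squared: {model.rsquared:.3f}")                  # explanatory power
print(f"Residual std. error: {np.sqrt(model.scale):.4f}")  # the "error rate"
```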

Corr-correlation

We recently read two blog posts from Robot Wealth and FOSS Trading on calculating rolling pairwise correlations for the constituents of an S&P 500 sector index. Both posts were very interesting and offered informative ways to solve the problem using different packages in R: tidyverse and xts. We’ll use those posts as a launchpad to explore the rolling correlation concept with respect to forecasting returns. But we’ll be using Python to do a lot of the heavy lifting.
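
As a preview of the Python side, pandas produces rolling correlation matrices directly with rolling().corr(); averaging the off-diagonal entries then gives the mean pairwise correlation per date. A minimal sketch, assuming prices is a DataFrame of constituent closing prices (a hypothetical input name):

```python
import numpy as np
import pandas as pd

def mean_pairwise_corr(prices: pd.DataFrame, window: int = 60) -> pd.Series:
    """Rolling mean pairwise correlation of the columns of a price DataFrame."""
    returns = prices.pct_change().dropna()
    # rolling().corr() returns one correlation matrix per date,
    # stacked in a (date, ticker) MultiIndex
    roll_corr = returns.rolling(window).corr()

    def off_diag_mean(corr_mat: pd.DataFrame) -> float:
        vals = corr_mat.values
        mask = ~np.eye(len(vals), dtype=bool)   # drop the 1.0 diagonal
        return np.nanmean(vals[mask])

    return roll_corr.groupby(level=0).apply(off_diag_mean)
```

Applied to the XLI constituents with, say, a 60-day window, this yields one average correlation per date, which can then be lined up against forward returns.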

Writing conundrums

We’re taking a break from our portfolio series and million sample simulations to return to a subject that we haven’t discussed of late despite its featured spot in this blog’s name—options. In this post, we’ll look at the buy-write (BXM) and put-write (PUT) indices on the S&P 500, as conceived, calculated, and published by the CBOE. Note: we’ve discussed the buy-write strategy in the past here and here. In those posts, we analyzed the performance of the buy-write relative to its underlying index, the S&P 500.

GARCHery

In our last post, we discussed using the historical average return as one method for setting capital market expectations prior to constructing a satisfactory portfolio. We glossed over setting expectations for future volatility, mainly because it is such a thorny issue. However, we read an excellent tutorial on GARCH models that inspired us at least to take a stab at it. The tutorial hails from the work of Marcelo S. Perlin.
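
For readers who would rather follow along in Python than R, a basic GARCH(1,1) fit and volatility forecast takes only a few lines with the arch package. This is a generic sketch on simulated returns, not the tutorial’s code:

```python
import numpy as np
import pandas as pd
from arch import arch_model

# Stand-in returns; in practice this would be daily percentage returns of an index
rng = np.random.default_rng(3)
rets = pd.Series(rng.normal(0, 1, 2000))

# Fit a GARCH(1,1) with a constant mean and normal errors
am = arch_model(rets, mean="Constant", vol="GARCH", p=1, q=1, dist="normal")
res = am.fit(disp="off")
print(res.params)

# Forecast the next 21 days of variance and annualize the average as a vol estimate
fc = res.forecast(horizon=21)
avg_daily_var = fc.variance.iloc[-1].mean()
print(f"Annualized volatility forecast: {np.sqrt(avg_daily_var * 252):.1f}%")
```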

Skew who?

In our last post on the SKEW index, we looked at how good the index was at pricing two standard deviation (2SD) down moves. The answer: not very. But we conjectured that this poor performance may stem from the index being more accurate at pricing larger moves, which occur in the S&P 500 more frequently than a normal distribution would suggest. In fact, we showed that, on a monthly basis, two standard deviation moves in the S&P 500 (the index underlying the SKEW) occur with approximately the same frequency as would be expected under a normal distribution.
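
That frequency check is straightforward to reproduce: standardize monthly returns, count how often they land more than two standard deviations below the mean, and compare with the normal benchmark. A sketch, assuming monthly_rets is a Series of monthly S&P 500 returns (a hypothetical input name):

```python
import pandas as pd
from scipy import stats

def two_sd_down_frequency(monthly_rets: pd.Series) -> None:
    """Compare the observed share of 2SD down months with the normal benchmark."""
    z = (monthly_rets - monthly_rets.mean()) / monthly_rets.std()
    observed = (z < -2).mean()            # share of months more than 2SD below the mean
    normal_bench = stats.norm.cdf(-2)     # roughly 2.3% under a normal distribution
    print(f"Observed 2SD down months: {observed:.1%}")
    print(f"Normal-distribution benchmark: {normal_bench:.1%}")
```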

Null hypothesis

In our previous post we ran two investing strategies based on Apple’s last twelve months price-to-earnings multiple (LTM P/E). One strategy bought Apple’s stock when its multiple dropped below 10x and sold when it rose above 20x. The other bought the stock when the 22-day moving average of the multiple crossed above the current multiple and sold when the moving average crossed below. In both cases, annualized returns weren’t much different from the benchmark buy-and-hold, but volatility was considerably lower, resulting in significantly better risk-adjusted returns.
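
In schematic form, the first rule looks like the sketch below, assuming pe and prices are Series of Apple’s LTM P/E and closing price (hypothetical input names); the second rule would swap the fixed thresholds for a crossover of the 22-day moving average of the multiple.

```python
import numpy as np
import pandas as pd

def pe_threshold_strategy(pe: pd.Series, prices: pd.Series,
                          buy_below: float = 10.0, sell_above: float = 20.0) -> pd.Series:
    """Long when the LTM P/E drops below buy_below, flat after it rises above sell_above."""
    signal = pd.Series(np.nan, index=pe.index)
    signal[pe < buy_below] = 1.0
    signal[pe > sell_above] = 0.0
    position = signal.ffill().fillna(0.0)            # hold the last signal between triggers
    # Lag the position one day so trades occur after the signal is observed
    strat_rets = prices.pct_change() * position.shift(1)
    return strat_rets.dropna()
```

Risk-adjusted performance then comes from comparing the strategy’s annualized return over its annualized volatility with the same ratio for buy-and-hold.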

Valuation hypothesis

In our last post on valuation, we looked at whether Apple’s historical multiples could help predict future returns. The notion was that since historical price multiples (e.g., price-to-earnings) reflect the market’s assessment of the company’s value, a low multiple means Apple’s stock is cheap, so buying it then should produce attractive returns. However, even though the relationship between multiples and returns was significant over different time horizons, its explanatory power was pretty low.
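
Checking explanatory power boils down to regressing forward returns on the multiple over several horizons and inspecting the R-squared. A schematic version, again assuming pe and prices are aligned Series (hypothetical names):

```python
import pandas as pd
import statsmodels.api as sm

def multiple_vs_returns(pe: pd.Series, prices: pd.Series,
                        horizons=(63, 126, 252)) -> pd.Series:
    """R-squared of forward returns regressed on the trailing multiple, by horizon (trading days)."""
    r2 = {}
    for h in horizons:
        fwd_ret = prices.pct_change(h).shift(-h)     # forward h-day return
        df = pd.concat([pe.rename("pe"), fwd_ret.rename("fwd")], axis=1).dropna()
        model = sm.OLS(df["fwd"], sm.add_constant(df["pe"])).fit()
        r2[h] = model.rsquared
    return pd.Series(r2, name="r_squared")
```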

Price is what you pay

Stock analysts are usually separated into two philosophical camps: fundamental or technical. The fundamental analyst uses financial statements, economic forecasts, industry knowledge, and valuation to guide his or her investment process. The technical analyst uses prices, charts, and a whole host of “indicators”. In reality, few stock analysts are purely fundamental or technical; most blend the two toolkits based on temperament, experience, and past success. Nonetheless, at the end of the day, the fundamental analyst remains most concerned with valuation, while the technical analyst focuses on price action.