The Long View: Three Fallacies of Risk Parity

In a space where innovation is kept at bay, and orthodoxy and simplicity are appreciated by investors and encouraged by regulators, risk parity as a concept has made remarkable inroads. Neither modern portfolio theory, nor asset-liability management, nor even liability-driven investing has had a comparable pace of adoption, and understandably so.
To begin with, funds following risk parity in their asset allocation have done reasonably well in the past. Although market regulators stipulate a disclaimer that past performance is no indication of future results, it applies only to private investors; sophisticated institutional investors may choose to believe that a past track record deserves the benefit of the doubt.
Of course, there is also a strong fundamental underpinning to risk parity: we cannot comfortably predict asset returns (there is plenty of evidence of that), and since different asset classes have different degrees of volatility, why not allocate funds so that each contributes the same amount of risk? Although, in principle, the resulting allocation is not optimal in the modern portfolio theory (MPT) sense, this is not an impediment, since none of the dominant pension fund asset allocation methodologies is optimal in the MPT sense.
The fact that risk parity manages to avoid return assumptions is indeed a very strong argument in favor of the methodology. Pension funds are all too familiar with the more or less arbitrary return assumptions used in conventional ALM processes. So the combination of a sound fundamental basis and a good track record ought to convince and silence any skeptic, were it not for the three theoretical fallacies at the core of risk parity, which lead to one practical and very serious hurdle.
Fallacy 1: Volatility is not identical to risk. It is sensible to allocate funds so that each asset class contributes the same amount of risk. In practice, however, volatility is used as the single measure of asset class risk, with an unintended but significant consequence: it makes asset allocation pro-cyclical. In such a setting, a core asset class such as equities will have a higher allocation when volatility is low and a lower allocation when volatility is high. Of course, high volatility is observed in conjunction with sharply lower equity markets, while low volatility corresponds to higher equity markets. This means that a fund following the risk parity approach is forced to buy equities when they are expensive and sell them when they are cheap.
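A minimal sketch of this mechanism, using hypothetical volatility figures and the simplified two-asset, zero-correlation case in which equal risk contribution reduces to inverse-volatility weighting:

```python
# Illustrative only: equal risk contribution for two uncorrelated assets
# reduces to weights proportional to 1 / volatility.

def inverse_vol_weights(vols):
    inv = [1.0 / v for v in vols]
    total = sum(inv)
    return [x / total for x in inv]

calm   = [0.15, 0.05]  # assumed equity and bond volatility in a rising market
crisis = [0.35, 0.07]  # assumed volatilities after a sharp equity sell-off

print(inverse_vol_weights(calm))    # equity weight ~0.25
print(inverse_vol_weights(crisis))  # equity weight ~0.17, cut after the fall
```

The equity weight is cut by roughly a third precisely when equities have already fallen, and is restored once volatility (and the market) recovers: selling low and buying high.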
A counterargument here is that risk parity funds rebalance asset exposures very quickly, thus staying “ahead of the volatility spikes”. But this raises the question of whether the process is significantly different from momentum trading.
Fallacy 2: Leverage changes the distribution of asset returns. We may be, albeit reluctantly, persuaded to accept volatility as a single measure of risk. However, the next step in the process is to apply leverage to the fixed income portfolio in order to achieve an equity-like contribution to volatility. The very act of applying leverage changes the shape of the return distribution and introduces fat tails, which renders the already shaky use of volatility even more questionable. To put it simply, when we introduce leverage to the fixed income portfolio and achieve risk parity, in reality we have achieved volatility parity, not risk (of loss) parity. In fact, the more we try to achieve volatility parity, the farther removed we are from risk parity.
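A rough simulation of the volatility-parity versus loss-parity point, on synthetic data with assumed distributions (near-Gaussian equities, fat-tailed bonds); the distributions, the leverage factor and the neglect of financing costs and margin calls are all simplifying assumptions:

```python
# Synthetic illustration: matching volatilities does not match tail losses.
import numpy as np

rng = np.random.default_rng(0)
n = 250 * 30  # roughly 30 years of daily returns

equity = rng.normal(0.0003, 0.010, n)           # assumed near-Gaussian equity returns
bonds  = 0.0002 + 0.002 * rng.standard_t(3, n)  # assumed fat-tailed bond returns

lev = equity.std() / bonds.std()                # leverage needed for "volatility parity"
lev_bonds = lev * bonds                         # ignores financing costs and margin calls

for name, r in [("equity", equity), ("levered bonds", lev_bonds)]:
    print(f"{name:14s} vol={r.std():.4f}  "
          f"worst 0.1% day={np.percentile(r, 0.1):.4f}  worst day={r.min():.4f}")
```

The two legs have equal volatility by construction, yet the levered fixed income leg shows materially deeper extreme losses: volatility parity without loss parity.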
Fallacy 3: The relationship between asset returns and inflation is complex and unstable. Certain interpretations of risk parity imply an asset allocation that achieves equal contribution of macroeconomic risks, i.e., growth and inflation. Asset classes are bundled together into groups that behave similarly in various growth and inflation environments. The unit of risk measurement is still volatility. However, the main added assumption is that the various asset classes have clear and unambiguous return patterns in various macroeconomic environments.
There is very little evidence of such clear patterns in practice. Although, theoretically, the returns of any asset should hedge against inflation, in many countries the relationship between inflation and equity prices has been negative[1], while in the UK, for instance, the relationship was positive. To complicate things even further, Hoesli, Lizieri & MacGregor[2], armed with more complex econometric models, have shown that asset returns are, in principle, negatively correlated with short-term unexpected inflation but positively correlated with long-term expected inflation. Overlaying this insight on the short-term trading portfolio of a typical risk parity fund, one may conclude that the portfolio may be appropriate for the long term but suffer in the short term, or the other way round. Clearly, there is too little of a conclusive relationship here to build a robust process on.
In the end, a lot of the theory above is just that: a competition between various incomplete, abstract models that are inevitably too far removed from reality. Investors should not really care about these inconsistencies so long as the resulting portfolio appears to be sensible. Unfortunately, this last hurdle proves to be the most severe for risk parity advocates: armed with decades of experience in financial markets, investors are reluctant to commit significant funds to fixed income assets. This is perhaps the simplest, yet the strongest, argument against risk parity: building leveraged positions in assets that have done so well for so long and are backed by near-insolvent entities simply does not seem sensible enough. Never mind the theory.


[1] Lintner, J. (1975). “Inflation and Security Returns,” Journal of Finance 30(2), 259-280; Firth, M. (1979). “The Relationship Between Stock Market Returns and Rates of Inflation,” Journal of Finance 34(3), 743-749.
[2] Hoesli, M., Lizieri, C. & MacGregor, B. “The Inflation Hedging Characteristics of U.S. and U.K. Investments: A Multi-Factor Error Correction Approach,” Swiss Finance Institute Research Paper Series No. 06-4.

Zeroing in on the Structural Break

Despite what historical volatility levels suggest, there is a significant structural break in equity markets that is not captured by standard financial mathematics.
Emotions and intuition should not be part of the professional capital market environment. This much has been pointed out and agreed upon by market participants and academics alike. And yet Mother Nature gave us that mysterious shortcut-generating machine for a purpose. Is that purpose not to protect us from our seemingly rational reasoning?
Analysis based on the standard measures is guilty of the most common mistake in finance: using a mathematical expression without defining what it is meant to measure. The true amplitude of index change in a unit of time over the past decade, which volatility apparently fails to capture, is the highest ever recorded, at least in the 140 years of available data.
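The report's exact measure is not reproduced here; purely as an illustration of the distinction, a peak-to-trough amplitude per window can diverge sharply from standard-deviation volatility even on synthetic data (the paths and parameters below are hypothetical):

```python
# Two synthetic price paths with the same daily volatility but very different
# peak-to-trough amplitude over the same one-year window.
import numpy as np

rng = np.random.default_rng(1)
n = 250  # one year of daily observations

trending    = np.cumsum(rng.normal(0.004, 0.01, n))  # persistent drift
oscillating = np.cumsum(rng.normal(0.0,   0.01, n))  # same daily noise, no drift

for name, path in [("trending", trending), ("oscillating", oscillating)]:
    daily_vol = np.diff(path).std()       # what standard volatility captures
    amplitude = path.max() - path.min()   # peak-to-trough range of the path
    print(f"{name:12s} daily vol = {daily_vol:.4f}   amplitude = {amplitude:.3f}")
```

Whether this matches the report's definition is an open question; the contrast is only meant to show that a dispersion statistic and an amplitude statistic answer different questions.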
Download the report

Introducing Asset Allocation Resilient to Systemic Risks

This white paper briefly examines current standard asset allocation practice and introduces an asset allocation methodology that incorporates Graham and Network Risks. Resilient asset allocation balances the ambition to generate returns with an awareness of systemic risks. The results are compared with the most common fixed-weight asset mixes as well as the asset allocation implied by the risk parity approach.
Over the past several years, LINKS has introduced and used network- and value-based risk management frameworks in order to gauge the systemic risks inherent in institutional portfolios. Given the multiple layers of legacy asset allocation processes in place, implementation of the LINKS risk frameworks has always been an add-on, ad-hoc exercise for institutional investors. However, as the methodology matures, there is growing interest in a systematic asset allocation methodology based on the LINKS risk frameworks.
Download the document

Data- or Analysis-oriented Risk Management?

Modern risk management practice has embraced data with unreserved enthusiasm. Risk management software providers compete on the number of entries their databases contain. Only recently, one of the prominent firms in the space came out with a white paper on data-oriented systems, vaguely alluding to being “more accurate and precise” in its risk measurements. Incidentally, that firm holds the record for the number of entries: over three million instruments. Since that number was not sufficient to help its clients survive the 2008 crisis, it would presumably have to be augmented by the three-million-and-first. Or would it?
While computers are very good at holding data in relational databases, they are rubbish at dealing with information. In fact, humans are hugely superior to computers when it comes to holding and manipulating information. Unfortunately, doing so is neither simple nor easy, and there is no guarantee of control of any sort. Nevertheless, in the post-variance risk management world, it is precisely this ability to work with information rather than data that helps us draw conclusions and make decisions about the wild randomness in the markets. So how would we go about designing a risk process based on information?

Running a risk management function without identifying the dominant sources of global risk is akin to running a military campaign without having a clue about who the adversary is. Global imbalances and excessive valuations, or “bubbles”, are a good place to start. This can be done on an ad-hoc basis or methodically; the latter approach, using the Graham Risk measure, is the one adopted by LINKS Analytics. A typical example of a key global risk source is infrastructure spending in China.
Once the biggest sources of global risk are identified, we can map the transmission pathways to other geographies, sectors and asset classes. The process is as simple as following the economic relationships between parties. A weakness in the Chinese infrastructure sector, for instance, would result in lower demand for materials such as cold-rolled steel, cement and energy; municipal revenues from land sales would fall, putting pressure on central government finances; and so on. Quantifying these relationships is the key to unlocking the potential risks to various asset classes driven by network effects.
Estimating the impact of risk sources on asset returns is not difficult, since it is the direction of the impact that is most informative, not the magnitude. Finally, although there are many imbalances in the global economy at any given point, only a few of them pose an immediate threat, so the level of threat should be assessed dynamically. The ultimate goal of the risk process is to be prepared to implement dynamic hedging strategies or trigger de-risking in parts of the portfolio, should the level of threat be deemed sufficiently different from the past.
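As a rough sketch of the mapping step, the nodes, edges and sensitivities below are entirely hypothetical; the point is only that once the economic relationships are written down, the direction of a shock can be pushed mechanically through the network:

```python
# Hypothetical transmission network: each edge carries an assumed sensitivity
# of the downstream node to the upstream one (direction matters more than size).
network = {
    "china_infrastructure": [("steel_demand", 0.8), ("land_sale_revenue", 0.6)],
    "steel_demand":         [("mining_equities", 0.7), ("energy_prices", 0.4)],
    "land_sale_revenue":    [("china_sovereign_credit", 0.5)],
}

def propagate(source, shock, network, impacts=None):
    """Push a shock through the network, accumulating directional impacts."""
    impacts = {} if impacts is None else impacts
    for downstream, sensitivity in network.get(source, []):
        effect = shock * sensitivity
        impacts[downstream] = impacts.get(downstream, 0.0) + effect
        propagate(downstream, effect, network, impacts)
    return impacts

# A sharp slowdown (-1.0) in Chinese infrastructure spending:
for node, impact in propagate("china_infrastructure", -1.0, network).items():
    print(f"{node:24s} {'negative' if impact < 0 else 'positive'} ({impact:+.2f})")
```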
While the risk process described in this article is relatively simple and not too data-intensive, it requires a risk function that is focused on the external market environment rather than on reporting, and on information and analysis rather than on data. A risk analyst in this setting would require a qualification in business analysis and economics rather than in SQL and quantitative methods.
 

Three Reasons for Failing Stress Tests

Sometimes the absurdity of our analytical tools is so obvious that it cannot be spotted without stepping back and looking at the whole picture. Stress tests as an analytical tool are a case in point. On the surface, they seem to complement the standard risk management tools and introduce a degree of “real world” control over the outcome. In practice, due to three major problems in their design, stress tests end up feeling like one of those primary-school math exercises gone wrong: after a long set of calculations you conclude that x = x!

Problem 1: How big a stress?

If we knew beforehand how large the risks were, why would we need stress testing in the first place? It is always possible to come up with a magnitude of stress that implies an unacceptable level of loss; the only limit is our imagination. And since we cannot accurately estimate the likelihood of these scenarios, there is no way of telling which magnitude of stress is the appropriate one. There is an old game: two players each think of a number, and the first tells his number to the second. The second then reveals his number, and if it is larger, he wins. Guess who wins every turn?

Problem 2: What to stress?

Interestingly, this issue pops up in the process of sorting out the Lehman mess. This is a passage from the examiner’s report in Lehman’s Chapter 11 proceedings: “One of Lehman’s major risk controls was stress testing. …One stress test posited maximum potential losses of $9.4 billion, including $7.4 billion in losses on the previously excluded real estate and private equity positions, and only $2 billion on the previously included trading positions. …But these stress tests were conducted long after these assets had been acquired, and they were never shared with Lehman’s senior management”. Since the scenarios and magnitudes are arbitrary, it is hard to imagine that the analysis did not occasionally produce catastrophic outcomes. In retrospect, it is always possible to find the scenario that was catastrophic and was not shared with management.
Often, existing bubbles in the global economy can be spotted by the number of IPOs, the size of the bonus pool and the opening of dedicated desks by the banks. In many instances, however, this is not the case. While we all know the well-publicized imbalances in the economy (China, commodities, trade, the Fed, etc.), which asset class will be hurt first (or at all) is hard to know. Every crisis in the past hundred years has had unique triggers, mechanisms and consequences. They all had one thing in common: people lost money.

Problem 3: What is the impact?

Last but not least, the most severe problem of all: what is the impact of a stress? The combination of network effects and a complex set of non-linear relationships and tipping points renders the whole exercise superficial. Correlations and linear relationships at best describe a world of marginal changes. A stress test, on the other hand, assumes an unusually large change, which by definition triggers unusual reactions.
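A toy example of that last point, with purely hypothetical numbers: a linear, correlation-style sensitivity calibrated on ordinary market moves can badly misstate the impact of a large stress once an assumed tipping point is crossed.

```python
# Hypothetical numbers throughout: a linear model agrees with the "true"
# response for marginal shocks and falls apart for a large one.
BETA = 0.5             # sensitivity estimated from ordinary, marginal moves
TIPPING_POINT = -0.10  # assumed threshold beyond which forced selling kicks in

def true_response(shock):
    """Assumed non-linear response: linear until the tipping point, amplified after it."""
    if shock > TIPPING_POINT:
        return BETA * shock
    return BETA * shock + 3.0 * (shock - TIPPING_POINT)

for shock in (-0.02, -0.05, -0.10, -0.25):
    print(f"shock {shock:+.2f}: linear estimate {BETA * shock:+.3f}, "
          f"assumed true impact {true_response(shock):+.3f}")
```

The correlation estimated on small moves is not wrong; it is simply silent about what happens beyond the region in which it was measured.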