Proper Risk Tools Can Lead To Improved Returns, Riskdata Study Says

A Riskdata study of hedge fund performance during the recent credit crisis shows that, using a non-linear, factor-based model, it would have been possible prior to the crisis to reduce the proportion of hedge fund “time bombs” – those managers who have simply been lucky rather than skilled in their past performance. Such an approach would have led an investor in a broad hedge fund portfolio to a 4% return over the nine-month period ending in March 2008, compared with a 0% return for the hedge fund universe as a whole.

Since summer 2007, a major hedge fund crisis has been triggered by a series of exploding time bombs – in other words, massive drawdowns at hedge fund managers whose track records prior to June 2007 contained nothing that could have suggested such high risk.

The Riskdata study tackles the question of whether funds that would fare badly under extreme conditions can be detected well in advance of market stress. It looked at the performance from July 2007 to March 2008 of 3,200 hedge funds and funds of hedge funds that report returns to the HFR Database (Hedge Fund Research, Inc.) and had a track record covering at least December 2004 to January 2008. The benchmark portfolio on which the study is based is equally weighted (iso-weighted) across these 3,200 funds. The study broke the funds down into three groups:

Group A – 389 funds (12%) for which the crisis was business as usual; they earned an average return of +8.7% in the period;

Group B – 2,098 funds (65%) that experienced very high drawdowns (an event that would normally occur only once every eight years under a normal distribution), but in line with the extreme risk they had shown prior to the crisis. Investors thus had no reason to be surprised by this performance, given market conditions. The average performance of this group was +1.7%;

Group C – 729 funds (23%) that experienced drawdowns that were high not only relative to their normal distribution models, but also relative to their previous maximum drawdowns (more than twice past drawdowns). In other words, nothing in these funds’ track records could have alerted an investor to such a level of loss. This group returned -9.4% on average. Unsurprisingly, the highest proportion of “time bomb explosions” was observed among credit-related and relative-value strategies (40% of exploded time bombs), while the proportion was much smaller among CTA, short-bias and macro funds.
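As an illustration, the grouping logic above can be sketched in a few lines of code. The thresholds here – a roughly once-in-eight-years monthly loss approximated as 2.3 standard deviations under a normal distribution, and the twice-past-drawdown rule for Group C – are assumptions chosen to mirror the article’s description, not Riskdata’s exact methodology.

```python
import numpy as np

def max_drawdown(returns):
    """Peak-to-trough decline of the cumulative return path."""
    wealth = np.cumprod(1.0 + np.asarray(returns))
    peaks = np.maximum.accumulate(wealth)
    return float(np.max(1.0 - wealth / peaks))

def classify_fund(pre_crisis, crisis):
    """Assign a fund to Group A, B or C in the spirit of the study.
    Thresholds are illustrative assumptions, not the study's exact rules."""
    mu, sigma = np.mean(pre_crisis), np.std(pre_crisis, ddof=1)
    # A once-in-eight-years monthly event (~1/96 probability) sits near
    # 2.3 standard deviations below the mean under a normal distribution.
    extreme_loss = mu - 2.3 * sigma
    if min(crisis) >= extreme_loss:
        return "A"  # crisis was business as usual
    if max_drawdown(crisis) <= 2 * max_drawdown(pre_crisis):
        return "B"  # severe, but in line with past extreme risk
    return "C"  # "time bomb": loss beyond anything in the track record
```

A fund whose crisis losses stay inside its own pre-crisis extreme-loss bound lands in Group A; one whose crisis drawdown more than doubles anything in its track record lands in Group C.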

According to Riskdata, there are two principal approaches to measuring such risks, and Riskdata tested how each fared over the crisis period.

One approach is to use a sophisticated return-based risk model, which involves staying away from funds seen to have an abnormal return distribution. This means avoiding funds whose extreme risk is high compared to their “usual business” risk. Using return-based risk models effectively eliminates extreme risk takers from the portfolio, but the benefit of this elimination is largely offset by the fact that it also eliminates successful risk takers, and it completely fails to detect time bombs with hidden risk. This approach would have produced a 0.4% return over the July–March period, not much improvement on the benchmark portfolio return of 0%.
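A minimal sketch of what such a return-based screen might look like, assuming monthly returns: compare a fund’s empirical extreme loss with the loss a normal distribution of the same volatility would imply, and drop funds where the gap is large. The quantile level and the 1.5 cutoff are illustrative assumptions; the study’s return-based models are more sophisticated.

```python
import numpy as np

def tail_ratio(returns, q=0.01):
    """Empirical 1% loss relative to the loss implied by a normal
    distribution with the same mean and volatility (2.33 sigma at the
    99th percentile). Values well above 1 flag an abnormally fat left
    tail, i.e. extreme risk out of line with 'usual business' risk."""
    r = np.asarray(returns)
    empirical_loss = -np.quantile(r, q)
    normal_loss = 2.33 * r.std(ddof=1) - r.mean()
    return empirical_loss / normal_loss

def return_based_screen(funds, threshold=1.5):
    """Keep only funds whose extreme risk looks in line with their usual
    risk; the 1.5 cutoff is an illustrative assumption."""
    return {name: r for name, r in funds.items()
            if tail_ratio(r) <= threshold}
```

Note that such a screen sees only realized returns, which is exactly why – as the study argues – it cannot catch a fund whose risk has simply not materialized yet.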

The alternative is to use a non-linear, factor-based model, which focuses on modeling the relationship between market factors and manager returns. This approach seeks to detect time bombs by identifying funds whose predicted risk, based on the long-term risk of the underlying factors, is significantly higher than the observed maximum drawdown. Using this criterion, an investor can eliminate funds whose predicted extreme risk (using the full factor history) is more than twice the observed past maximum drawdown, or 2.3 times the fund’s volatility (which corresponds to the 99th percentile of a normal distribution). Using a non-linear, factor-based model – the method that forms the core approach of Riskdata’s FOFiX and HEDGiX risk management tools – an investor would have avoided most potential time bombs while keeping successful risk takers in the portfolio. As a result, the investor would have outperformed the benchmark by 4%.
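The elimination rule itself is easy to state in code. The sketch below assumes the predicted extreme loss has already been produced by a non-linear factor model run over the full factor history (that model is not reproduced here), and combining the two references via `max()` is an assumption about how the article’s criterion is meant to be read.

```python
import numpy as np

def max_drawdown(returns):
    """Peak-to-trough decline of the cumulative return path."""
    wealth = np.cumprod(1.0 + np.asarray(returns))
    peaks = np.maximum.accumulate(wealth)
    return float(np.max(1.0 - wealth / peaks))

def is_hidden_time_bomb(predicted_extreme_loss, track_record):
    """Flag a fund if the factor-model forecast of extreme loss is more
    than twice the larger of (a) the observed past maximum drawdown and
    (b) 2.3x volatility (~99th percentile of a normal distribution).
    Treating max() of the two as the reference is an assumption."""
    r = np.asarray(track_record)
    reference = max(max_drawdown(r), 2.3 * r.std(ddof=1))
    return predicted_extreme_loss > 2.0 * reference
```

In this reading, a fund with a placid track record but a factor-model loss forecast several times larger than anything it has shown gets excluded, while a fund whose forecast is consistent with its history stays in the portfolio.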

Raphael Douady, Research Director of Riskdata says: “This study demonstrates that pure return-based models, however sophisticated, are insufficient to support sound risk budgeting. Such models can help reduce risk levels, but do not reduce ‘hidden’ risk nor help find ‘good’ risk. An efficient non-linear, factor-based model is the only approach that can help investors discriminate between ‘lucky’ managers and the talented ones. The major advantage of factor-based models versus return-based ones is that they draw on the long term history of market factors, including crises, even if funds have a short history. Non-linearity is a key feature to capture the correlation breaks that occur during crises and liquidity traps.”
