Comment: Risk modellers take flak

Nov 24, 2010

A scathing article that appeared in a Florida newspaper recently underlines that the market needs to stop taking risk models’ output as gospel, says Reactions’ editor Michael Loney.

Risk modelling firm Risk Management Solutions (RMS) was the subject of a scathing article in Florida’s Herald Tribune in November. The article said that, following Hurricane Katrina in 2005, RMS aggressively pushed a new short-term model.

The lurid article, headlined “Florida insurers rely on dubious storm model”, stated: “RMS, a multimillion-dollar company that helps insurers estimate hurricane losses and other risks, brought four hand-picked scientists together in a Bermuda hotel room. There, on a Saturday in October 2005, the company gathered the justification it needed to rewrite hurricane risk. Instead of using 120 years of history to calculate the average number of storms each year, RMS used the scientists’ work as the basis for a new crystal ball, a computer model that would estimate storms for the next five years. The change created an $82bn gap between the money insurers had and what they needed, a hole they spent the next five years trying to fill with rate increases and policy cancellations.”

The article goes on to say that some hurricane specialists are now sceptical of RMS’s claimed “scientific consensus” for its new model. “Today, two of the four scientists present that day no longer support the hurricane estimates they helped generate,” the article says. “Neither do two other scientists involved in later revisions. One says that monkeys could do as well.”

The article adds that as a result of RMS’s model change the cost to insure a home in parts of Florida hit world-record levels.

(Full disclosure – Reactions and RMS share an ultimate parent in the Daily Mail & General Trust).

A number of points should be made in response to this article. Firstly, it is not risk modellers that set rates – insurance and reinsurance firms do that. Secondly, no one is forcing insurance and reinsurance firms to use the RMS model; RMS is only one of several modelling firms. Thirdly, all the models were way off on Katrina, so the loss estimates were always going to increase (the question was by how much). Lastly, it is ludicrous to blame all of the rate increases after Hurricane Katrina on a model change. Capital was depleted – the laws of supply and demand kicked in, pushing up pricing. The effects of model changes came on top of that.

The problem for RMS is that no big events have struck since Katrina. The higher losses its new model predicted never came, making it look like the near-term model was not needed.

RMS responded in a letter to the editor of the Herald Tribune. It pointed out, not unreasonably, that the models deliver probabilistic forecasts, not deterministic predictions. It also said there is widespread agreement among scientists that the number of North Atlantic hurricanes has increased since the 1970s, and that activity since 1995 has been “significantly higher than the long-term average since 1900”. The question, RMS said, is how much higher the frequency is and how it will affect hurricanes making landfall in the US.
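
To make that distinction concrete, the sketch below simulates five-year landfall counts under two frequency assumptions. The rates, and the Poisson framing, are illustrative assumptions of mine, not RMS’s actual parameters; the point is only that a probabilistic forecast is a spread of outcomes, not a single number.

```python
import math
import random

# Illustrative annual US landfall rates -- assumptions for this sketch,
# not RMS's actual figures.
LONG_TERM_RATE = 1.7    # hypothetical long-term average, landfalls per year
MEDIUM_TERM_RATE = 2.2  # hypothetical elevated medium-term rate

def sample_poisson(rate):
    """Draw one Poisson-distributed annual storm count (Knuth's method)."""
    limit, k, p = math.exp(-rate), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def five_year_forecast(rate, trials=100_000):
    """Spread of total landfalls over a five-year window."""
    totals = sorted(sum(sample_poisson(rate) for _ in range(5))
                    for _ in range(trials))
    mean = sum(totals) / trials
    return mean, totals[int(0.05 * trials)], totals[int(0.95 * trials)]

for label, rate in (("long-term", LONG_TERM_RATE),
                    ("medium-term", MEDIUM_TERM_RATE)):
    mean, low, high = five_year_forecast(rate)
    print(f"{label}: mean {mean:.1f} landfalls, 90% of trials fall in {low}-{high}")
```

Even with the elevated rate, the two distributions overlap heavily – which is exactly why the argument is over how much higher the frequency is, not over a single knowable answer.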

RMS said: “The scientists were deliberately kept at a distance from the commercial implications of the recommendations. In our annual review of medium-term activity rates (the next five years), we have worked with a total of 17 leading experts, representing a broad spectrum of opinions.”

It added: “There is no commercial advantage for us to overstate the risk.”

But the article does raise an important issue – the industry’s over-reliance on models. The Herald Tribune article stated: “RMS continues to promote its short-term model as the preferred option for its customers. A survey by Bermuda officials shows it is the dominant model for Bermuda reinsurers, the most crucial source of private hurricane protection for Florida.”

This is further evidence that insurers and reinsurers need to question what the models tell them. That is not the modellers’ fault. The models will never be perfect and they will always be wrong. Ironically, they normally come in too low.

But equally, modellers should be careful not to overstate the accuracy of their models. Karen Clark, CEO of risk modelling consulting firm Karen Clark & Company and founder of AIR Worldwide, an RMS rival, believes this is a problem. “I thought it was a pretty good article,” Clark told me after the Herald Tribune piece came out. “I agree with the gist of what it was saying.”

Clark says three things were on the minds of insurance companies after Katrina: the industry had just had two years in a row of big cat losses; there was a sense that all the models had been too low on Katrina; and there was a tremendous amount of press around the storm, some of it even claiming Katrina was caused by climate change.

“There was pressure to come out with models,” says Clark. “What happened is RMS came out with a near-term model that said hurricane frequency would increase by 40% over the next five years. That is fine if there is some science behind it to support an increase in frequency, but it is quite another thing to say: OK, we throw out the model we have been using and move to the near-term one.”
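
The arithmetic of such a change is blunt. In the simplest frequency-severity view, expected annual loss is frequency times average loss per event, so a 40% frequency uplift flows straight through to a 40% rise in modelled loss. The figures below are invented purely to show the mechanism, not drawn from any actual model or portfolio.

```python
# Toy frequency-severity arithmetic. The portfolio figures are invented
# for illustration only.
baseline_frequency = 0.05     # hypothetical annual chance of a damaging landfall
mean_loss_per_event = 400e6   # hypothetical average portfolio loss per event ($)

baseline_eal = baseline_frequency * mean_loss_per_event

# The near-term view: frequency up 40%, severity unchanged.
uplifted_eal = (baseline_frequency * 1.40) * mean_loss_per_event

print(f"baseline expected annual loss:  ${baseline_eal:,.0f}")
print(f"near-term expected annual loss: ${uplifted_eal:,.0f}")
print(f"increase: {uplifted_eal / baseline_eal - 1:.0%}")   # 40%
```

For an insurer whose reinsurance is priced roughly off that expected loss, the premium implication is of the same order – which is Clark’s point about real money.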

Clark says RMS placed too much emphasis on its near-term model.

“They were kind of oversold,” says Clark. “It was money to real people. A small insurer may have to buy a lot more reinsurance and not be able to afford it. So the question is: is the science enough to justify it? A handful of scientists came up with that number and they never really had that use in mind for it.”

Clark says modellers overemphasise the outcomes of their models, which can give answers down to the cent. She says the data behind the models is simply too uncertain to justify that precision, and that a range should be provided instead.
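
A minimal sketch of what Clark is asking for: carry the uncertainty in a key input (here the frequency parameter, with bounds I have invented) through to the output, and quote percentiles rather than one cent-precise figure.

```python
import random

# All numbers invented for illustration.
MEAN_LOSS_PER_EVENT = 400e6   # hypothetical average loss per event ($)

def expected_loss(frequency):
    return frequency * MEAN_LOSS_PER_EVENT

# The frequency parameter itself is uncertain: sample it from a
# plausible range instead of fixing one value.
samples = sorted(expected_loss(random.uniform(0.04, 0.07))
                 for _ in range(100_000))

point = expected_loss(0.055)                 # the single number a model might quote
low, high = samples[5_000], samples[95_000]  # 5th and 95th percentiles

print(f"point estimate:      ${point:,.2f}")               # precise to the cent...
print(f"90% plausible range: ${low:,.0f} - ${high:,.0f}")  # ...a more honest statement
```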

“As humans we fundamentally don’t like uncertainty,” she says. “CEOs value a number. The problem is that the modellers do oversell the answer the models give. When models change it is just new research changing the number. So we have to change that; we have got to start talking about the unknown, rather than the known. We need to move to recognising the uncertainty, both as modellers and as users.”

In short, the market needs to stop taking the models’ output as gospel and modellers need to stop acting as if their models are more than an educated guess.

Some executives have also urged caution when using the models. Former Hannover Re CEO Wilhelm Zeller was fond of using the phrase ‘a fool with a tool is still a fool’ in reference to risk models. Tad Montross, CEO of Gen Re, has also highlighted models’ shortcomings while adding that they are still the best thing the market has to help assess its risk.

Writing in our CEO Risk Forum 2010 earlier this year, Montross said: “Probabilistic cat models have been around for 30 years and have brought much greater discipline and focus to the management and quantification of catastrophe exposures. This has been a positive development for the industry. Having said that, the actual track records of the models have not been good. The one thing we can all agree on is that the model estimates are wrong. Just last year the initial estimates for Hurricane Ike were 50% to 60% off the mark. So the actual to modelled variance can be huge – suggesting a large margin of safety is appropriate when using these tools to measure capital at risk.

“Particularly in extreme events, the variance can be even larger since the calibration is more difficult. Why do we invest so heavily and spend so much time using cat models to measure and manage our accumulations? Simply because, while imperfect, they are the best tool we have.

“While many industry reports and analysts speak to the 1% or 0.2% loss amounts, few qualify their statements with supporting information on how the model was actually used and parameterized. The judgments with respect to occurrence vs aggregate loss amounts, VaR vs TVaR, storm surge, medium vs long-term frequencies, loss amplification, secondary uncertainty and data quality/resolution can produce wildly different loss estimates. In some cases the range can be twofold. Startling, given the aura of precision the EP (exceedance probability) curves project.”
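
To see how easily those judgment calls move the answer, consider a toy year-loss table – every parameter below is invented – comparing occurrence versus aggregate losses, and VaR versus TVaR, at the 1% and 0.2% probabilities Montross mentions.

```python
import math
import random

random.seed(1)

# Toy 10,000-year simulated catalogue: Poisson event counts, lognormal
# severities. All parameters are invented for illustration.
def sample_poisson(rate):
    limit, k, p = math.exp(-rate), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

occurrence, aggregate = [], []
for _ in range(10_000):
    losses = [random.lognormvariate(17, 1.5) for _ in range(sample_poisson(0.6))]
    occurrence.append(max(losses, default=0.0))  # largest single event in the year
    aggregate.append(sum(losses))                # all events in the year combined

def var(losses, prob):
    """Loss exceeded with annual probability `prob` (0.01 = 1-in-100)."""
    return sorted(losses)[int((1 - prob) * len(losses))]

def tvar(losses, prob):
    """Average loss beyond the VaR threshold (tail value at risk)."""
    tail = sorted(losses)[int((1 - prob) * len(losses)):]
    return sum(tail) / len(tail)

for prob in (0.01, 0.002):
    print(f"p={prob:.3f}: occurrence VaR ${var(occurrence, prob):,.0f}, "
          f"aggregate VaR ${var(aggregate, prob):,.0f}, "
          f"aggregate TVaR ${tvar(aggregate, prob):,.0f}")
```

Even on this invented data, switching between occurrence and aggregate bases, or between VaR and TVaR, moves the quoted number substantially before a single modelling assumption has changed – which is precisely the aura of false precision Montross warns against.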