ETech: I Just Don't Trust You: How the Tech Community Can Reinvent Risk Ratings

My favorite conference of the year, ETech, kicked off its general sessions today and it's looking as stimulating as ever! While the topics covered by the conference have become less hard-core geeky, they have become greener and broader. Sustainability, the environment and becoming better global citizens are just a few of the themes touched on this morning. ETech continues to make me think, which is the primary reason I keep coming back for more.

The first session I'd like to share with you was Toby Segaran and Jesper Anderson's "I Just Don't Trust You: How the Tech Community Can Reinvent Risk Ratings" presentation. Toby and Jesper posited that the system for rating credit instruments is horribly broken. Right before Lehman Brothers collapsed, Moody's credit rating agency gave Lehman Brothers an A2 credit rating (this post originally said AAA; see Toby's correction in the comments). Moody's downgraded Lehman Brothers immediately after the collapse — a little too late! Jesper and Toby outlined four reasons why the current system fails to do its job:

  1. Payments create bad ratings. NRSROs (Nationally Recognized Statistical Rating Organizations) like Moody's, S&P and Fitch all take payments to rate a given financial instrument, and this incentivizes them to produce false ratings. The structural problems that allow these conflict-of-interest transactions to occur beget a host of moral failures. This behavior amounts to bribery that doesn't send anyone to jail.
  2. Opacity creates bad ratings. Credit rating agencies enjoy patent-like protection in the US, but without the requirement of public disclosure. They can operate on whatever principles they choose, which leaves their rating system nearly meaningless. What does a AAA rating mean? The current system makes an AAA rating no more meaningful than a gold star awarded in grade school. A closed consensus model doesn't explore alternative options, and it begs the question of whether the current risk models are suitable for doing their job.
  3. Lack of an ecosystem creates bad ratings. Today's market ignores good predictions, only to celebrate them after the fact when the market starts crashing. We need to incorporate changing knowledge of the market into the models and constantly evolve them. Creating an ecosystem that can review financial information in broad daylight and encourage greater accountability will produce a more robust and accurate credit rating system. It's important to remove the incentives for secrecy.
  4. A single source of information creates bad ratings. A single viewpoint on financial data is too narrow to do justice to the complex market for credit ratings. Any and all sources of information need to feed into the model for the greatest chance of success. The current model encourages blinkered thinking, when credit ratings should focus on analyzing as many sources of data as possible.
With the problems defined, Toby and Jesper propose the following requirements for a new credit rating system:

  • Ratings need to be accessible. Credit ratings need to operate from an information commons. Debt is about trust — a Digg-style system won't work here.
  • Ratings need to be open. Everyone needs to be able to see the complete view of the landscape.
  • Ratings need to be diverse. Many people need to take part in credit rating. Any new system needs to be built as a Bazaar, not a Cathedral!
  • Ratings need to be transparent. Transparency is a vital component of a new system — opacity only hides the shady practices of incumbents and prevents the system from really working.

And rather than ranting without substance, Jesper and Toby have been working on a new system at Freerisk.org that acts as a specialized front end to the FreeBase database. Based on Open Data principles, FreeBase contains public SEC records (among many other datasets, including the MusicBrainz data). FreeBase exposes the SEC records in RDF via a public API, which makes accessing SEC data much easier than before.
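As a rough sketch of what consuming that data could look like (the endpoint URL and vocabulary below are placeholders, not the actual Freerisk or FreeBase API), a few lines of Python with rdflib are enough to pull an RDF document and list the facts it contains:

```python
# Minimal sketch: fetch a company's filing data as RDF and enumerate its triples.
# The endpoint URL is a placeholder; the real Freerisk/FreeBase API may use a
# different URL scheme and vocabulary.
import rdflib

ENDPOINT = "http://example.org/freerisk/company/lehman-brothers.rdf"  # hypothetical

graph = rdflib.Graph()
graph.parse(ENDPOINT, format="xml")  # assumes the API serves RDF/XML

# Each triple is one (subject, predicate, object) fact about the company.
for subject, predicate, obj in graph:
    print(subject, predicate, obj)
```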

Jesper and Toby invite the general public to come and take part in this nascent project — they hypothesize that by exposing the data to more people, anyone will be able to create a risk calculator. And of course, if everyone tried to create a new risk calculator, we're bound to find new models that work better than the current ones that allow financial catastrophes to happen. They conclude: "With a little help, we believe that we can beat the NRSROs!"
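To give a flavor of what "anyone can create a risk calculator" might mean in practice, here is a deliberately naive sketch; the ratios, weights and caps are invented for illustration, and any real model would have to be calibrated against historical default data.

```python
# Toy credit-risk calculator built from two balance-sheet ratios.
# All weights and caps below are made up for illustration only.

def toy_credit_score(total_debt, total_equity, ebit, interest_expense):
    """Return a rough 0-100 score; higher means lower apparent credit risk."""
    leverage = total_debt / max(total_equity, 1e-9)   # debt-to-equity ratio
    coverage = ebit / max(interest_expense, 1e-9)     # interest coverage ratio
    score = 100.0
    score -= min(leverage, 5.0) * 10.0   # penalize leverage, capped at 5x
    score += min(coverage, 10.0) * 2.0   # reward coverage, capped at 10x
    return max(0.0, min(100.0, score))

# Example: a moderately leveraged company with decent interest coverage
print(toy_credit_score(total_debt=80e9, total_equity=25e9,
                       ebit=6e9, interest_expense=2e9))
```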

If the current financial crisis bothers you and you feel angry enough to help look for a solution, go visit freerisk.org. Thanks for the informative talk, Toby and Jesper!

  • Falafulu Fisi

    Some financial analysts have blamed Moody's and other credit rating agencies for their systems' over-reliance on Monte Carlo based methods for risk management, which ultimately failed to assess the risks properly.
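    For concreteness, a bare-bones Monte Carlo value-at-risk estimate looks something like the sketch below. Note that drawing returns from a normal distribution, as this does, is precisely the thin-tailed assumption these methods are criticized for; it is an illustration, not an endorsement.

```python
# Bare-bones Monte Carlo VaR: simulate many return paths and read off the loss
# at the chosen tail quantile. Normal returns are assumed purely for illustration.
import random

def monte_carlo_var(mu, sigma, horizon_days, confidence=0.99, n_paths=100_000):
    """Estimate value-at-risk as the loss exceeded in (1 - confidence) of paths."""
    pnl = []
    for _ in range(n_paths):
        ret = sum(random.gauss(mu, sigma) for _ in range(horizon_days))
        pnl.append(ret)
    pnl.sort()
    return -pnl[int((1.0 - confidence) * n_paths)]

# 0% drift, 1% daily volatility, 10-day horizon, 99% confidence
print(monte_carlo_var(mu=0.0, sigma=0.01, horizon_days=10))
```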

  • Falafulu Fisi

    Robert Kaye, note the comment by Nassim Taleb quoted by Toby Segaran:

    Nassim Taleb:
    “How many of you believe that Sharpe ratio is bull**? How many of you use portfolio theory, optimal portfolio, and all that cr*p?…If someone starts talking about Sharpe, VaR, and all that, throw them out of your office.”

    I agree that financial risk management systems have been over-sold and hyped up by the financial industry, but that doesn't mean that these methods are crap, as Toby Segaran suggested.

    I believe that Toby Segaran has no experience in the domain of computational finance, and I am surprised that a good author like him views financial engineering in this manner, as a put-down. Modern Portfolio Theory (MPT) is an undisputed method that is in use today by analysts, and although the original model had problems, it has been improved over the years with new methods that have become available in the literature. The inventors of MPT (Sharpe, Markowitz, et al.) shared the 1990 Nobel Prize in Economics.

    These performance metrics (VaR, Sharpe, MPT, etc.) are not crap. Nothing in real life is certain. Models are not 100% correct, because if they were, there would be no research left to do in financial and economic theory, since we would know everything. Models are approximations only, not the ultimate representation of reality.
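    For readers who haven't met the metric being argued about: the Sharpe ratio is just the mean excess return divided by the volatility of those excess returns. A bare-bones computation over a list of periodic returns:

```python
# Sharpe ratio = mean(excess returns) / standard deviation(excess returns).
import statistics

def sharpe_ratio(returns, risk_free_rate=0.0):
    excess = [r - risk_free_rate for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

print(sharpe_ratio([0.02, -0.01, 0.03, 0.005, -0.02, 0.015]))
```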

    The ETech organizers should have invited some quant guys there to talk about what caused the financial meltdown. Toby Segaran is a machine learning expert/researcher, which is a completely different domain from computational finance (financial engineering); although machine learning methods have been applied in technical analysis, it is still a different domain.

  • SandipSen

    This is an "Emperor's new clothes" call. Credit rating agencies and auditors are still holy cows. However, nobody yet dares to touch the credit rating agencies or the methodology they have adopted. The truth is that there are no easy risk measurement and management techniques today. To counter this deficiency for mega projects we are using a variance analysis methodology (refer to The Project Management Time Cycle: Time Cycle Module: TCM, ISBN 1440493332, available at Amazon).

    This enables day-to-day monitoring of data, risk analysis and measurement, and risk mitigation in projects. It is not difficult to transfer these concepts to risk management for the banking sector, with the requisite changes in the data collected.

    Sandip

  • Stephen Henderson

    This is an interesting idea that has some merit, but I think you have to be careful here not to move from one extreme to another. There is a closed system relying on 'experts', and then there is the 'madness of crowds'.

    One might view the current financial catastrophe as a crisis of obfuscation and a lack of transparency (by the ratings agencies); on the other hand, you can view the problem as an asset price bubble created by consensual thinking (by the markets).

    Actually, the markets are swamped with new information and data about companies every day, and often react instantly and emotionally. This might be just another addition to these herd-like mood swings; most hedge funds will analyse this data already.

    My own opinion is not that investment banks have failed due to the business they have been doing off-balance sheet, i.e. not stuff in the SEC reports. This is essentially the Enronisation of investment banking. Eliminate this shadow banking first and then worry about the ratings model later.

  • stephen Henderson

    sorry that should read:

    My own opinion is that investment banks have failed due to the business they have been doing off-balance sheet….

  • http://www.snee.com/bobdc.blog Bob DuCharme

    It would be very interesting to see Moody’s give anyone an AAA rating, because that’s a designation from the ratings scale of S&P, their chief competitor.

    Before focusing on what is hidden at the ratings agencies, it might be a good idea to get the public parts of what they do straight.

  • http://blog.kiwitobes.com Toby Segaran

    Just a few clarifications –

    Falafulu: Although we opened the abstract with the Taleb quote, the talk was not about VaR or the quality of specific risk-modeling techniques. It was about lack of transparency in models used and infrastructure ideas for opening up risk modeling. Also, if “some quant guys” had submitted an abstract about the crisis, there’s a good chance they would have been invited.

    Bob: There's actually a mistake in the writeup; in the talk we stated that Lehman had an "A2" rating from Moody's. However, your statement is also incorrect: Moody's does give "Aaa" ratings — see http://en.wikipedia.org/wiki/Moody%27s and the citation on that page.

  • http://i2pi.com Joshua Reich

    As a quant guy I have to say that Toby & Jesper’s work is certainly in the right direction.

    It is a shame that the Money:Tech conference was cancelled this year, as it would have been a great venue for discussing these issues.

    My primary take on the credit rating mess (I'll focus solely on MBSs, but the arguments apply across the board) is that the core of credit rating technology dates back to the 1970s. While physical computing technology and mathematical techniques have been updated, the underlying methods have not.

    The entire point of the MBS market was to take uncorrelated risks and package them together to minimize the impact of non-systemic risk. The other point was to create fungible instruments. An individual whole mortgage is not fungible, as there are too many parameters to allow for liquidity behind bids and asks. By pooling loans together and creating single tradeable entities, markets can attain depth and liquidity. Trading ensues.

    However, people forget that behind these instruments lie whole loans which, while dominated by idiosyncratic risks 'most of the time', can suffer from correlated systemic risks. Much of the mathematics that is used focuses on the systemic portions of the risk (e.g., prepayment risk as it relates to prevailing interest rates). Quants forgot about the real world, got jacked up on copulae, and completely ignored the individuality of the component loans. For a toned-down version of Taleb, read Paul Wilmott's thoughts on this.
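    As a toy illustration of the idiosyncratic-versus-systemic point (all parameters invented), here is a one-factor simulation of a loan pool: with no shared factor the worst years look mild, but give every loan even a modest loading on a common factor and the tail losses blow out, no matter how many loans are in the pool.

```python
# One-factor Gaussian copula sketch: loan i defaults when
# sqrt(rho)*M + sqrt(1-rho)*Z_i falls below a threshold set by its standalone
# default probability. M is the shared (systemic) factor, Z_i is loan-specific.
import random
from statistics import NormalDist

def pool_loss(n_loans=1000, p_default=0.02, rho=0.3):
    threshold = NormalDist().inv_cdf(p_default)
    m = random.gauss(0, 1)                  # shared macro factor, one draw per year
    defaults = 0
    for _ in range(n_loans):
        z = random.gauss(0, 1)              # idiosyncratic factor per loan
        defaults += (rho ** 0.5 * m + (1 - rho) ** 0.5 * z) < threshold
    return defaults / n_loans

for rho in (0.0, 0.3):
    losses = sorted(pool_loss(rho=rho) for _ in range(2000))
    print(f"rho={rho}: worst 1% of years lose at least {losses[int(0.99 * len(losses))]:.1%}")
```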

    And even worse, if you read Frank Partnoy's papers on the types of correlation models used inside the CRAs, you will see fundamentally stupid assumptions.

    My point is that with current computer technology we can build new market designs that support liquidity for what are currently considered unfungible instruments. Look at WeatherBill. Look at the work of Robin Hanson on combinatorial markets. New mechanisms allow us to escape the desire to model risk with a single number.

    Yes, modern portfolio theory is awesome. But look at the assumptions. We are not homogeneous traders. We have our own risk appetites and world views. We should be able to take the entirety of the data that is available on risky instruments and draw our own conclusions about maturity, convexity, prepayment risk and so on, with regard to our own cash flow needs and investment goals. And new market designs will allow us to find instruments that best allow us to take positions in line with our own situations, to maximize our exposure to our own worldviews.

    I do think technologists (being one myself) often take the viewpoint that there is 100% truth in data, and that if only we had all the data, we would be able to predict the future. The future is messy, fat-tailed and dangerous to predict, but we can make great strides if we admit this and stop trying to package ratings as a single one-size-fits-all number.

    /rant

    NB: Let me plug myself a bit here. For a related discussion, see my post Data Trades Inversely to Liquidity: http://blog.i2pi.com/index.cgi?p=186

  • http://bexhuff.com bex

    Freerisk won’t ever work…

    If this system ever got off the ground, then it would just be one more attack vector for folks to game the system. CEOs always keep some stuff off the books to inflate the apparent value of their company. A system like this will only point them to exactly how to cheat.

    Perhaps there needs to be a disincentive for the ratings agencies to cheat? Perhaps there should be laws that AAA bonds can only default 1% of the time… Anybody can be a "ratings agency," and they can take as many bribes as they want… but if their AAA bonds default more often than that, they can be sued into oblivion and may never work in the financial industry again.

    I say, let there be 50 ratings agencies, and give them an incentive to take each other out.

  • http://i2pi.com Joshua Reich

    (I guess my previous comment got lost in the system. Maybe someone can bail it out)

    First up, congrats Toby & Jesper on tackling this issue. It is a shame that the Money:Tech conference was cancelled this year, as it would have been the perfect venue to get a good mix of quants and techs together and spark some serious discussion. Barring that, I wanted to chime in with my own 2c as a techy/quant guy.

    In my previous life I did a fair bit of equity research on the ratings agencies; the views here are mine and probably not shared by others (even Roubini thought I was nuts). I have no positions in any CRAs right now.

    While the talk focussed on corporate bond ratings, the largest growth area for these agencies was in structured finance. And much of this was mortgage backed securities and similar derivatives. So to understand the mess we are in now we need to look at the history of these instruments.

    MBSs were born for two key reasons. First was the realization that in 'normal' times the dominant risks were idiosyncratic in nature and as such could be minimized through the application of portfolio theory and diversification, leading to pooled entities with smoothed cash flows and tranches providing for the needs of various risk profiles. In my opinion this story is primarily the sizzle. The real steak was the fact that by aggregating whole loans together, new tradeable instruments could be formed.

    The problem with whole loans was that their pricing was highly dependent on a large vector of unstandardized parameters, whose diversity precluded the depth necessary to support liquidity in traditional market designs. By eliminating the idiosyncratic risk components, these pooled instruments could theoretically be summarized by a small set of parameters, and relatively simple models for prepayment risk meant that traders could respond to bids and asks against them.
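    To see why "a small set of parameters" felt sufficient, here is a deliberately crude sketch that projects a pool's monthly cash flows from nothing more than a balance, a coupon, a term, and a constant prepayment rate (CPR). Real MBS models are far richer; this only shows the level of description the market was comfortable trading on.

```python
# Project pool cash flows under a constant prepayment rate (CPR).
# smm is the single monthly mortality implied by the annual CPR.

def pool_cash_flows(balance, annual_rate, months, cpr):
    r = annual_rate / 12
    smm = 1 - (1 - cpr) ** (1 / 12)
    flows = []
    for m in range(1, months + 1):
        remaining = months - m + 1
        payment = balance * r / (1 - (1 + r) ** -remaining)  # level payment on remaining balance
        interest = balance * r
        scheduled = payment - interest
        prepay = smm * (balance - scheduled)
        flows.append(interest + scheduled + prepay)
        balance -= scheduled + prepay
    return flows

flows = pool_cash_flows(balance=100e6, annual_rate=0.06, months=360, cpr=0.08)
print(f"first month: ${flows[0]:,.0f}, final month: ${flows[-1]:,.0f}")
```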

    Faced with these simple models, quants took off in a great fantastical leap and applied ever more complex techniques to model out pricing, forgetting to properly model what happens in non-normal times. For one take, see Felix Salmon's recent piece in Wired. Or look to Paul Wilmott's take on the ever-escalating departure into a mathematical wonderland that ignored the realities of the underlying loans and their associated risks.

    Somewhere along the line practitioners forgot that the technology underpinning the frothy new market was based on 1970s financial and computational technology. Back then, a bank of associates armed with HP-12Cs could price out MBSs using a small set of descriptive parameters.

    Over the next 20 years more computing power was thrown at the problem, but the basic data was still confined in scope. Sure, some funds were taking apart these pools and doing a deep analysis of the components, but there wasn’t much reward in doing so as the market was moving at such an upward clip.

    Even worse, if you look at the papers from Frank Partnoy, the credit rating agencies, who were supposed to be taking a deeper look at these securities without the demands of second-by-second trading, were using plainly silly assumptions. There was a huge amount of mathematical and financial stupidity going on. And that's not even mentioning the conflicts of interest and the regulatory arbitrage at play…

    Anyway… To address Falafulu's point about MPT: I agree, MPT is great stuff, a very powerful framework by which to understand finance. But just look at the assumptions. Sure, those assumptions make the math tractable, but modern computing power enables us to take a more nuanced view of the world. We no longer have to rely on single parameters of 'default risk' to price these instruments. The market would be far better served if everyone could take all the available data on the underlying components and use their own information about their own risk profile to come up with better measures of value. Just compare David Einhorn's spreadsheet with a report put out by Moody's. It is night and day. Give me the data, not some puff piece of pseudoscientific nonsense passing itself off as high finance.

    The original problem with trading whole loans, namely that there were too many parameters to support liquid markets, is no longer an issue. Look at WeatherBill. Look at Robin Hanson's work on combinatorial market mechanism design. Falafulu, sure, some smart people were recognized for their groundbreaking work of decades ago. But the most recent John Bates Clark Medal in economics went to Susan Athey, who is doing some fantastic work in mechanism design.

    Computational power is such that we no longer need to pretend that all financial instruments have to be priced with a slide rule. We have new marketplaces that can effectively support trade in financial instruments with high dimensionality. We have the computational power to let traders value these instruments. What we don't have is the data.

    Give us the data and we will trade.

    For a less ranty take on my world view, check out my blog post at http://blog.i2pi.com/index.cgi?p=186

  • Falafulu Fisi

    Toby said…
    Also, if “some quant guys” had submitted an abstract about the crisis, there’s a good chance they would have been invited.

    Was the ETech conference announced on the Wilmott forum? That's where all the quants hang out. Did you tip off your ex-colleague Prof. Andrew Lo at the MIT financial engineering lab? I am sure that some quants out there in the industry would have loved to attend if they had known about it, assuming that you didn't announce the ETech conference to the quant communities out there.

  • http://jesperandersen.net Jesper Andersen

    Thanks for all the feedback, this is great.

    Bex:
    Let's be clear: we completely agree that there should be 50 ratings approaches; Freerisk is designed to do the heavy lifting of getting started with exploring quantitative risk measurement. There will never be any official Freerisk score, only a framework for others to create scores and publish them back into a single location, so that others can select the ratings approach they want, with whatever strengths and weaknesses it may bring. The few models we presented aren't specifically endorsed by us; they are only case studies to better understand the platform.
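    To make the "framework, not a score" idea concrete, here is a hypothetical sketch (not the actual Freerisk API) of the shape it could take: anyone registers a scoring function, and every registered score is reported side by side for the same company data.

```python
# Hypothetical plug-in registry for competing risk scores; illustration only.

RATERS = {}

def register(name):
    """Decorator that publishes a scoring function under a given name."""
    def wrap(fn):
        RATERS[name] = fn
        return fn
    return wrap

@register("leverage_only")
def leverage_score(company):
    return 100 - 10 * company["debt"] / company["equity"]

@register("coverage_only")
def coverage_score(company):
    return 10 * company["ebit"] / company["interest_expense"]

def all_scores(company):
    """Run every registered rater so their answers can be compared side by side."""
    return {name: fn(company) for name, fn in RATERS.items()}

print(all_scores({"debt": 80, "equity": 25, "ebit": 6, "interest_expense": 2}))
```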

    The punitive incentives idea is interesting; another approach that deserves consideration.

    Falafulu:
    I don't think it's fair to lay any blame on the ETech team or O'Reilly. There were a number of industry people at our presentation, with a diverse set of opinions. I'm sure some of the stronger ones will make it out here in the comments, or elsewhere on the web. We certainly welcome any constructive feedback.

    There seems to be some confusion that we believe financial quants are the problem here. In fact that's not true at all; Freerisk needs to appeal to quants for the algorithms that fuel its ecosystem of ratings. If anything, Freerisk raises the stakes in terms of depending on algorithmic and model-driven approaches to assessing risk. We simply introduce a more open arena for these approaches to compete, and provide a new set of market forces for existing credit rating agencies.

  • http://bexhuff.com bex

    @Jesper I see… so Freerisk is more of a system to help people “get started” doing risk analysis, and then they can fill in the quantitative analysis with their own “off the books” analysis.

    That makes sense… I can see how this adds value.

  • http://blog.kiwitobes.com Toby Segaran

    @Falafulu I really like Andrew Lo’s work (in fact, I can see his book on my shelf from where I’m sitting) but we’ve never worked together. I would love to have a conversation with him about this.

    Also, thanks for the tip about the Wilmott forum, it looks like a great resource.

  • http://i2pi.com Joshua Reich

    No idea why my comments aren’t being posted to this blog. I copied my last comment and ran it on my own blog. Discussion is beginning over there too.

    See http://blog.i2pi.com/?p=187

  • Falafulu Fisi

    Toby Segaran, I am interested in your Freerisk project; perhaps I can contribute at some stage once it has taken off. I am interested in the domain of financial numerical modeling and I have implemented a number of different models (various derivatives and fixed-interest instruments). I started developing these algorithms as a hobby, but I have decided to go further and turn them into a commercial product. So now I have free time to concentrate on it.

    The literature is so huge that no one person in this domain can know everything from one end of the spectrum to the other. The best free online resources that I use are:

    RePEc (Research Papers in Economics)

    SSRN (Social Science Research Network)

    There are various other economics and finance related journals available (many of them); if I see a title that I like, I request a free copy from the author directly. I have always received a copy of the publication requested, since buying an article from a publisher's site is expensive, ranging from $30 to $50 US per article online.

    Joshua Reich said…
    modern computing power enables us to take a more nuanced view of the world.

    Yes, but that is what algorithmic trading is meant to do: high-end intensive computation that trades in the blink of an eye when an opportunity arises.

    Josh said…
    The market would be far better served if all available data for the underlying components and use their own information about their own risk profile to come up with better measures of value.

    Who owns the data? It costs money to aggregate the data. So whoever is paying to gather all the available data, you can be sure it is not free, or that vendor wouldn't be in business in the first place. It would be good if the data were free, but that is wishful thinking, and we have to understand that this is the nature of the free market. Someone produces a service or a product, and someone else buys that service or product. Moody's, Fitch and the rest are private companies. If a vendor feels obliged to provide a free service to replace those commercial vendors, that's fine, but do you think there is some Santa Claus company out there willing to step in and do that? I doubt it.

  • http://i2pi.com Joshua Reich

    Falafulu: Who owns the data? Well, the SEC mandates timely distribution of data to the general public. Currently that data is disseminated in a difficult-to-manage format. Serious investors pay thousands of dollars per month to be able to access the data in computer-readable formats. While the SEC has been transitioning to XBRL, it hasn't happened yet. And that is just on the 10-K equity side.
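    As a small illustration of how thin the free tooling is, the sketch below pulls one quarter's EDGAR form index and filters for 10-K filings. The URL pattern and the plain-text layout of form.idx are assumptions on my part and may have changed, so check the SEC site before relying on them.

```python
# Fetch one quarter's EDGAR form index and print the 10-K filing lines.
# URL pattern and file layout are assumed; verify against the SEC site.
import urllib.request

URL = "https://www.sec.gov/Archives/edgar/full-index/2009/QTR1/form.idx"

req = urllib.request.Request(URL, headers={"User-Agent": "research-script example@example.com"})
with urllib.request.urlopen(req) as resp:
    text = resp.read().decode("latin-1")

for line in text.splitlines():
    if line.startswith("10-K "):   # form type is the first column (assumed layout)
        print(line)
```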

    Investors who don't have that kind of cash are pretty much SOL when it comes to doing fundamental analysis, and in turn have to rely on the drivel you find in sell-side reports.

    It is even worse when you look at structured products. If you've ever read an MBS offering document you'd know that it is a bloody nightmare to consider the work required to consolidate and normalize the data. So most investors rely on the CRAs and the CDO spreads to figure out the details. An echo chamber ensues.

    I envision a world whereby full disclosure in a standardized, normalized format is required to even get the attention of traders. At the moment the regulatory requirements are pretty thin.

    The current structure gives the NRSROs a way to bypass Reg FD in their attempt to formulate ratings. They have far more complete access to the companies' books than the investing public, with the understanding that they will not reveal sensitive details to the public in a way that would violate the normal terms of FD. But they are allowed to use their information to formulate better models of default risk.

    But clearly they aren't using their special regulatory status to make better models. So we have to question why we allow private companies access to data that is not used to benefit the public.

    I don't know what the solution is, but there are many low-hanging fruit between here and there that can be addressed to lower the costs of analysis, so that everyone can make their own models and doesn't have to rely on junk from the CRAs.

  • mateo

    Tell the "Quant Guys" to start factoring in "political corruption", "bailouts" and "too big to fail" variables; then you might have something worth looking at. Seems like they're only halfway done. Nobel, my ass.