Cathy O'Neil

Cathy O'Neil earned a Ph.D. in math from Harvard, was a postdoc in the MIT math department, and was a professor at Barnard College, where she published a number of research papers in arithmetic algebraic geometry. She then chucked it and switched over to the private sector. She worked as a quant for the hedge fund D.E. Shaw in the middle of the credit crisis, and then for RiskMetrics, a software company that assesses risk for the holdings of hedge funds and banks. For the last couple of years she's been a data scientist in the New York start-up scene. She writes a blog at mathbabe.org and is involved with the Occupy Wall Street Alternative Banking Working Group.

On Being a Data Skeptic

"Modelers have a bigger responsibility now than ever before."

People come to data science in all sorts of ways. I happen to be someone who came via finance. Trained as a mathematician, I worked first at a hedge fund and then at a financial risk software company, each for about two years, starting in June 2007 and ending in February 2011. If you look at those dates again, you’ll realize I had a front-row seat for the financial crisis.

I worked on a few projects in algorithmic trading with Larry Summers at the hedge fund and was invited, along with the other quants at Shaw, to see him discuss the impending doom one evening with Alan Greenspan and Robert Rubin. It honestly kind of surprised and shocked me to see how little they seemed to know, or at least admitted to knowing, about the true situation in the markets. These guys were supposed to be the experts, after all.


Nate Silver confuses cause and effect, ends up defending corruption

A math band-aid will distract us from fixing the problems that so desperately need fixing.

This piece originally appeared on Mathbabe. We’re also including Jordan Ellenberg’s counterpoint to Cathy’s original post, as well as Cathy’s response to Jordan. All of these pieces are republished with permission.


I just finished reading Nate Silver’s newish book, The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t.

The good news

First off, let me say this: I’m very happy that people are reading a book on modeling in such huge numbers – it’s currently eighth on the New York Times best-seller list, and it’s been on the list for nine weeks. This means people are starting to really care about modeling, both how it can help us remove biases to clarify reality and how it can institutionalize those same biases and go bad.

As a modeler myself, I am extremely concerned about how models affect the public, so the book’s success is wonderful news. The first step to get people to think critically about something is to get them to think about it at all.

Moreover, the book serves as a soft introduction to some of the issues surrounding modeling. Silver has a knack for explaining things in plain English. While he only goes so far, this is reasonable considering his audience. And he doesn’t dumb the math down.

In particular, Silver does a nice job of explaining Bayes’ Theorem. (If you don’t know what Bayes’ Theorem is, just focus on how Silver uses it in his version of Bayesian modeling: namely, as a way of adjusting your estimate of the probability of an event as you collect more information. You might think infidelity is rare, for example, but after a quick poll of your friends and a quick Google search you might have collected enough information to reexamine and revise your estimates.)
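To make that updating rule concrete, here is a minimal Python sketch of the kind of Bayesian updating described above; the prior and likelihoods are invented for illustration and are not taken from Silver's book.

```python
# A minimal sketch of Bayesian updating: start with a prior belief and
# revise it as each new piece of evidence comes in. All numbers are
# made up for illustration.

def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Posterior P(H | observation) via Bayes' Theorem."""
    numerator = p_obs_given_h * prior
    return numerator / (numerator + p_obs_given_not_h * (1 - prior))

# Hypothesis H: "infidelity is common in my social circle."
belief = 0.10  # prior: you start out thinking it's rare

# Each observation: (P(obs | H), P(obs | not H))
observations = [
    (0.8, 0.3),  # a friend confides an affair
    (0.6, 0.4),  # a quick Google search turns up high survey numbers
]

for p_given_h, p_given_not_h in observations:
    belief = bayes_update(belief, p_given_h, p_given_not_h)
    print(f"revised estimate: {belief:.3f}")
```

Each pass through the loop plays the role of "collecting more information": the posterior from one observation becomes the prior for the next.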

The bad news

Having said all that, I have major problems with this book and what it claims to explain. In fact, I’m angry.

It would be reasonable for Silver to tell us about his baseball models, which he does. It would be reasonable for him to tell us about political polling and how he uses weights on different polls to combine them to get a better overall poll. He does this as well. He also interviews a bunch of people who model in other fields, like meteorology and earthquake prediction, which is fine, albeit superficial.
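For a sense of what combining weighted polls can look like, here is a toy sketch in that spirit; the shares and weights below are hypothetical, and this is a plain weighted average, not Silver's actual methodology.

```python
# A toy weighted poll average (not Silver's actual model): each poll's
# reported share gets a weight, e.g. reflecting sample size or the
# pollster's track record. All numbers are hypothetical.

polls = [
    # (candidate_share, weight)
    (0.52, 1.0),
    (0.48, 0.5),
    (0.51, 2.0),
]

combined = sum(share * w for share, w in polls) / sum(w for _, w in polls)
print(f"combined estimate: {combined:.3f}")  # -> 0.509
```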

What is not reasonable, however, is for Silver to claim to understand how the financial crisis was a result of a few inaccurate models, and how medical research need only switch from being frequentist to being Bayesian to become more accurate.