Alice Zheng

Alice is the director of data science at GraphLab, a Seattle-based start-up that offers powerful large-scale machine learning and graph analytics tools. She loves playing with data and enabling others to play with data. She is a tool builder and an expert in machine learning algorithms. Her research spans software diagnosis, computer network security, and social network analysis. Prior to joining GraphLab, she was a researcher at Microsoft Research, Redmond. She holds Ph.D. and B.A. degrees in Computer Science, and a B.A. in Mathematics, from U.C. Berkeley.

Build better machine learning models

A beginner's guide to evaluating your machine learning models.


Everything today is being quantified, measured, and tracked — everything is generating data, and data is powerful. Businesses are using data in a variety of ways to improve customer satisfaction. For instance, data scientists are building machine learning models to generate intelligent recommendations to users so that they spend more time on a site. Analysts can use churn analysis to predict which customers are the best targets for the next promotional campaign. The possibilities are endless.

However, there are challenges in the machine learning pipeline. Typically, you build a machine learning model on top of your data. You collect more data. You build another model. But how do you know when to stop?

When is your smart model smart enough?

Evaluation is a key step when building intelligent business applications with machine learning. It is not a one-time task, but must be integrated with the whole pipeline of developing and productionizing machine learning-enabled applications.
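To make this concrete, here is a minimal sketch of the most basic evaluation step: hold out part of the data, train on the rest, and score the model on the part it never saw. This is my own illustration rather than anything from the report, and it assumes scikit-learn and one of its bundled toy datasets; any classifier and metric could stand in.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A small bundled dataset, used here purely for illustration.
X, y = load_breast_cancer(return_X_y=True)

# Hold out 20% of the data; the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# The score on the held-out data is the evidence for "is the model smart enough yet?"
print("hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

In a production pipeline, the same measurement is repeated every time the data or the model changes, which is what "not a one-time task" means in practice.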

In a new free O’Reilly report, Evaluating Machine Learning Models: A Beginner’s Guide to Key Concepts and Pitfalls, we cut through the technical jargon of machine learning and explain, in simple language, the process of evaluating machine learning models. Read more…

We make the software, you make the robots

An interview with Andreas Mueller, on scikit-learn and usable machine learning software.

Get notified when our free report “Evaluating Machine Learning Models: A beginner’s guide to key concepts and pitfalls,” by Alice Zheng, is available for download.


Superpixels example from Andreas Mueller’s thesis paper (PDF), used with permission.

A few weeks ago, I had the pleasure of sitting down (virtually, over Skype) with Andreas Mueller, core developer and maintainer of the popular scikit-learn machine learning library. We had previously bonded over our shared goals of making useful machine learning software, so I jumped at the chance to interview him.

Mueller wears many hats at work. He is one of the key maintainers of the popular Python machine learning library scikit-learn. Holding a doctorate in computer vision from the University of Bonn in Germany, he currently works on open science at New York University’s Center for Data Science. He speaks at conferences around the world and has a fanbase of 5,000+ followers on Twitter and about as many reputation points on Stack Overflow. In other words, this man has got mad street cred. He started out doing pure math in academia, and has now achieved software developer cult idol status. Read more…

Unpacking technical jargon in machine learning

A new report explores how to evaluate your machine learning models.

Image: mathematics concept collage, by Fir0002, via Wikimedia Commons.

Get notified when our free report “Evaluating Machine Learning Models: A beginner’s guide to key concepts and pitfalls” is available for download.

Editor’s note: This is an excerpt of “Evaluating Machine Learning Models,” by Alice Zheng.


Alice Zheng will be part of the Data Science Summit and Dato Conference, a non-profit event jointly organized by Intel, Comcast, Pandora, Dato, Cloudera, and O’Reilly Media, taking place in July in San Francisco. Visit the conference website for more information on the program. Use the discount code OREILLY20 to get 20% off either one or both days of the conference.

This report on evaluating machine learning models arose out of a sense of need. The content was first published as a series of six technical posts on the Dato Machine Learning Blog. I was the editor of the blog, and I needed something to publish for the next day. Dato builds machine learning tools that help users build intelligent data products. In our conversations with the community, we sometimes ran into confusion over terminology. For example, people would ask for cross validation as a feature, when what they really meant was hyperparameter tuning, a feature we already had. So, I thought, “Aha! I’ll just quickly explain what these concepts mean and point folks to the relevant sections in the user guide.”

I sat down to write a blog post to explain cross validation, hold-out data sets, and hyperparameter tuning. After the first two paragraphs, however, I realized that it would take a lot more than a single blog post. The three terms sit at different depths in the concept hierarchy of machine learning model evaluation. Cross validation and hold-out validation are ways of chopping up a data set in order to measure the model’s performance on “unseen” data. Hyperparameter tuning, on the other hand, is a more “meta” process of model selection. But why does the model need “unseen” data, and what’s meta about hyperparameters? In order to explain all of that, I needed to start from the basics. First, I needed to explain the high-level concepts and how they fit together. Only then could I dive into each one in detail. Read more…
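As a rough illustration of how the three terms sit at different depths, here is a short sketch using scikit-learn (my choice of library, not something the report prescribes). Hold-out validation and cross validation both estimate performance on “unseen” data; hyperparameter tuning sits one level up and uses those estimates to choose among model configurations.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, cross_val_score, train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Hold-out validation: one fixed split into training data and "unseen" test data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Cross validation: the same idea, repeated over five different splits of the training data.
cv_scores = cross_val_score(SVC(C=1.0), X_train, y_train, cv=5)
print("5-fold cross validation accuracy:", cv_scores.mean())

# Hyperparameter tuning: the "meta" step, which runs cross validation once per
# candidate value of C and keeps the setting that scores best.
search = GridSearchCV(SVC(), param_grid={"C": [0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)
print("best C:", search.best_params_["C"])

# The held-out test set is touched only once, at the end, to report final performance.
print("hold-out accuracy:", search.score(X_test, y_test))

The nesting is the point: tuning wraps cross validation, and cross validation wraps many small hold-out splits, which is why the terms are so easy to conflate.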

Embracing failure and learning from the Imposter Syndrome

What you miss with a "get it right the first time" mentality

Image: Masks, by Brian Snelson, via Flickr.

Download our updated Women in Data report, which features four new profiles of women across the European Union. You can also pick up a copy at Strata + Hadoop World London, where Alice Zheng will lead a session on Deploying Machine Learning in Production.

Lately, there has been a slew of media coverage about the Imposter Syndrome. Many columnists, bloggers, and public speakers have spoken or written about their own struggles with it. And the original psychological research on the Imposter Syndrome found that two out of every five successful people consider themselves frauds.

I’m certainly no stranger to the sinking feeling of being out of place. During college and graduate school, it often seemed like everyone else around me was sailing through to the finish line, while I alone lumbered with the weight of programming projects and mathematical proofs. This led to an ongoing self-debate about my choice of a major and profession. One day, I noticed myself reading the same sentence over and over again in a textbook; my eyes were looking at the text, but my mind was saying, “Why aren’t you getting this yet? It’s so simple. Everybody else gets it. What’s wrong with you?”

When I look back upon those years, I have two thoughts: 1. That was hard. 2. What a waste of perfectly good brain cells! I could have done so many cool things if I had not spent all that time doubting myself.

But one can’t simply snap out of the Imposter Syndrome. It has a variety of causes, and it’s sticky. I was brought up with the idea of holding myself to a high standard, to measure my own progress against others’ achievements. Falling short of expectations is supposed to be a great motivator for action…or is it? Read more…

Striking parallels between mathematics and software engineering

Becoming more familiar with mathematics will help cross-pollinate ideas between mathematics and software engineering.

Image: Mathematics, by Tom Brown, via Flickr.

Editor’s note: Alice Zheng will be part of the team teaching Large-scale Machine Learning Day at Strata + Hadoop World NYC 2015. Visit the Strata + Hadoop World website for more information on the program.

During my first year in graduate school, I had an epiphany about mathematics that changed my whole perspective about the field. I had chosen to study machine learning, a cross-disciplinary research area that combines elements of computer science, statistics, and numerous subfields of mathematics, such as optimization and linear algebra. It was a lot to take in, and all of us first-year students were struggling to absorb the deluge of new concepts.

One night, I was sitting in the office trying to grok linear algebra. A wonderfully lucid textbook served as my guide: Introduction to Linear Algebra, written by Gilbert Strang. But I just wasn’t getting it. I was looking at various definitions — eigen decomposition, Jordan canonical forms, matrix inversions, etc. — and I thought, “Why?” Why does everything look so weird? Why is the inverse defined this way? Come to think of it, why are any of the matrix operations defined the way they are?

While staring at a hopeless wall of symbols, a flash of lightning went off in my mind. I had an insight: math is a design. Prior to that moment, I had approached mathematics as if it were universal truth: transcendent in its perfection, almost unknowable by mere mortals. But on that night, I realized that mathematics is a human-constructed tool. Math is designed, just like software programs are designed, and using many of the same design principles. These principles may not be apparent, but they are comprehensible. In that moment, mathematics went from being unknowable to reasonable. Read more…