Everything today is being quantified, measured, and tracked — everything is generating data, and data is powerful. Businesses are using data in a variety of ways to improve customer satisfaction. For instance, data scientists are building machine learning models to generate intelligent recommendations to users so that they spend more time on a site. Analysts can use churn analysis to predict which customers are the best targets for the next promotional campaign. The possibilities are endless.
However, there are challenges in the machine learning pipeline. Typically, you build a machine learning model on top of your data. You collect more data. You build another model. But how do you know when to stop?
When is your smart model smart enough?
Evaluation is a key step in building intelligent business applications with machine learning. It is not a one-time task; it must be integrated into the entire pipeline for developing and productionizing machine learning-enabled applications.
In a new, free O’Reilly report, Evaluating Machine Learning Models: A Beginner’s Guide to Key Concepts and Pitfalls, we cut through the technical jargon of machine learning and explain, in plain language, the process of evaluating machine learning models.
This report includes:
- An overview of the machine learning pipeline, and where evaluation is necessary
- An introduction to popular evaluation metrics for classification, regression, and ranking problems
- A disambiguation of popular terms such as cross-validation and hyperparameter tuning
- A brief survey of modern hyperparameter tuning techniques
- Pitfalls and challenges of A/B testing
- Reasons to use multi-armed bandits for continuous optimization
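To give a flavor of the evaluation metrics the report covers, here is a minimal sketch of two of the most common ones: accuracy for classification and RMSE for regression. This is illustrative code of my own, not taken from the report; the function names and toy data are made up for the example.

```python
import math

def accuracy(y_true, y_pred):
    """Classification: fraction of predictions that match the true labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def rmse(y_true, y_pred):
    """Regression: root-mean-squared error between predictions and targets."""
    squared_errors = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    return math.sqrt(squared_errors / len(y_true))

# Toy data for illustration: 3 of 4 labels correct, and two regression
# predictions each off by 1.0.
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
print(rmse([3.0, 5.0], [2.0, 6.0]))          # 1.0
```

In practice you would compute these on a held-out validation set rather than on the training data, which is where techniques like cross-validation come in.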
You will also learn about recent developments in hyperparameter tuning — ways of automating the machine learning process to reduce the burden of human supervision.
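One of the simplest automated tuning techniques is random search: sample hyperparameter settings at random, score each one on a validation set, and keep the best. The sketch below assumes a hypothetical objective function standing in for model training and validation; the parameter names and search space are invented for the example.

```python
import random

def random_search(objective, param_space, n_trials=20, seed=0):
    """Randomly sample hyperparameter settings; return the best one found."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        # Draw one candidate value for each hyperparameter.
        params = {name: rng.choice(values) for name, values in param_space.items()}
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical search space and objective (stands in for "train a model
# with these settings and return its validation score"; best score is 0).
space = {"learning_rate": [0.01, 0.1, 1.0], "depth": [2, 4, 8]}

def fake_validation_score(p):
    return -abs(p["learning_rate"] - 0.1) - abs(p["depth"] - 4)

best, score = random_search(fake_validation_score, space, n_trials=50)
print(best, score)
```

More sophisticated methods (for example, Bayesian approaches) use past trials to decide which settings to try next, but the loop structure is the same: propose, evaluate, keep the best.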