FEATURED STORY

How to build and run your first deep learning network

Step-by-step instructions for training your own neural network.


When I first became interested in using deep learning for computer vision, I found it hard to get started. There were only a couple of open source projects available; they had little documentation, were very experimental, and relied on a lot of tricky-to-install dependencies. A lot of new projects have appeared since, but they’re still aimed at vision researchers, so you’ll still hit many of the same obstacles if you’re approaching them from outside the field.

In this article — and the accompanying webcast — I’m going to show you how to run a pre-built network, and then take you through the steps of training your own. I’ve listed the steps I followed to set up everything toward the end of the article, but because the process is so involved, I recommend you download a Vagrant virtual machine that I’ve pre-loaded with everything you need. This VM lets us skip over all the installation headaches and focus on building and running the neural networks. Read more…
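The full toolchain is covered in the article and the VM, but it helps to see what “running” a network amounts to at its core. Here is a minimal Python sketch of a forward pass through a tiny two-layer network; the weights are random stand-ins for what a real pre-built model would load from disk:

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # Random stand-in weights; a real pre-built network loads these from disk.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
    W2, b2 = rng.normal(size=(32, 10)), np.zeros(10)

    def predict(x):
        # Each layer is an affine map followed by a nonlinearity.
        h = relu(x @ W1 + b1)
        return softmax(h @ W2 + b2)

    x = rng.normal(size=64)  # stand-in input (e.g., flattened image features)
    print("predicted class:", predict(x).argmax())

A real vision network adds convolutional layers, far more parameters, and trained weights, but the computation follows the same pattern.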


Materials that make up our world

Digital manufacturing is the future — reusable, composable, and rapid from top to bottom.


Editor’s note: This is part two of a two-part series reflecting on the O’Reilly Solid Conference from the perspective of a data scientist. Normally we wouldn’t publish takeaways from an event held nearly two months ago, but these insights were so good we thought they needed to be shared.

In mid-May, I was at Solid, O’Reilly’s new conference on the convergence of hardware and software. In part one of this series, I talked about the falling cost of bringing a hardware start-up to market, the trends driving that drop, and how that drop relates to the role of a data scientist.

I mentioned two phrases that I’ve heard Jon Bruner say, in one form or another. The first, “merging of hardware and software,” was covered in the last piece. The other is the “exchange between the virtual and actual.” I also mentioned that I think the material future of physical stuff is up for grabs. What does that mean, and how do those two sentiments tie together? Read more…


A good nudge trumps a good prediction

Identifying the right evaluation methods is essential to successful machine learning.

Editor’s note: This is part of our investigation into analytic models and best practices for their selection, deployment, and evaluation.

We all know that a working predictive model is a powerful business weapon. By translating data into insights and subsequent actions, businesses can offer a better customer experience, retain more customers, and increase revenue. This is why companies are now allocating more resources to developing, or purchasing, machine learning solutions.

While expectations for predictive analytics are sky-high, implementing machine learning in a business is not necessarily a smooth path. Interestingly, the problem often is not the quality of the data or of the algorithms. I have worked with a number of companies that collected a lot of data, ensured its quality, and used research-proven algorithms implemented by well-educated data scientists, and yet they failed to see beneficial outcomes. What went wrong? Doesn’t good data plus a good algorithm equal beneficial insights? Read more…
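To make the question concrete, here is a hypothetical Python illustration (not one of the companies mentioned above) of how a reasonable-looking metric can hide a useless model. With a rare event like churn, raw accuracy rewards a model that never flags anyone:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical labels: only 5% of 1,000 customers actually churn.
    y_true = (rng.random(1000) < 0.05).astype(int)

    # A "model" that always predicts no churn looks impressive on accuracy...
    y_pred = np.zeros_like(y_true)
    accuracy = (y_pred == y_true).mean()

    # ...but surfaces none of the at-risk customers a retention nudge needs.
    recall = y_pred[y_true == 1].mean()

    print(f"accuracy: {accuracy:.1%}")      # about 95%
    print(f"churner recall: {recall:.1%}")  # 0%

Picking an evaluation metric that mirrors the action the business will take (here, recall on the customers you could actually nudge) is often the missing step.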


Embracing hardware data

Looking at the collision of hardware and software through the eyes of a data scientist.

Raspberry Pi Board. Via Wikimedia Commons

Many aspects of a hardware device can be freely prototyped. A Raspberry Pi (such as the one seen above) can function as a temporary bridge before custom ARM circuit boards are put into place.

Editor’s note: This is part one of a two-part series reflecting on the O’Reilly Solid Conference from the perspective of a data scientist. Normally we wouldn’t publish takeaways from an event held nearly two months ago, but these insights were so good we thought they needed to be shared.

In mid-May, I was at Solid, O’Reilly’s new conference on the convergence of hardware and software. I went in as something close to a blank slate on the subject, as someone with (I thought) not very strong opinions about hardware in general.

The word on the grapevine in my community (data scientists who deal primarily with web data) was that hardware data was the next big challenge, the place the “alpha geeks” were heading. There are still plenty of big problems left to solve on the web, but I was curious enough to check out Solid and see whether I was missing out on the future. I don’t have much experience with hardware beyond wiring up LEDs as a kid, making birdhouses in shop class in high school, and mucking about with an Arduino in college. Read more…


There are many use cases for graph databases and analytics

Business users are becoming more comfortable with graph analytics.

The rise of sensors and connected devices will lead to applications that draw on network/graph data management and analytics. As the number of devices surpasses the number of people (Cisco estimates 50 billion connected devices by 2020), one can imagine applications that depend on data stored in graphs with many more nodes and edges than the ones currently maintained by social media companies.

This means that researchers and companies will need to produce real-time tools and techniques that scale to much larger graphs (measured in terms of nodes and edges). I previously listed tools for tapping into graph data, and I continue to track improvements in accessibility, scalability, and performance. For example, at the just-concluded Spark Summit, it was apparent that GraphX remains a high-priority project within the Spark ecosystem.
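GraphX itself runs on Spark, but the flavor of these analytics is easy to try at small scale. Here is a sketch using Python’s networkx library on a toy device graph; the node names are invented for illustration:

    import networkx as nx

    # Toy device graph: nodes are devices, edges are observed connections.
    G = nx.Graph()
    G.add_edges_from([
        ("gateway", "sensor-a"), ("gateway", "sensor-b"),
        ("sensor-a", "sensor-c"), ("sensor-b", "sensor-c"),
        ("sensor-c", "actuator-1"),
    ])

    # Two staple graph analytics: degree (connectivity) and PageRank (influence).
    print(dict(G.degree()))
    print(nx.pagerank(G))

The challenge described above is running exactly these kinds of queries when the graph has billions of nodes and edges rather than five.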

Read more…


New approaches to anomaly detection

A practical example of how anomaly detection makes complex data problems easier to solve.


As new tools for distributed storage and analysis of big data become more stable and widely known, there is a growing need to establish best practices for analytics at this scale. One area of widespread interest that crosses many verticals is anomaly detection.

At its best, anomaly detection is used to find unusual, rarely occurring events or data for which little is known in advance. Examples include changes in sensor data reported for a variety of parameters, suspicious behavior on secure websites, or unexpected changes in web traffic. In some cases, the data patterns being examined are simple and regular and, thus, fairly easy to model.

Anomaly detection approaches start with some essential but sometimes overlooked ideas about anomalies:

  • Anomalies are defined not by their own characteristics but in contrast to what is normal.

Thus …

  • Before you can spot an anomaly, you first have to figure out what “normal” actually is.

This need to first discover what is considered “normal” may seem obvious, but how to do it is not always obvious, especially in systems with complicated patterns of behavior. The best results come from using statistical methods to build an adaptive model of events in the system you are analyzing as a first step toward discovering anomalous behavior. Read more…
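As a deliberately simple stand-in for the adaptive statistical models just described, the Python sketch below fits a Gaussian to historical sensor readings to define “normal,” then flags new readings that fall implausibly far outside it (the data and threshold are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(2)

    # Step 1: learn what "normal" looks like from history.
    history = rng.normal(loc=50.0, scale=5.0, size=10_000)
    mu, sigma = history.mean(), history.std()

    # Step 2: flag readings that the normal model makes implausible.
    def is_anomaly(reading, threshold=4.0):
        return abs(reading - mu) / sigma > threshold

    for reading in [52.1, 48.7, 95.0]:
        print(reading, "->", "ANOMALY" if is_anomaly(reading) else "ok")

Real systems replace the fixed Gaussian with an adaptive model that tracks the system’s changing behavior, but the two-step structure (model normal, then measure deviation from it) stays the same.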
