Movement data is going to transform everything

The O'Reilly Radar Podcast: Rajiv Maheswaran on the science of moving dots, and Claudia Perlich on big data in advertising.


Subscribe to the O’Reilly Radar Podcast to track the technologies and people that will shape our world in the years to come.

In this week’s Radar Podcast episode, O’Reilly’s Mac Slocum chats with Rajiv Maheswaran, CEO of Second Spectrum. Maheswaran talks about machine learning applications in sports, the importance of context in measuring stats, and the future of real-time, in-game analytics.

Here are some highlights from their chat:

There’s a lot of parts of the game of basketball — pick and rolls, dribble hand-offs — that coaches really care about, about analyzing how it works on offense, how to guard them. Before big data and machine learning, people basically watched the games and marked them. It turns out that people are pretty bad at marking them accurately, and they also miss a ton of stuff. Right now, machine learning tells coaches, ‘This is how many pick and rolls these two players have had over the course of the season, how often they do all the different variations, what they’re good at, what they’re bad at.’ Coaches can really find tendencies that can help them play offense, play defense, far more efficiently, based off of machine learning.

What we’re doing is having the machine match human intuition. If I’m watching a game, I know that the shot is harder if I’m farther away, if I have multiple defenders, if they’re close, if they’re closing in on me, if I’m dribbling, the type of shot I’m taking. As a human, I watch this and I have an intuition about it. Now, by giving all that data to the machine, it can make a predictor that actually matches our intuition, and goes beyond it because it can put a number onto what our intuition tells us.
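To make the idea concrete, here is a minimal, hypothetical sketch of a shot-quality predictor trained on the kinds of features Maheswaran mentions (shot distance, defender distance, dribbles, shot type). The data, feature names, and model are illustrative assumptions, not Second Spectrum's actual system.

```python
# A hypothetical shot-quality model trained on tracking-style features.
# Data, feature names, and coefficients are synthetic illustrations only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic features: shot distance (ft), nearest-defender distance (ft),
# dribbles before the shot, and a catch-and-shoot flag.
X = np.column_stack([
    rng.uniform(0, 30, n),
    rng.uniform(0, 12, n),
    rng.integers(0, 10, n),
    rng.integers(0, 2, n),
])

# Synthetic labels encoding the intuition in the quote: closer shots and
# more space from the defender make a made basket more likely.
logit = 1.5 - 0.12 * X[:, 0] + 0.15 * X[:, 1] - 0.05 * X[:, 2] + 0.3 * X[:, 3]
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The predicted probability is the "number" the quote says a machine can
# put on what our intuition already tells us about shot difficulty.
print("Estimated make probability:", model.predict_proba(X_test[:1])[0, 1])
```

The predicted make probability is the kind of number that quantifies, and can go beyond, a human observer's read of a possession.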

Read more…

Resolving transactional access and analytic performance trade-offs

The O’Reilly Data Show podcast: Todd Lipcon on hybrid and specialized tools in distributed systems.

Subscribe to the O’Reilly Data Show Podcast to explore the opportunities and techniques driving big data and data science.

In recent months, I’ve been hearing about hybrid systems designed to handle different data management needs. At Strata + Hadoop World NYC last week, Cloudera’s Todd Lipcon unveiled an open source storage layer — Kudu — that’s good at both table scans (analytics) and random access (updates and inserts).

While specialized systems will continue to serve companies, there will be situations where the complexity of maintaining multiple systems — to eke out extra performance — will be harder to justify.

During the latest episode of the O’Reilly Data Show Podcast, I sat down with Lipcon to discuss his new project a few weeks before it was released. Here are a few snippets from our conversation:

HDFS and HBase

[Hadoop is] more like a file store. It allows you to upload files onto an arbitrarily sized cluster — 20-plus petabytes in a single cluster. The thing is, you can upload the files but you can’t edit them in place. To make any change, you have to basically put in a new file. What HBase does, in distinction, is that it has more of a tabular data model, where you can update and insert individual row-by-row data, and then randomly access that data [in] milliseconds. The distinction here is that HDFS is pretty good for large scans where you’re putting in a large data set, maybe doing a full parse over the data set to train a machine learning model or compute an aggregate. If any of that data changes on a frequent basis or if you want to stream the data in or randomly access individual customer records, you’re kind of out of luck on HDFS. Read more…
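As a rough illustration of the access patterns Lipcon contrasts, the hypothetical sketch below writes a whole file to HDFS and then updates and reads a single row in HBase. It assumes the third-party hdfs (WebHDFS) and happybase (Thrift) Python clients and a reachable cluster; hostnames, ports, paths, and table names are made up.

```python
# Hypothetical contrast of the two access patterns described above.
# Assumes the third-party `hdfs` (WebHDFS) and `happybase` (HBase Thrift)
# Python packages; hostnames, ports, paths, and table names are made up.
from hdfs import InsecureClient
import happybase

# HDFS: good for writing whole files and scanning them later; there is no
# in-place edit -- changing data means writing a new (or appended) file.
hdfs_client = InsecureClient('http://namenode:50070', user='analyst')
hdfs_client.write('/data/events/2015-10-01.csv',
                  data='customer_id,amount\n42,19.99\n',
                  overwrite=True)

# HBase: tabular model with row-level inserts/updates and fast random reads.
conn = happybase.Connection('hbase-thrift-host')
table = conn.table('customers')                 # assumes a 'cf' column family
table.put(b'row-42', {b'cf:amount': b'19.99'})  # update a single row in place
print(table.row(b'row-42'))                     # random access to one record
```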

Specialized and hybrid data management and processing engines

A new crop of interesting solutions for the complexity of operating multiple systems in a distributed computing setting.


The 2004 holiday shopping season marked the start of Amazon’s investigation into alternative database technologies that led to the creation of Dynamo — a key-value storage system that went on to inspire several NoSQL projects.

A new group of startups began shifting away from the general-purpose systems favored by companies just a few years earlier. In recent years, we’ve seen a diverse set of DBMS technologies that specialize in handling particular workloads and data models such as OLTP, OLAP, search, RDF, XML, scientific applications, etc. The success and popularity of such systems reinforced the belief that in order to scale and “go fast,” specialized systems are preferable.

In distributed computing, the complexity of maintaining and operating multiple specialized systems has recently led to systems that bridge multiple workloads and data models. Aside from multi-model databases, a growing number of storage and compute engines are adept at handling different workloads and problems. At this week’s Strata + Hadoop World conference in NYC, I had a chance to interact with the creators of some of these new solutions.

OLTP (transactions) and OLAP (analytics)

One of the key announcements at Strata + Hadoop World this week was Project Kudu — an open source storage engine that’s good at both table scans (analytics) and random access (updates and inserts). Its creators are quick to point out that they aren’t out to beat specialized OLTP and OLAP systems. Rather, they’re shooting to build a system that’s “70-80% of the way there on both axes.” The project is very young and lacks enterprise features, but judging from the reaction at the conference, it’s something the big data community will be watching. Leading technology research firms have created a category for systems with related capabilities: HTAP (Gartner) and Trans-analytics (Forrester).
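For a feel of how a hybrid engine like Kudu exposes both row-level writes and scans through one API, here is a hedged sketch following the kudu-python client’s documented usage (the client arrived after this announcement); the master address, table name, and columns are illustrative assumptions, not details from the post.

```python
# Hedged sketch of a single engine handling both row-level writes and scans.
# Follows the kudu-python client's documented usage; the master address,
# table name, and columns here are illustrative assumptions.
import kudu
from kudu.client import Partitioning

client = kudu.connect(host='kudu-master', port=7051)

# Define a simple schema and hash partitioning, then create the table.
builder = kudu.schema_builder()
builder.add_column('id').type(kudu.int64).nullable(False).primary_key()
builder.add_column('amount_cents', type_=kudu.int64)
schema = builder.build()
partitioning = Partitioning().add_hash_partitions(column_names=['id'],
                                                  num_buckets=3)
client.create_table('purchases', schema, partitioning)

# Random access: insert and update individual rows through a write session.
table = client.table('purchases')
session = client.new_session()
session.apply(table.new_insert({'id': 42, 'amount_cents': 1999}))
session.apply(table.new_update({'id': 42, 'amount_cents': 2499}))
session.flush()

# Analytics: scan the same table for aggregate-style reads.
scanner = table.scanner()
print(scanner.open().read_all_tuples())
```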

Read more…

Accelerating real-time analytics with Spark

Integration of the data supply chain is key to a reliable and fast big data analytics deployment.


Watch our free webcast “Accelerating Advanced Analytics with Spark” to learn about the architecture, applications, and best practices of Apache Spark.

Apache Hadoop is a mature development framework that, coupled with its large ecosystem and with support and contributions from key players such as Cloudera, Hortonworks, and Yahoo, provides organizations with many tools to manage data of varying sizes.

In the past, Hadoop’s batch-oriented MapReduce model was sufficient to meet the processing needs of many organizations. However, demand for faster data processing has grown, driven by recent developments in streaming technologies, the Internet of Things (IoT), and real-time analytics, to name just a few. These demands require new processing models. One significant technology being used to meet them, and one gaining considerable interest and widespread support, is Apache Spark. Spark’s speed and versatility make it a key part of today’s big-data processing stack in industries from energy to finance. Read more…
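As a small illustration of the faster processing model the post refers to, here is a minimal PySpark Streaming word count over a socket source; the host, port, and batch interval are illustrative choices, not details from the webcast.

```python
# Minimal PySpark Streaming sketch of near-real-time processing; the socket
# source (e.g., `nc -lk 9999`) and 5-second batch interval are illustrative.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="StreamingWordCount")
ssc = StreamingContext(sc, batchDuration=5)

# Count words arriving on the stream in each micro-batch.
lines = ssc.socketTextStream("localhost", 9999)
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()
```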

On leadership

Dinner conversation turns into a career retrospective. Food for thought for leaders and leaders-to-be.

Toss Bhudvanbhen co-authored this post.


Over a recent dinner, my conversation with Toss Bhudvanbhen meandered into a discussion of how much our jobs had changed since we entered the workforce. We started during the dot-com era. Technology was a relatively young field then (frankly, it still is), so there wasn’t a well-trodden career path. We just went with the flow.

Over time our titles changed from “software developer,” to “senior developer,” to “application architect,” and so on, until one day we realized that we were writing less code but sending more e-mails. Attending fewer code reviews but more meetings. Less worried about how to implement a solution, but more concerned with defining the problem and why it needed to be solved. We had somehow taken on leadership roles.

We’ve stuck with it. Toss now works as a principal consultant at Pariveda Solutions and my consulting work focuses on strategic matters around data and technology.

The thing is, we were never formally trained in management. We just learned along the way. What helped was that we’d worked with some amazing leaders, people who set great examples for us and recognized our ability to understand the bigger picture.

Read more…

Translating data into knowledge

Best practices for data preparation — what you need to know before data analysis can begin.

Download “Data Preparation in the Big Data Era,” a new free report to help you manage the challenges of data cleaning and preparation.

Data is growing at an exponential rate worldwide, with huge business opportunities and challenges for every industry. In 2016, global Internet traffic will reach 90 exabytes per month, according to a recent Cisco report. The ability to manage and analyze an unprecedented amount of data will be the key to success for every industry.

To exploit the benefits of a big data strategy, a key question is how to translate all of that data into useful knowledge. To meet this challenge, a company first needs to have a clear picture of their strategic knowledge assets, such as their area of expertise, core competencies, and intellectual property.

Having a clear picture of the business model and the relationships with distributors, suppliers, and customers is extremely useful in order to design a tactical and strategic decision-making process. The true potential value of big data is only gained when placed in a business context, where data analysis drives better decisions — otherwise, it’s just data.

In a new O’Reilly report, Data Preparation in the Big Data Era, we provide a step-by-step guide to managing the challenges of data cleaning and preparation — critical steps before effective data analysis can begin. We explore the common problems of data preparation and the different steps involved, including data cleaning, combination, and transformation. You’ll also learn about new products that deal with the problem of data variety at scale, including Tamr’s solution, which curates data at scale using a combination of machine learning and expert feedback. Read more…
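To ground the cleaning, combination, and transformation steps the report walks through, here is an illustrative pandas sketch; the file names, columns, and final summary are hypothetical, not taken from the report.

```python
# Illustrative pandas walkthrough of cleaning, combination, and transformation.
# File names, columns, and the final summary are hypothetical.
import pandas as pd

# Cleaning: parse types, drop duplicates, and handle missing values.
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
orders = orders.drop_duplicates(subset="order_id")
orders["amount"] = pd.to_numeric(orders["amount"], errors="coerce")
orders = orders.dropna(subset=["customer_id", "amount"])

# Combination: join in customer attributes from a second source.
customers = pd.read_csv("customers.csv")
combined = orders.merge(customers, on="customer_id", how="left")

# Transformation: reshape into an analysis-ready monthly summary by region.
monthly = (combined
           .assign(month=combined["order_date"].dt.to_period("M"))
           .groupby(["month", "region"], as_index=False)["amount"]
           .sum())
print(monthly.head())
```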