"data scientists" entries

Building enterprise data applications with open source components

The O’Reilly Data Show podcast: Dean Wampler on bounded and unbounded data processing and analytics.

Subscribe to the O’Reilly Data Show Podcast to explore the opportunities and techniques driving big data and data science.


I first found myself having to learn Scala when I started using Spark (version 0.5). Prior to Spark, I’d peruse books on Scala but just never found an excuse to delve into it. In the early days of Spark, Scala was a necessity — I quickly came to appreciate it and have continued to use it enthusiastically.

For this Data Show Podcast, I spoke with O’Reilly author and Typesafe’s resident big data architect Dean Wampler about Scala and other programming languages, the big data ecosystem, and his recent interest in real-time applications. Dean has years of experience helping companies with large software projects, and over the last several years, he’s focused primarily on helping enterprises design and build big data applications.

Here are a few snippets from our conversation:

Apache Mesos & the big data ecosystem

It’s a very nice capability [of Spark] that you can actually run it on a laptop when you’re developing or working with smaller data sets. … But, of course, the real interesting part is to run on a cluster. You need some cluster infrastructure and, fortunately, it works very nicely with YARN. It works very nicely on the Hadoop ecosystem. … The nice thing about Mesos over YARN is that it’s a much more flexible, capable resource manager. It basically treats your cluster as one giant machine of resources and gives you that illusion, ignoring things like network latencies and stuff. You’re just working with a giant machine and it allocates resources to your jobs, multiple users, all that stuff, but because of its greater flexibility, it can not only run things like Spark jobs, it can run services like HDFS or Cassandra or Kafka or any of these tools. … What I saw was there was a situation here where we had maybe a successor to YARN. It’s obviously not as mature an ecosystem as the Hadoop ecosystem but not everybody needs that maturity. Some people would rather have the flexibility of Mesos or of solving more focused problems.



Major players and important partnerships in the big data market

A fully data-driven market report maps and analyzes the intersections between companies.


Download our new free report “Mapping Big Data: A Data Driven Market Report” for insights into the shape and structure of the big data market.

Who are the major players in the big data market? What are the sectors that make up the market and how do they relate? Which among the thousands of partnerships are most important?

These are just a handful of questions we explore in depth in the new O’Reilly report now available for free download: Mapping Big Data: A Data Driven Market Report. For this new report, San Francisco-based startup Relato mapped the intersection of companies throughout the data ecosystem — curating a network with tens of thousands of nodes and edges representing companies and partnerships in the big data space.

Relato created the network by extracting data from company home pages on the Web, then analyzed it using social network analysis; market experts interpreted the results to yield the insights presented in the report. The result is a preview of the future of market reports.


Improving corporate planning through insight generation

Data storage and management providers are becoming key contributors for insight as a service.

Contrary to what many believe, insights are difficult to identify and effectively apply. As the difficulty of insight generation becomes apparent, we are starting to see companies that offer insight generation as a service.

Data storage, management, and analytics are maturing into commoditized services, and the companies that provide these services are well-positioned to provide insight on the basis not just of data, but of data access and other metadata patterns.

Companies like DataHero and Host Analytics [full disclosure: Host Analytics is one of my portfolio companies] are paving the way in the insight-as-a-service space. Host Analytics’ initial product offering was a cloud-based Enterprise Performance Management (EPM) Suite, but far more important is what they are now enabling for the enterprise: they have moved from being an EPM company to being an insight generation company. In this post, I will discuss a few of the trends that have enabled insight as a service (IaaS) and discuss the general case of using a software-as-a-service (SaaS) EPM solution to corral data and deliver insight as a service as the next level of product.

Insight generation is the identification of novel, interesting, plausible and understandable relations among elements of a data set that a) lead to the formation of an action plan and b) result in an improvement as measured by a set of KPIs. The evaluation of the set of identified relations to establish an insight, and the creation of an action plan associated with a particular insight or insights, needs to be done within a particular context and necessitates the use of domain knowledge.


2015 Data Science Salary Survey

Revealing patterns in tools, tasks, and compensation through clustering and linear models.

Download the free “2015 Data Science Salary Survey” report to learn about tools, trends, and what pays (and what doesn’t) for data professionals.

Data scientists are constantly looking outward, tapping into and extracting information from all manner of data in ways hardly imaginable not long ago. Much of the change is technological — data collection has multiplied, as have our means of processing it — but an important cultural shift has played a part, too, evidenced by the desire of organizations to become “data-driven” and the wide availability of public APIs.

But how much do we look inward, at ourselves? The variety of data roles, both in subject and method, means that even those of us who have a strong grasp of what it means to be a data scientist in a particular domain or sub-field may not have a complete view of the data space as a whole. Just as data we process and analyze for our organizations can be used to decide business actions, data about data scientists can help inform our career choices.

That’s where we come in. O’Reilly Media has been conducting an annual survey for data professionals, asking questions primarily about tools, tasks, and salary — and we are now releasing the third installment of the associated report, the 2015 Data Science Salary Survey. The 2015 edition features a completely new graphic design of the report and our findings. In addition to estimating salary differences based on demographics and tool usage, we have given a more detailed look at tasks — how data professionals spend their workdays — and titles.


The security infusion

Building access policies into data stores.

Hadoop jobs reflect the same security demands as other programming tasks. Corporate and regulatory requirements create complex rules concerning who has access to different fields in data sets, sensitive fields must be protected from internal users as well as external threats, and multiple applications run on the same data and must treat different users with different access rights. The modern world of virtualization and containers adds security at the software level, but tears away the hardware protection formerly offered by network segments, firewalls, and DMZs.

Furthermore, security involves more than saying yes or no to a user running a Hadoop job. There are rules for archiving or backing up data on the one hand, and expiring or deleting it on the other. Audit logs are a must, both to track down possible breaches and to conform to regulation.

Best practices for managing data in these complex, sensitive environments implement the well-known principle of security by design. According to this principle, you can’t design a database or application in a totally open manner and then layer security on top if you expect it to be robust. Instead, security must be infused throughout the system and built in from the start. Defense in depth is a related principle that urges the use of many layers of security, so that an intruder breaking through one layer may be frustrated by the next.
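To make the idea concrete, here is a minimal sketch of what "infusing" field-level access control into the data access layer might look like, with auditing built in rather than bolted on. The role names, fields, and policy table are hypothetical, invented purely for illustration:

```python
# Field-level, role-based access control sketch: every read passes through a
# policy check, and every access is audited. Roles and fields are invented.
audit_log = []  # in production this would be an append-only, tamper-evident store

# Each role may read only an explicit allow-list of fields (deny by default).
FIELD_POLICY = {
    "analyst": {"user_id", "region", "purchase_total"},
    "support": {"user_id", "email"},
}

def read_record(record, role, user):
    """Return only the fields the role may see, and audit the access."""
    allowed = FIELD_POLICY.get(role, set())
    visible = {k: v for k, v in record.items() if k in allowed}
    # Logged both to investigate possible breaches and to satisfy regulation.
    audit_log.append({"user": user, "role": role, "fields": sorted(visible)})
    return visible

record = {"user_id": 42, "email": "a@example.com",
          "region": "EU", "purchase_total": 99.5}
print(read_record(record, "analyst", "alice"))   # email is filtered out
print(read_record(record, "unknown_role", "bob"))  # unknown role sees nothing
```

Because the filter and the audit entry live in the same code path, an application cannot read a field without leaving a trace — the "built in from the start" property the principle calls for.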


How advanced analytics are impacting sports

The expanding role of data analytics in a trillion-dollar industry.

Download our new free report “Data Analytics in Sports: How Playing with Data Transforms the Game,” by Janine Barlow, to learn how advanced predictive analytics are impacting the world of sports.

Sports are the perfect playing field on which data scientists can play their game — there are finite structures and distinct goals. Many of the components in sports break down numerically — e.g., number of players; length of periods; and, taking a broader view, how much each player is paid.

This is why sports and data have gone hand-in-hand since the very beginning of the industry. What, after all, is baseball without baseball cards?

In a new O’Reilly report, Data Analytics in Sports: How Playing with Data Transforms the Game, we explore the role of data analytics and new technology in the sports industry. Through a series of interviews with experts at the intersection of data and sports, we break down some of the industry’s most prominent advances in the use of data analytics and explain what these advances mean for players, executives, and fans.



Old-school DRM and new-school analytics

Piracy isn’t the threat; it’s centuries old. Music Science is the game changer.

Download our new free report “Music Science: How Data and Digital Content are Changing Music,” by Alistair Croll, to learn more about music, data, and music science.

In researching how data is changing the music industry, I came across dozens of entertaining anecdotes. One of the recurring themes was music piracy. As I wrote in my previous post on music science, industry incumbents think of piracy as a relatively new phenomenon — as one executive told me, “vinyl was great DRM.”

But the fight between protecting and copying content has gone on for a long time, and every new medium for music distribution has left someone feeling robbed. One of the first known cases of copy protection — and illegal copying — involved Mozart himself.

As a composer, Mozart saw his music spread far and wide. But he was also a performer and wanted to be able to command a premium for playing in front of audiences. One way he ensured continued demand was through “flourishes,” or small additions to songs, which weren’t recorded in written music. While Mozart’s flourishes are lost to history, researchers have attempted to understand how his music might once have been played. This video shows classical pianist Christina Kobb demonstrating a 19th-century technique.



Apache Drill: Tracking its history as an open source community

A strong, open user community needs to be fostered to reveal its potential.


A strong user community is essential to releasing the full potential of an open source project, and this influence is particularly important now for the newly developed Apache Drill project. Drill is a highly scalable SQL query engine for interactive access to a wide range of big data sources and formats. Some of the ways users have an impact are an expected part of the development process: by trying the software and reporting their experiences and use cases, users in the Drill community provide valuable feedback to developers as well as raise awareness with a larger audience of what this big data tool has to offer.

This advantage was especially important with early versions of the software; users have helped the development of Drill since its early days by reporting bugs and praising features that they like. And now, as Drill is reaching maturity and refinement, users likely will also provide additional innovations: experimenting with Drill in their own projects, they may find new ways to use it that had not occurred to the developers.

Drill’s flexibility and extensibility lend themselves to innovation, but there’s also a natural tendency for this type of change because the big data and Hadoop landscapes are themselves evolving quickly. In the case of Drill, we’re seeing the “unexpectedness benefit” of openness: the community gets out ahead of the leadership in use cases and technological change.

The first big Apache Drill design meeting in September 2012 in San Jose set the tone of openness and inclusion. This was an open meeting, organized by Drill co-founder Tomer Shiran and Drill mentor Ted Dunning, and sponsored by MapR Technologies through the Bay Area Apache Drill User Group. More than 60 people attended in person, and Webex connected a larger, international audience. I recall that in addition to speaker-led presentations and discussion, long strips of paper were mounted around the room for participants to write on during breaks in order to provide ideas or offer specific ways they might want to be involved. Practical steps like this surfaced good ideas immediately, and signaled openness for future ones.


Build better machine learning models

A beginner's guide to evaluating your machine learning models.


Everything today is being quantified, measured, and tracked — everything is generating data, and data is powerful. Businesses are using data in a variety of ways to improve customer satisfaction. For instance, data scientists are building machine learning models to generate intelligent recommendations to users so that they spend more time on a site. Analysts can use churn analysis to predict which customers are the best targets for the next promotional campaign. The possibilities are endless.

However, there are challenges in the machine learning pipeline. Typically, you build a machine learning model on top of your data. You collect more data. You build another model. But how do you know when to stop?

When is your smart model smart enough?

Evaluation is a key step when building intelligent business applications with machine learning. It is not a one-time task, but must be integrated with the whole pipeline of developing and productionizing machine learning-enabled applications.
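As a small illustration of what this key step looks like in practice, here is a sketch of holdout evaluation: score the model on data it never saw during training. The "model" and data below are stand-ins invented for this example — a trivial churn classifier based on a usage threshold:

```python
# Holdout evaluation sketch: fit on a training split, score on a held-out
# test split the model never saw. All data and the "model" are toy examples.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def train_test_split(data, labels, test_fraction=0.25):
    """Reserve the tail of the data set as a holdout for evaluation."""
    cut = int(len(data) * (1 - test_fraction))
    return (data[:cut], labels[:cut]), (data[cut:], labels[cut:])

# Toy data: predict churn (1) when a customer's usage drops low.
usage = [10, 9, 8, 2, 1, 7, 1, 2]
churned = [0, 0, 0, 1, 1, 0, 1, 1]
(train_x, train_y), (test_x, test_y) = train_test_split(usage, churned)

# "Training": choose a threshold using the training split only.
threshold = min(x for x, y in zip(train_x, train_y) if y == 0)
predictions = [1 if x < threshold else 0 for x in test_x]

print(f"holdout accuracy: {accuracy(test_y, predictions):.2f}")
```

The essential discipline is that the threshold is chosen from the training split alone; scoring on the training data would overstate how the model performs on customers it hasn’t seen, which is exactly the pitfall evaluation guards against.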

In a new free O’Reilly report, Evaluating Machine Learning Models: A Beginner’s Guide to Key Concepts and Pitfalls, we cut through the technical jargon of machine learning and elucidate, in simple language, the processes of evaluating machine learning models.