"data" entries

Four short links: 3 March 2014

Vanishing Money, Car Hackery, Data Literacy Course, and Cheaper CI

  1. The Programming Error That Cost Mt Gox 2609 Bitcoins — in the unforgiving world of crypto-currency, it’s easy to miscode and vanish your money.
  2. Ford Invites Open-Source Community to Tinker Away — One example: Nelson has re-tasked the motor from a Microsoft Xbox 360 game controller to create an OpenXC shift knob that vibrates to signal gear shifts in a standard-transmission Mustang. The 3D-printed prototype uses Ford’s OpenXC research platform to link devices to the car via Bluetooth and share vehicle data from the on-board diagnostics port. Nelson has tested the prototype in a Ford Mustang Shelby GT500, where the knob vibrates at the optimal time to shift. (A minimal sketch of the idea follows this list.)
  3. Making Sense of Data — Google online course on data literacy.
  4. Cost-Efficient Continuous Integration at Mozilla — CI on a big project can imply hundreds if not thousands of VMs on Amazon spinning up to handle compiles and tests. This blog post talks about Mozilla’s efforts to reduce its CI-induced spend without reducing the effectiveness of its CI practices.
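
As item 2 hints, OpenXC streams vehicle data as simple JSON name/value messages that an app can react to. Below is a minimal Python sketch of the shift-knob idea; the `SHIFT_RPM` threshold and `vibrate()` stub are invented for illustration, and the real prototype reads the stream over Bluetooth and tunes its shift points to the car.

```python
import json

SHIFT_RPM = 3500  # assumed shift point; the real prototype tunes this per gear

def vibrate():
    """Stand-in for pulsing the knob's haptic motor."""
    print("bzzt -- time to shift")

def handle_message(raw):
    # OpenXC streams vehicle data as JSON name/value messages,
    # e.g. {"name": "engine_speed", "value": 4200.0}
    msg = json.loads(raw)
    if msg.get("name") == "engine_speed" and msg.get("value", 0) >= SHIFT_RPM:
        vibrate()

# Simulated messages standing in for the Bluetooth/OBD-II stream:
for raw in ('{"name": "engine_speed", "value": 2100.0}',
            '{"name": "engine_speed", "value": 4200.0}'):
    handle_message(raw)
```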

Healthcare Lessons from the Data Sages at Strata

Other industries can show health care the way

This article was written with Ellen M. Martin.

Most healthcare clinicians don’t often think about donating or sharing data. Yet, after hearing Stephen Friend of Sage Bionetworks talk about involving citizens and patients in the field of genetic research at StrataRx 2012, I was curious to learn more.

McKinsey points to $300 billion in potential savings from using open data in healthcare, while a recent IBM Institute for Business Value study showed the need for corporate data collaboration.

Also, during my own research for Big Data in Healthcare: Hype and Hope, the resounding request from all the participants I interviewed was to “find more data streams to analyze.”

Read more…

Four short links: 10 February 2014

Sterling Zings, Android Swings, Data Blings, and Visualized Things.

  1. Bruce Sterling at transmediale 2014 (YouTube) — “if it works, it’s already obsolete.” Sterling does a great job of capturing the current time: spies in your Internet, lost trust with the BigCos, the impermanence of status quo, the need to create. (via BoingBoing)
  2. No-one Should Fork Android (Ars Technica) — this article is bang on. Google Mobile Services (the Play functionality) is closed source; it is what makes Android more than a bare-metal OS, and it is where Google is focusing its development. Google’s Android team treats openness like a bug and routes around it.
  3. Data Pipelines (Hakkalabs) — interesting overview of the data pipelines of Stripe, Tapad, Etsy, and Square.
  4. Visualising Salesforce Data in Minecraft — would almost make me look forward to using Salesforce. Almost.

Finding the Meaning in “Meaningful Use”

The 30,000-foot view and the nitty-gritty details of working with electronic health data

Ever wonder what the heck “meaningful use” really means? By now, you’ve probably heard it come up in discussions of healthcare data. You might even know that it specifically pertains to electronic health records (EHRs). But what is it really about, and why should you care?

If you’ve ever had to carry a large folder of paper between specialists, or fill out the same medical history form in different offices over and over—with whatever details you happen to remember off the top of your head that day—then you already have some idea of why EHRs are a desirable thing. The idea is that EHRs will lead to better care—and better research data—through more complete and accurate record-keeping, and will eventually become part of health information exchanges (HIEs) with features like trend analysis and push notifications. However, the mere installation of EHR software isn’t enough; we need not just cursory use but meaningful use of EHRs, and we need to ensure that the software being used meets certain standards of efficiency and security.

Read more…

Why is building custom recommender systems hard? Does it have to be?

Photo Courtesy of Carlos Guestrin

By Carlos Guestrin

Today, it’s shocking (and honestly exciting) how much of my daily experience is determined by a recommender system.  These systems drive amazing experiences everywhere, telling me where to eat, what to listen to, what to watch, what to read, and even who I should be friends with.  Furthermore, information overload is making recommender systems indispensable, since I can’t find what I want on the web simply using keyword search tools.  Recommenders are behind the success of industry leaders like Netflix, Google, Pandora, eHarmony, Facebook, and Amazon.  It’s no surprise companies want to integrate recommender systems with their own online experiences.  However, as I talk to team after team of smart industry engineers, it has become clear that building and managing these systems is usually a bit out of reach, especially given all the other demands on the team’s time.
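
To make concrete what these teams are trying to build, here is a minimal sketch of one classic building block, item-based collaborative filtering with cosine similarity. The toy ratings matrix is invented for illustration; a production recommender layers data pipelines, model management, and low-latency serving on top of this small core, which is where most of the engineering time goes.

```python
import numpy as np

# Toy user-by-item ratings matrix (0 = unrated), invented for illustration.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Item-item cosine similarity over the rating columns.
norms = np.linalg.norm(ratings, axis=0)
sim = (ratings.T @ ratings) / np.outer(norms, norms)

def predict(user, item):
    """Score an unrated item as a similarity-weighted average of the user's ratings."""
    rated = ratings[user] > 0
    w = sim[item, rated]
    return float(w @ ratings[user, rated] / w.sum())

print(predict(0, 2))  # how much might user 0 like item 2?
```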

Read more…

Learning Apache Mesos

In the summer of 2012, Accel Partners hosted an invitation-only Big Data conference at Stanford. Ping Li stood near the exit with a checkbook, ready to invest $1MM in pitches for real-time analytics on clusters. However, real-time means many different things. For MetaScale, working on the Sears turnaround, real-time means shrinking a 6-hour window on a mainframe to 6 minutes on Hadoop. For a hedge fund, real-time means compiling Python to run on GPUs where milliseconds matter, or running on FPGA hardware for microsecond response.

With so much emphasis on Hadoop circa 2012, one might think that no other clusters existed. Nothing could be further from the truth: Memcached, Ruby on Rails, Cassandra, Anaconda, Redis, Node.js, and more were all in large-scale production use for mission-critical apps, much closer to revenue than the batch jobs. Google emphasizes a related point in its Omega paper: scheduling batch jobs is not difficult, while scheduling services on a cluster is a hard problem, and that translates to lots of money.
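
To see why service scheduling is the hard, valuable problem, consider a toy model of the two-level scheduling Mesos popularized: the master offers resources to frameworks, and each framework's scheduler accepts or declines. This is a pure-Python illustration, not the Mesos API; the hosts, sizes, and task names are invented.

```python
# Toy model of Mesos-style two-level scheduling: the master offers resources,
# and each framework's scheduler accepts or declines.

offers = [
    {"host": "node1", "cpus": 4, "mem_mb": 8192},
    {"host": "node2", "cpus": 2, "mem_mb": 4096},
]

def service_scheduler(offer):
    """Long-running services are picky: they need capacity and stable placement."""
    if offer["cpus"] >= 4 and offer["mem_mb"] >= 8192:
        return {"task": "web-service", "host": offer["host"]}
    return None  # decline; the master re-offers the resources to other frameworks

def batch_scheduler(offer):
    """Batch jobs are easy to place: any leftover capacity will do."""
    return {"task": "batch-chunk", "host": offer["host"]}

for offer in offers:
    placement = service_scheduler(offer) or batch_scheduler(offer)
    print(placement)
```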

Read more…

2013 Data Science Salary Survey

Tools, Trends, What Pays (and What Doesn't) for Data Professionals

There is no shortage of news about the importance of data or the career opportunities within data. Yet a discussion of modern data tools can help us understand what the current data evolution is all about, and it can also be used as a guide for those considering stepping into the data space or progressing within it.

In our report, 2013 Data Science Salary Survey, we make our own data-driven contribution to the conversation. We surveyed attendees of the Strata Conference in New York and Santa Clara, California, about tool usage and salary.

Strata attendees span a wide spectrum within the data world: Hadoop experts and business leaders, software developers and analysts.  By no means does everyone use data on a “Big” scale, but almost all attendees have some technical aspect to their role.  Strata attendees may not represent a random sample of all professionals working with data, but they do represent a broad slice of the population.  If there is a bias, it is likely toward the forefront of the data space, with attendees using the newest tools (or being very interested in learning about them).

Read more…

Data Transformation

Skills of the Agile Data Wrangler

By Joe Hellerstein and Jeff Heer

As data processing has become more sophisticated, there has been little progress on improving the most time-consuming and tedious parts of the pipeline: Data Transformation tasks, including discovery, structuring, and content cleaning. In standard practice, this kind of “data wrangling” requires writing idiosyncratic scripts in programming languages such as Python or R, or extensive manual editing using interactive tools such as Microsoft Excel. This has two significantly negative outcomes. First, people with highly specialized skills (e.g., statistics, molecular biology, micro-economics) spend far more time on tedious data wrangling tasks than they do exercising their specialty. Second, less technical users are often unable to wrangle their own data. The result in both cases is that significant data is often left unused due to the hurdle of transforming it into usable shape. Sadly, when it comes to standard practice in modern data analysis, “the tedium is the message.” In our upcoming tutorial at Strata, we will survey both the sources of and solutions to the problems of Data Transformation.

Analysts must regularly transform data to make it palatable to databases, statistics packages, and visualization tools. Data sets also regularly contain missing, extreme, duplicate, or erroneous values that can undermine the results of analysis. These anomalies come from various sources, including human data entry error, inconsistencies between integrated data sets, and sensor interference. Our own interviews with data analysts have found that these types of transforms constitute the most tedious component of their analytic process. Flawed analyses due to dirty data are estimated to cost billions of dollars each year. Discovering and correcting data quality issues can also be costly: some estimate cleaning dirty data to account for 80 percent of the cost of data warehousing projects.
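
As a small, hypothetical taste of those transforms, a few lines of pandas can handle the duplicate, missing, and extreme values just described; the column names and valid range below are invented for illustration.

```python
import pandas as pd

# Toy readings with the usual anomalies: a duplicate row, a missing value,
# and an extreme outlier (e.g. sensor interference). Names are invented.
df = pd.DataFrame({
    "sensor_id": [1, 1, 2, 2, 3],
    "reading":   [20.1, 20.1, None, 19.8, 950.0],
})

df = df.drop_duplicates()                                     # double-entered rows
df["reading"] = df["reading"].fillna(df["reading"].median())  # impute gaps
df = df[df["reading"].between(-50, 60)]                       # drop impossible values

print(df)
```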

Read more…

Secure Reporting, a New Life for EveryBlock, and Predictions for 2014

According to the Committee to Protect Journalists, 2013 was the second-worst year on record for imprisoning journalists around the world for doing their work.

Which makes this story from PBS Idea Lab all the more important: How Journalists Can Stay Secure Reporting from Android Devices.  There are tips here on how to anonymize data flowing through your phone using Tor, an open network that helps protect against traffic analysis and network surveillance.  Also, there is information about video publishing software that facilitates YouTube posting, even if the site is blocked in your country. Very cool.

Nieman Lab is publishing an ongoing series of Predictions for Journalism in 2014, and, predictably, the idea of harnessing data looms large. Hassan Hodges, director of innovation for the MLive Media Group, says that in this new journalism landscape, content will start to look more like data and data will look more like content.  Poderopedia founder Miguel Paz says that news organizations should fire the consultants and hire more nerds. There are 51 contributions so far, and counting. It’s good reading.

Read more…

Interactive Visualization of Big Data

By Jeffrey Heer

Human judgment is at the center of successful data analysis. This statement might initially seem at odds with the current Big Data frenzy and its focus on data management and machine learning methods. But while these tools provide immense value, it is important to remember that they are just that: tools. A hammer does not a carpenter make — though it certainly helps.

Consider the words of John Tukey, possibly the greatest statistician of the last half-century: “Nothing — not the careful logic of mathematics, not statistical models and theories, not the awesome arithmetic power of modern computers — nothing can substitute here for the flexibility of the informed human mind. Accordingly, both approaches and techniques need to be structured so as to facilitate human involvement and intervention.” Tukey goes on to write: “Some implications for effective data analysis are: (1) that it is essential to have convenience of interaction of people and intermediate results and (2) that at all stages of data analysis the nature and detail of output need to be matched to the capabilities of the people who use it and want it.” Though Tukey and colleagues voiced these sentiments nearly 50 years ago, they ring even more true today. The interested analyst is at the heart of the Big Data question: how well do our tools help users ask better questions, formulate hypotheses, spot anomalies, correct errors and create improved models and visualizations? To “facilitate human involvement” across “all stages of data analysis” is a grand challenge for our age.
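
Anscombe’s classic quartet makes Tukey’s point concrete: four datasets with near-identical summary statistics that only a plot can tell apart. The sketch below checks two of the sets numerically (values quoted from Anscombe’s 1973 paper); the numbers agree, and only the informed eye, looking at a picture, sees that one is linear and the other a parabola.

```python
import numpy as np

# Two of Anscombe's four sets: near-identical summary statistics,
# completely different shapes -- only plotting them reveals it.
x  = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], dtype=float)
y1 = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])
y2 = np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74])

for name, y in (("I", y1), ("II", y2)):
    r = np.corrcoef(x, y)[0, 1]
    print(f"set {name}: mean_y={y.mean():.2f}  var_y={y.var(ddof=1):.2f}  r={r:.3f}")
# Both lines print roughly the same statistics -- yet set I is noisy-linear
# and set II is a clean parabola.
```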

Read more…