"Big Data" entries

Four short links: 13 January 2015

Slack Culture, Visualizations of Text Analysis, Wearables and Big Data, and Snooping on Keyboards

  1. Building the Workplace We Want (Slack) — culture is the manifestation of what your company values. What you reward, who you hire, how work is done, how decisions are made — all of these things are representations of the things you value and the culture you’ve wittingly or unwittingly created. Nice (in the sense of small, elegant) explanation of what they value at Slack.
  2. Interpretation and Trust: Designing Model-Driven Visualizations for Text Analysis (PDF) — Based on our experiences and a literature review, we distill a set of design recommendations and describe how they promote interpretable and trustworthy visual analysis tools.
  3. The Internet of Things Has Four Big Data Problems (Alistair Croll) — What the IoT needs is data. Big data and the IoT are two sides of the same coin. The IoT collects data from myriad sensors; that data is classified, organized, and used to make automated decisions; and the IoT, in turn, acts on it. It’s precisely this ever-accelerating feedback loop that makes the coin as a whole so compelling. Nowhere are the IoT’s data problems more obvious than with that darling of the connected tomorrow known as the wearable. Yet, few people seem to want to discuss these problems.
  4. Keysweeper — a stealthy Arduino-based device, camouflaged as a functioning USB wall charger, that wirelessly and passively sniffs, decrypts, logs, and reports back (over GSM) all keystrokes from any Microsoft wireless keyboard in the vicinity. Designs and demo videos included.

The Internet of Things has four big data problems

The IoT and big data are two sides of the same coin; building one without considering the other is a recipe for doom.

The Internet of Things (IoT) has a data problem. Well, four data problems. Walking the halls of CES in Las Vegas last week made one thing abundantly clear: the IoT is hot. Everyone is claiming to be the world’s smartest something. But that sprawl of devices, lacking context, with fragmented user groups, is a huge challenge for the burgeoning industry.

What the IoT needs is data. Big data and the IoT are two sides of the same coin. The IoT collects data from myriad sensors; that data is classified, organized, and used to make automated decisions; and the IoT, in turn, acts on it. It’s precisely this ever-accelerating feedback loop that makes the coin as a whole so compelling.

Nowhere are the IoT’s data problems more obvious than with that darling of the connected tomorrow known as the wearable.

Four short links: 12 January 2015

Designed-In Outrage, Continuous Data Processing, Lisp Processors, and Anomaly Detection

  1. The Toxoplasma of Rage — It’s in activists’ interests to destroy their own causes by focusing on the most controversial cases and principles, the ones that muddy the waters and make people oppose them out of spite. And it’s in the media’s interest to help them and egg them on.
  2. Samza: LinkedIn’s Stream-Processing Engine — Samza’s goal is to provide a lightweight framework for continuous data processing. Unlike batch processing systems such as Hadoop, which typically have high-latency responses (sometimes hours), Samza continuously computes results as data arrives, which makes sub-second response times possible.
  3. Design of LISP-Based Processors (PDF) — 1979 MIT AI Lab memo on design of hardware specifically for Lisp. Legendary subtitle! LAMBDA: The Ultimate Opcode.
  4. AnomalyDetection — Twitter’s R package for detecting anomalies in time-series data. (via Twitter Engineering blog) A toy sketch of the underlying idea appears after this list.
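
Twitter’s package implements a Seasonal Hybrid ESD test in R. As a rough illustration of the simpler idea underneath (score each point against its recent history), here is a toy Python sketch; it is not the package’s algorithm.

```python
# Toy anomaly detector: flag points more than `threshold` standard
# deviations from a trailing-window mean. This is NOT the Seasonal
# Hybrid ESD test Twitter's package implements, just a sketch of the
# general "score each point against recent history" idea.
from statistics import mean, stdev

def rolling_zscore_anomalies(series, window=30, threshold=3.0):
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append((i, series[i]))
    return anomalies

# A flat-ish series with one injected spike at index 40.
data = [10.0 + 0.1 * (i % 5) for i in range(60)]
data[40] = 25.0
print(rolling_zscore_anomalies(data))  # [(40, 25.0)]
```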

Becoming data driven

DJ Patil and Hilary Mason's Data Driven: Creating a Data Culture is about building organizations that can take advantage of data.

I’m excited to see that DJ Patil and Hilary Mason’s new ebook Data Driven: Creating a Data Culture is now available. It’s been a lot of fun working with DJ and Hilary over the past few months.

I’m not going to summarize their work here: you should read it. It’s based on the realization that merely assembling a bunch of people who understand statistics doesn’t do the job. You end up with a group of data specialists on the margins of the organization, who don’t have the ability to do anything more than be frustrated. If you don’t develop a data culture, if people don’t understand the value of data and how it can be used to inform discussions, you can build all the dashboards and Hadoop clusters you want, but they won’t help you.

Data is a powerful tool, but it’s easy to jump on the data bandwagon and miss the benefits. Data Driven: Creating a Data Culture is about building organizations that can really take advantage of data. Is that organization yours?


Lessons from next-generation data wrangling tools

Drawing inspiration from recent advances in data preparation.

One of the trends we’re following is the rise of applications that combine big data, algorithms, and efficient user interfaces. As I noted in an earlier post, our interest stems from both consumer apps and tools that democratize data analysis. It’s no surprise that one of the areas where “cognitive augmentation” is playing out is data preparation and curation. Data scientists continue to spend a lot of their time on data wrangling, and the increasing number of (public and internal) data sources paves the way for tools that can increase productivity in this critical area.

At Strata + Hadoop World in New York, two presentations from academic spinoff start-ups — Mike Stonebraker of Tamr, and Joe Hellerstein and Sean Kandel of Trifacta — focused on data preparation and curation. While data wrangling is just one component of a data science pipeline, and granted we’re still in the early days of productivity tools in data science, some of the lessons these companies have learned extend beyond data preparation.

Scalability ~ data variety and size

Not only are enterprises faced with many data stores and spreadsheets; data scientists also have many more (public and internal) data sources they want to incorporate. The absence of a global data model means that integrating data silos and data sources requires tools for consolidating schemas.
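
As a minimal sketch of what schema consolidation means in practice (the data sources and column names below are hypothetical, and this elides the record-matching work that tools like Tamr automate):

```python
# Map each silo's columns onto one shared schema, then stack the
# rows. Sources and column names are invented for illustration.
import pandas as pd

crm = pd.DataFrame({"cust_name": ["Acme"], "rev": [1200]})
erp = pd.DataFrame({"client": ["Acme Corp"], "revenue_usd": [3400]})

combined = pd.concat(
    [
        crm.rename(columns={"cust_name": "customer", "rev": "revenue"}),
        erp.rename(columns={"client": "customer", "revenue_usd": "revenue"}),
    ],
    ignore_index=True,
)
print(combined)  # one table with columns: customer, revenue
```

Note that renaming columns is the easy part; deciding that “Acme” and “Acme Corp” are the same entity is where the hard, machine-assisted curation work lives.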

Random samples are great for working through the initial phases, particularly while you’re still familiarizing yourself with a new data set. Trifacta lets users work with samples while they’re developing data wrangling “scripts” that can be used on full data sets.
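
A rough sketch of that sample-then-scale pattern in plain Python (this is the workflow idea, not Trifacta’s actual API):

```python
# Develop the wrangling "script" against a small random sample,
# then apply the identical function to the full data set. The
# record fields here are hypothetical.
import random

def wrangle(record):
    # Hypothetical cleanup: drop records missing a user id,
    # normalize the email field.
    if not record.get("user_id"):
        return None
    record["email"] = record.get("email", "").strip().lower()
    return record

full_data = [
    {"user_id": i, "email": f"  User{i}@Example.COM "} for i in range(1, 100_001)
]

# Iterate quickly on a sample while designing the transformations...
sample = random.sample(full_data, 100)
assert all(wrangle(dict(r)) is not None for r in sample)

# ...then run the same script over the full data set.
cleaned = [w for r in full_data if (w := wrangle(r)) is not None]
print(len(cleaned))  # 100000
```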


Top keynotes at Strata Conference and Strata + Hadoop World 2014

From data privacy to real-world problem solving, O’Reilly’s data editors highlight the best of the best talks from 2014.

2014 was a year of tremendous growth in the field of data, as it was for Strata and Strata + Hadoop World, O’Reilly’s and Cloudera’s series of data conferences. At Strata, keynotes, individual sessions, and tracks like Hardcore Data Science, Hadoop and Beyond, Data-Driven Business Day, and Design & Interfaces explore the cutting-edge aspects of how to gather, store, wrangle, analyze, visualize, and make decisions with the vast amounts of data in our hands today. Looking back on the past year of Strata, the O’Reilly data editors chose our top keynotes from Strata Santa Clara, Strata Barcelona, and Strata + Hadoop World NYC.

It was tough to winnow the list down from an exceptional set of keynotes. Visit the O’Reilly YouTube channel for a larger set of 2014 keynotes, or Safari for videos of the keynotes and many of the conference sessions.

Best of the best

  • Julia Angwin reframes the issue of data privacy as justice, due process, and human rights (and her account of trying to buy better privacy goods and services is both instructive and funny).



The promise and problems of big data

A look at the social and moral implications of living in a deeply connected, analyzed, and informed world.

Editor’s note: this is an excerpt from our new report Data: Emerging Trends and Technologies, by Alistair Croll. You can download the free report here.

We’ll now look at both the light and the shadows of this new dawn, the social and moral implications of living in a deeply connected, analyzed, and informed world. This is both the promise and the peril of big data in an age of widespread sensors, fast networks, and distributed computing.

Solving the big problems

The planet’s systems are under strain from a burgeoning population. Scientists warn of rising tides, droughts, ocean acidity, and accelerating extinction. Medication-resistant diseases, outbreaks fueled by globalization, and myriad other semi-apocalyptic Horsemen ride across the horizon.

Can data fix these problems? Can we extend agriculture with data? Find new cures? Track the spread of disease? Understand weather and marine patterns? General Electric’s Bill Ruh says that while the company will continue to innovate in materials sciences, the place where it will see real gains is in analytics.

It’s often been said that there’s nothing new about big data. The “iron triangle” of Volume, Velocity, and Variety that Doug Laney coined in 2001 has been a constraint on all data since the first database. Basically, you could have any two you want fairly affordably. Consider:

  • A coin-sorting machine sorts a large volume of coins rapidly, but assumes a small variety of coins. It wouldn’t work well if there were hundreds of coin types.
  • A public library, organized by the Dewey Decimal System, has a wide variety of books and topics, and a large volume of those books — but stacking and retrieving the books happens at a slow velocity.

What’s new about big data is that the cost of getting all three Vs has become so cheap it’s almost not worth billing for. A Google search happens with great alacrity, combs the sum of online knowledge, and retrieves a huge variety of content types.

Four short links: 6 January 2015

IoT Protocols, Predictive Limits, Machine Learning and Security, and 3D-Printing Electronics

  1. Exploring the Protocols of the Internet of Things (SparkFun) — Arduino and Arduino-like IoT “things” especially, with their limited flash and SRAM, can benefit from specially crafted IoT protocols. (A taste of one such protocol follows this list.)
  2. Complexity Salon: Ebola (willowbl00) — These notes were taken at the 2014.Dec.18 New England Complex Systems Institute Salon focused on Ebola. […] Why don’t we engage in risks in a more serious way? Everyone thinks their prior experience indicates what will happen in the future. Look at past Ebola! It died down before going far, surely it won’t be bad in the future.
  3. Machine Learning Methods for Computer Security (PDF) — papers on topics such as adversarial machine learning, attacking pattern recognition systems, data privacy and machine learning, machine learning in forensics, and deceiving authorship detection.
  4. Voxel8 — Using Voxel8’s 3D printer, you can co-print matrix materials such as thermoplastics and highly conductive silver inks, enabling customized electronic devices like quadcopters, electromagnets, and fully functional 3D electromechanical assemblies.
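
As a concrete taste of a lightweight IoT protocol, here is a minimal MQTT publish using the paho-mqtt Python client (the 1.x API); the broker host and topic are hypothetical:

```python
# Publish a sensor reading over MQTT, a lightweight pub/sub protocol
# well suited to constrained devices. Assumes `pip install paho-mqtt`
# (1.x API); broker.example.com and the topic are placeholders.
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883, keepalive=60)

# QoS 1 means the broker acknowledges receipt, a common middle
# ground for battery- and bandwidth-constrained "things".
client.publish("home/livingroom/temperature", payload="21.5", qos=1)
client.disconnect()
```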
Four short links: 1 January 2015

Wearables Killer App, Open Government Data, Gender From Name, and DVCS for Geodata

  1. Killer App for Wearables (Fortune) — While many corporations are still waiting to see what the “killer app” for wearables is, Disney invented one. The company launched the RFID-enabled MagicBands just over a year ago. Since then, they’ve given out more than 9 million of them. Disney says 75% of MagicBand users engage with the “experience”—a website called MyMagic+—before their visit to the park. Online, they can connect their wristband to a credit card, book fast passes (which let you reserve up to three rides without having to wait in line), and even order food ahead of time. […] Already, Disney says, MagicBands have led to increased spending at the park.
  2. USA Govt Depts Progress on Open Data Policy (labs.data.gov) — nice dashboard, but who will be watching it and what squeeze will they apply?
  3. globalnamedata — We have collected birth record data from the United States and the United Kingdom across a number of years for all births in the two countries and are releasing the collected and cleaned up data here. We have also generated a simple gender classifier based on incidence of gender by name. (A toy classifier in that spirit follows this list.)
  4. geogig — an open source tool that draws inspiration from Git, but adapts its core concepts to handle distributed versioning of geospatial data.
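
A toy version of an incidence-based gender classifier like the one globalnamedata describes; the counts below are invented, not the project’s data:

```python
# Classify by which gender a name appears under more often in birth
# records, reporting the ratio as a rough confidence. Counts are
# made up for illustration; the real project derives them from
# US and UK birth data.
NAME_COUNTS = {
    # name: (female_births, male_births)
    "emma":   (195_000, 1_200),
    "james":  (4_800, 310_000),
    "jordan": (98_000, 120_000),
}

def classify(name):
    female, male = NAME_COUNTS.get(name.lower(), (0, 0))
    total = female + male
    if total == 0:
        return "unknown", 0.0
    if female >= male:
        return "female", female / total
    return "male", male / total

print(classify("Emma"))    # ('female', 0.99...)
print(classify("Jordan"))  # ('male', 0.55...) -- genuinely ambiguous
```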

Apache Spark’s journey from academia to industry

In this O'Reilly Data Show Podcast: Ion Stoica talks about the rise of Apache Spark and Apache Mesos.

Three projects from UC Berkeley’s AMPLab have been keenly adopted by industry: Apache Mesos, Apache Spark, and Tachyon. As an early user, I’ve enjoyed watching Spark go from an academic lab to the most active open source project in big data. In my recent travels, I’ve met Spark users from companies of all sizes and from many industries. I’ve also spoken with companies that came of age before Spark was available or mature enough, and many are replacing homegrown tools with Spark. (Full disclosure: I’m an advisor to Databricks, a start-up commercializing Apache Spark.)

Subscribe to the O’Reilly Data Show Podcast

iTunes, SoundCloud, RSS

A few months ago, I spoke with UC Berkeley Professor and Databricks CEO Ion Stoica about the early days of Spark and the Berkeley Data Analytics Stack. Ion noted that by the time his students began work on Spark and Mesos, his experience at his other start-up Conviva had already informed some of the design choices:

“Actually, this story started back in 2009, and it started with a different project, Mesos. So, this was a class project in a class I taught in the spring of 2009. And that was to build a cluster management system, to be able to support multiple cluster computing frameworks like Hadoop, at that time, MPI and others. To share the same cluster as the data in the cluster. Pretty soon after that, we thought about what to build on top of Mesos, and that was Spark. Initially, we wanted to demonstrate that it was actually easier to build a new framework from scratch on top of Mesos, and of course we wanted it to be also special. So, we targeted workloads for which Hadoop at that time was not good enough. Hadoop was targeting batch computation. So, we targeted interactive queries and iterative computation, like machine learning.”
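
A minimal PySpark sketch of the workload Stoica describes, caching a data set in memory and reusing it across interactive queries and iterations (the input path is hypothetical):

```python
# Cache a dataset once, then reuse it across queries and iterative
# passes: the pattern batch-oriented MapReduce handled poorly.
from pyspark import SparkContext

sc = SparkContext("local[*]", "iterative-demo")
events = sc.textFile("hdfs:///data/events.log").cache()  # hypothetical path

# "Interactive" queries: the second and later actions hit the
# in-memory copy instead of rereading from disk.
print(events.count())
print(events.filter(lambda line: "ERROR" in line).count())

# Iterative computation: repeated passes over the same cached data,
# as in a machine learning training loop.
for threshold in (10, 20, 30):
    n = events.filter(lambda line, t=threshold: len(line) > t).count()
    print(threshold, n)

sc.stop()
```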
