Becoming data driven

DJ Patil and Hilary Mason's Data Driven: Creating a Data Culture is about building organizations that can take advantage of data.

I’m excited to see that DJ Patil and Hilary Mason’s new ebook Data Driven: Creating a Data Culture is now available. It’s been a lot of fun working with DJ and Hilary over the past few months.

I’m not going to summarize their work here: you should read it. It’s based on the realization that merely assembling a bunch of people who understand statistics doesn’t do the job. You end up with a group of data specialists on the margins of the organization, who don’t have the ability to do anything more than be frustrated. If you don’t develop a data culture, if people don’t understand the value of data and how it can be used to inform discussions, you can build all the dashboards and Hadoop clusters you want, but they won’t help you.

Data is a powerful tool, but it’s easy to jump on the data bandwagon and miss the benefits. Data Driven: Creating a Data Culture is about building organizations that can really take advantage of data. Is that organization yours?

The data lake model is a powerhouse for invention

In this O'Reilly Radar Podcast: Edd Dumbill on the data lake, and Rajiv Maheswaran on the science of moving dots.

In a recent blog post, Edd Dumbill, VP of strategy at Silicon Valley Data Science, wrote about the phrase “data lake.” Likening it to a dream, he described a data lake as “a place with data-centered architecture, where silos are minimized, and processing happens with little friction in a scalable, distributed environment…Data itself is no longer restrained by initial schema decisions, and can be exploited more freely by the enterprise.” He explained that he called it a “dream” because “we’ve a way to go to make the vision come true” — but noted he’s optimistic the dream can be realized.

In this Radar Podcast episode, O’Reilly’s Mac Slocum sits down with Dumbill to talk about the data lake, the opportunities the model presents, and the driving forces behind the concept.

Lessons from next-generation data wrangling tools

Drawing inspiration from recent advances in data preparation.

One of the trends we’re following is the rise of applications that combine big data, algorithms, and efficient user interfaces. As I noted in an earlier post, our interest stems from both consumer apps and tools that democratize data analysis. It’s no surprise that one of the areas where “cognitive augmentation” is playing out is in data preparation and curation. Data scientists continue to spend a lot of their time on data wrangling, and the increasing number of (public and internal) data sources paves the way for tools that can increase productivity in this critical area.

At Strata + Hadoop World in New York, two presentations from academic spinoff start-ups — Mike Stonebraker of Tamr, and Joe Hellerstein and Sean Kandel of Trifacta — focused on data preparation and curation. While data wrangling is just one component of a data science pipeline, and productivity tools for data science are still in their early days, some of the lessons these companies have learned extend beyond data preparation.

Scalability ~ data variety and size

Not only are enterprises faced with many data stores and spreadsheets, but data scientists also have many more (public and internal) data sources they want to incorporate. The absence of a global data model means that integrating data silos and data sources requires tools for consolidating schemas.
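To make the schema consolidation problem concrete, here is a minimal pandas sketch; the data sources, column names, and hand-written mapping are all made up, and the mapping step is the part that tools like Tamr aim to automate at scale rather than leave to analysts:

    import pandas as pd

    # Two silos describing the same customers under different schemas.
    crm = pd.DataFrame({"cust_name": ["Acme Corp"], "zip": ["10001"]})
    billing = pd.DataFrame({"customer": ["Acme Corporation"], "postal_code": ["10001"]})

    # Hand-written mappings from each silo's schema onto a shared schema;
    # this is the step consolidation tools try to learn automatically.
    crm_to_common = {"cust_name": "name", "zip": "postal_code"}
    billing_to_common = {"customer": "name", "postal_code": "postal_code"}

    combined = pd.concat(
        [crm.rename(columns=crm_to_common), billing.rename(columns=billing_to_common)],
        ignore_index=True,
    )
    print(combined)

Even in this toy case, the consolidated table still contains two spellings of the same customer; matching those records is the entity resolution problem that sits alongside schema consolidation.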

Random samples are great for working through the initial phases, particularly while you’re still familiarizing yourself with a new data set. Trifacta lets users work with samples while they’re developing data wrangling “scripts” that can be used on full data sets.
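As a rough illustration of that sample-first workflow (ordinary pandas rather than Trifacta's product; the file name and columns are hypothetical):

    import pandas as pd

    full = pd.read_csv("events.csv")                   # hypothetical full data set
    sample = full.sample(frac=0.01, random_state=42)   # small random sample to iterate on

    def wrangle(df: pd.DataFrame) -> pd.DataFrame:
        """The 'script' developed interactively against the sample."""
        out = df.dropna(subset=["user_id"]).copy()
        out["timestamp"] = pd.to_datetime(out["timestamp"], errors="coerce")
        return out

    wrangle(sample)         # fast iteration while exploring
    clean = wrangle(full)   # once it looks right, run it on the full data set

The point is that the wrangling logic is expressed once, against a cheap sample, and then reused unchanged on the full data.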

Security principles of bitcoin

The core principle in bitcoin is decentralization, and it has important implications for security.

Editor’s note: this is an excerpt from Chapter 10 of our recently released book Mastering Bitcoin, by Andreas Antonopoulos. You can read the full chapter here. Antonopoulos will be speaking at our upcoming event Bitcoin & the Blockchain, January 27, 2015, in San Francisco. Find out more about the event and reserve your spot here.

Securing bitcoin is challenging because bitcoin is not an abstract reference to value, like a balance in a bank account. Bitcoin is very much like digital cash or gold. You’ve probably heard the expression “Possession is nine-tenths of the law.” Well, in bitcoin, possession is ten-tenths of the law. Possession of the keys to unlock the bitcoin is equivalent to possession of cash or a chunk of precious metal. You can lose it, misplace it, have it stolen, or accidentally give the wrong amount to someone. In every one of those cases, end users would have no recourse, just as if they dropped cash on a public sidewalk.

However, bitcoin has capabilities that cash, gold, and bank accounts do not. A bitcoin wallet, containing your keys, can be backed up like any file. It can be stored in multiple copies, even printed on paper for hardcopy backup. You can’t “back up” cash, gold, or bank accounts. Bitcoin is different enough from anything that has come before that we need to think about bitcoin security in a novel way too.
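As a minimal sketch of what that means in practice (this is not code from the book; it assumes the third-party Python ecdsa package, and a placeholder message stands in for real transaction data):

    import ecdsa
    from pathlib import Path

    # Generate a private key on bitcoin's curve (secp256k1).
    signing_key = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)
    verifying_key = signing_key.get_verifying_key()

    # "Spending" ultimately means producing a valid signature over a transaction;
    # here a placeholder message stands in for real transaction data.
    message = b"example transaction data"
    signature = signing_key.sign(message)
    assert verifying_key.verify(signature, message)   # proof of possession

    # The wallet is, at bottom, just this key material, so backing it up
    # (or stealing it) is ordinary file copying.
    Path("wallet-backup.hex").write_text(signing_key.to_string().hex())

Whoever holds a copy of that file can produce valid signatures, which is exactly why possession of the keys is possession of the bitcoin.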

Security principles

The core principle in bitcoin is decentralization, and it has important implications for security. A centralized model, such as a traditional bank or payment network, depends on access control and vetting to keep bad actors out of the system. By comparison, a decentralized system like bitcoin pushes the responsibility and control to the end users. Because security of the network is based on proof of work, not access control, the network can be open and no encryption is required for bitcoin traffic.
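As a toy illustration of proof of work (not bitcoin's actual mining code), the sketch below searches for a nonce whose double-SHA256 hash falls below a difficulty target; finding such a nonce is costly, while verifying it takes a single hash:

    import hashlib

    def mine(header: bytes, difficulty_bits: int = 16) -> int:
        """Search for a nonce whose double-SHA256 hash is below the target."""
        target = 2 ** (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(
                hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
            ).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    print("found nonce:", mine(b"example block header"))

Because anyone can perform and verify this work, the network stays open: no gatekeeper decides who may participate.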

The promise and problems of big data

A look at the social and moral implications of living in a deeply connected, analyzed, and informed world.

Editor’s note: this is an excerpt from our new report Data: Emerging Trends and Technologies, by Alistair Croll. You can download the free report here.

We’ll now look at both the light and the shadows of this new dawn, the social and moral implications of living in a deeply connected, analyzed, and informed world. This is both the promise and the peril of big data in an age of widespread sensors, fast networks, and distributed computing.

Solving the big problems

The planet’s systems are under strain from a burgeoning population. Scientists warn of rising tides, droughts, ocean acidity, and accelerating extinction. Medication-resistant diseases, outbreaks fueled by globalization, and myriad other semi-apocalyptic Horsemen ride across the horizon.

Can data fix these problems? Can we extend agriculture with data? Find new cures? Track the spread of disease? Understand weather and marine patterns? General Electric’s Bill Ruh says that while the company will continue to innovate in materials sciences, the place where it will see real gains is in analytics.

It’s often been said that there’s nothing new about big data. The “iron triangle” of Volume, Velocity, and Variety that Doug Laney coined in 2001 has been a constraint on all data since the first database. Basically, you could have any two you want fairly affordably. Consider:

  • A coin-sorting machine sorts a large volume of coins rapidly, but assumes a small variety of coins. It wouldn’t work well if there were hundreds of coin types.
  • A public library, organized by the Dewey Decimal System, has a wide variety of books and topics, and a large volume of those books — but stacking and retrieving the books happens at a slow velocity.

What’s new about big data is that the cost of getting all three Vs has become so cheap it’s almost not worth billing for. A Google search happens with great alacrity, combs the sum of online knowledge, and retrieves a huge variety of content types.

The computing of distrust

A look at what lies ahead in the disenchanted age of postmodern computing.

Sometime last summer, I ran into the phrase “postmodern computing.” I don’t remember where, but it struck me as a powerful way to understand an important shift in the industry. What is different? How are 2014 and 2015 different from 2004 and 2005?

If we’re going to understand what “postmodern computing” means, we first have to understand “modern” computing. And to do that, we also have to understand modernism and postmodernism. After all, “modern” and “postmodern” only have meaning relative to each other; they’re both about a particular historical arc, not a single moment in time.

Some years back, I was given a history of St. Barbara’s Greek Orthodox Church in New Haven, carefully annotated wherever a member of my family had played a part. One story that stood out from early in the 20th century involved AHEPA: the American Hellenic Educational Progressive Association. The mere existence of that organization in the 1920s says more about modernism than any number of literary analyses. In AHEPA, and in many other similar societies crossing many churches and many ethnic groups, people were betting on the future. The future is going to be better than the present. We were poor dirt farmers in the Old Country; now we’re here, and we’re going to build a better future for ourselves and our children.
