Insight and analysis on the Internet of Things and the new hardware movement.
Practitioners, entrepreneurs, academics, and analysts came together in San Francisco this week to discuss the Internet of Things and the new hardware movement at the O’Reilly 2015 Solid Conference. Below we’ve assembled notable keynotes and interviews from the event.
Lock in, lock out: DRM in the real world
Author and activist Cory Doctorow uses his Solid keynote to passionately explain how computers are already entwined in our lives and our bodies, which means laws that support lock-in are much more than inconveniences. Doctorow also discusses Apollo 1201, a project from the Electronic Frontier Foundation that aims to eradicate digital rights management (DRM).
How we make cars is a bigger environmental issue than how we fuel them.
Around two billion cars have been built over the last 115 years; twice that number will be built over the next 35-40 years. The environmental and health impacts will be enormous. Some think the solution is electric cars or other low- or zero-emission vehicles. The truth is, if you look at the emissions of a car over its total life, you quickly discover that tailpipe emissions are just the tip of the iceberg.
An 85 kWh electric SUV may not have a tailpipe, but it still has an enormous impact on our environment and health. A far greater share of a car's total emissions comes from the materials and energy required to build it (mining, processing, manufacturing, and disposal), not from its operation. As leading environmental economist and vice chair of the National Academy of Sciences Maureen Cropper notes, "Whether we are talking about a conventional gasoline-powered automobile, an electric vehicle, or a hybrid, most of the damages are actually coming from stages other than just the driving of the vehicle." If business continues as usual, we could triple the total global pollution generated by automobiles as we go from two billion to six billion vehicles manufactured.
The conclusion from this is straightforward: how we make our cars is actually a bigger environmental issue than how we fuel our cars. We need to dematerialize — dramatically reduce the material and energy required to build cars — and we need to do it now. Read more…
WebAssembly (wasm) skips that final step, producing a binary format, technically a compressed AST encoding. Unless you're building compilers yourself, you can think of wasm as a bytecode system. There is a text format for debugging, but the emphasis on binary yields substantial extra speed: it skips parsing and minimizes decompression.
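To make the "binary, not text" point concrete, here is a small Python sketch (not from the post) that checks the fixed eight-byte preamble every wasm binary module begins with: the magic bytes `\0asm` followed by a little-endian version number. A browser can recognize and start decoding this immediately, with no source text to parse.

```python
# Every WebAssembly binary module starts with a fixed 8-byte header:
# the magic bytes "\0asm" (0x00 0x61 0x73 0x6D) and a version number.
WASM_MAGIC = b"\x00asm"
WASM_VERSION = b"\x01\x00\x00\x00"  # version 1, little-endian

def looks_like_wasm(module_bytes: bytes) -> bool:
    """Cheap structural check: does this byte string begin with a wasm header?"""
    return module_bytes[:8] == WASM_MAGIC + WASM_VERSION

# An empty-but-valid module is just the header with no sections after it.
empty_module = WASM_MAGIC + WASM_VERSION
print(looks_like_wasm(empty_module))  # True
print(looks_like_wasm(b"(module)"))   # False: that's the text format, which must be parsed
```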
The O’Reilly Radar Podcast: Turing Award winner Michael Stonebraker on the future of data science.
Subscribe to the O’Reilly Radar Podcast to track the technologies and people that will shape our world in the years to come.
In March 2015, database pioneer Michael Stonebraker was awarded the 2014 ACM Turing Award “for fundamental contributions to the concepts and practices underlying modern database systems.” In this week’s Radar Podcast, O’Reilly’s Mike Hendrickson sits down with Stonebraker to talk about winning the award, the future of data science, and the importance — and difficulty — of data curation.
One size does not fit all
Stonebraker notes that since about 2000, everyone has realized they need a database system, across markets and across industries. “Now, it’s everybody who’s got a big data problem,” he says. “The business data processing solution simply doesn’t fit all of these other marketplaces.” Stonebraker talks about the future of data science — and data scientists — and the tools and skill sets that are going to be required:
It’s all going to move to data science as soon as enough data scientists get trained by our universities to do this stuff. It’s fairly clear to me that you’re probably not going to retread a business analyst to be a data scientist because you’ve got to know statistics, you’ve got to know machine learning. You’ve got to know what regression means, what Naïve Bayes means, what k-Nearest Neighbors means. It’s all statistics.
All of that stuff turns out to be defined on arrays. It’s not defined on tables. The tools of future data scientists are going to be array-based tools. Those may live on top of relational database systems. They may live on top of an array database system, or perhaps something else. It’s completely open.
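Stonebraker's point that these techniques are "defined on arrays" can be illustrated with a toy k-Nearest Neighbors classifier. This sketch (plain stdlib Python, not any tool he mentions) operates directly on arrays of coordinates, with no tables or SQL in sight:

```python
# Toy k-Nearest Neighbors: classify a point by majority vote among the
# k closest training points. Everything here is arrays of numbers.
from collections import Counter
import math

def knn_classify(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Sort (distance, label) pairs by Euclidean distance to the query.
    dists = sorted(
        (math.dist(point, query), label) for point, label in zip(train, labels)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Two small clusters: three points labeled "a" near the origin, two "b" near (5, 5).
points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
labels = ["a", "a", "a", "b", "b"]
print(knn_classify(points, labels, (0.05, 0.05)))  # -> a
print(knn_classify(points, labels, (5.05, 5.05)))  # -> b
```

The same array-first shape holds for regression and Naïve Bayes, which is why array-based tools (whether layered on a relational system or an array database) fit this work naturally.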
Entering the hardware space is easier than ever. Succeeding is a different matter.
Because of recent innovations in prototyping, crowdfunding, marketing, and manufacturing, it has never been easier — or cheaper — to launch a hardware startup than it is now. But while turning a hardware project into a product is now relatively easy, doing it successfully is still hard.
Renee DiResta and Ryan Vinyard, co-authors of The Hardware Startup, recently got together with Solid Conference chair Jon Bruner to discuss the startup landscape in hardware and the IoT, and what entrepreneurs need to know to build their businesses. Read more…
The O’Reilly Data Show Podcast: Phil Liu on the evolution of metric monitoring tools and cloud computing.
One of the main sources of real-time data processing tools is IT operations. In fact, a previous post I wrote on the re-emergence of real-time was, to a large extent, prompted by my discussions with engineers and entrepreneurs building monitoring tools for IT operations. In many ways, data centers are perfect laboratories: they are controlled environments managed by teams willing to instrument devices and software, and to monitor fine-grained metrics.
During a recent episode of the O’Reilly Data Show Podcast, I caught up with Phil Liu, co-founder and CTO of SignalFx, a SF Bay Area startup focused on building self-service monitoring tools for time series. We discussed hiring and building teams in the age of cloud computing, building tools for monitoring large numbers of time series, and lessons he’s learned from managing teams at leading technology companies.
Evolution of monitoring tools
Having worked at LoudCloud, Opsware, and Facebook, Liu has seen firsthand the evolution of real-time monitoring tools and platforms. He described watching the number of metrics grow to volumes that require large compute clusters:
One of the first services I worked on at LoudCloud was a service called MyLoudCloud. Essentially, that was a monitoring portal for all LoudCloud customers. At the time, [the way] we thought about monitoring was still a per-instance-oriented monitoring system. [Later], I was one of the first engineers on the operational side of Facebook and eventually became part of the infrastructure team at Facebook. When I joined, Facebook was basically using a collection of open source software for monitoring and configuration, things that everybody knows: Nagios, Ganglia. It started out using just per-instance monitoring techniques, basically the same techniques we used back at LoudCloud. But, interestingly, as Facebook grew, this per-instance-oriented monitoring very quickly no longer worked, because we went from tens of thousands of servers to hundreds of thousands of servers, and from tens of services to hundreds and thousands of services internally.
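The shift Liu describes, from alerting on each instance to monitoring a service as a whole, can be sketched as rolling per-instance samples up into one aggregated time series per service. All names, numbers, and the sample layout below are illustrative, not SignalFx's API:

```python
# Roll per-instance metric samples up into a service-level time series,
# so alerts can fire on the aggregate instead of on each of thousands of hosts.
from collections import defaultdict

# (timestamp, service, instance, value) samples; names and numbers are made up.
samples = [
    (0, "web", "web-001", 30), (0, "web", "web-002", 50),
    (1, "web", "web-001", 90), (1, "web", "web-002", 30),
]

def service_series(samples, service, agg=lambda vs: sum(vs) / len(vs)):
    """Collapse per-instance samples into one aggregated value per timestamp."""
    by_ts = defaultdict(list)
    for ts, svc, _instance, value in samples:
        if svc == service:
            by_ts[ts].append(value)
    return {ts: agg(values) for ts, values in sorted(by_ts.items())}

# web-001 spikes to 90 at t=1; the mean smooths it, max surfaces it.
print(service_series(samples, "web"))           # -> {0: 40.0, 1: 60.0}
print(service_series(samples, "web", agg=max))  # -> {0: 50, 1: 90}
```

Swapping the aggregation function (mean, max, percentiles) is the design lever here: per-instance detail is still available, but the series that operators watch scales with the number of services, not the number of servers.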