Drone delivery: Real or fantasy?

For the time being, we won't see drone delivery outside of a few very specialized use cases.

I read with some interest an article on the Robotenomics blog about the feasibility of drone delivery. It’s an interesting idea, and the article makes a better case than anything I’ve seen before. But I’m still skeptical.

The article quotes direct operating costs (essentially fuel) of roughly $0.10 for a 2-kilogram payload delivered 10 kilometers. (For US residents, that’s 4.4 pounds and about six miles.) That’s reasonable enough.

The problem comes when he compares that figure to Amazon’s current shipping costs of $2 to $8. That range sounds roughly like what Amazon pays UPS or FedEx, and those rates aren’t for delivering four pounds within a six-mile radius. Nor are they just fuel: they cover the entire cost of delivery, including maintenance, administrative overhead, executive bonuses, and (oh, yes) the driver’s salary.
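
To make the apples-to-oranges problem concrete, here is a back-of-envelope sketch in Python. The $0.10 energy figure and the $2 to $8 carrier range come from the paragraphs above; everything else is illustrative.

```python
# Back-of-envelope comparison of the quoted figures (illustrative only).
drone_energy_cost = 0.10                          # USD: direct operating cost, 2 kg over 10 km
carrier_cost_low, carrier_cost_high = 2.00, 8.00  # USD: Amazon's quoted all-in shipping cost

# The naive ratio makes drones look 20x to 80x cheaper...
print(f"naive savings: {carrier_cost_low / drone_energy_cost:.0f}x "
      f"to {carrier_cost_high / drone_energy_cost:.0f}x")

# ...but the carrier figure bundles maintenance, overhead, and the driver's
# salary, while the drone figure covers energy alone. Until the drone's
# all-in cost per delivery is known, the two numbers can't be compared.
```

Read more…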

Designing for the unknown

Simon King on design intuition and designing solutions that work for the user both now and in an unforeseen future.

Design principles are being applied in all aspects of business today — they are no longer limited to graphic design, product design, web design or even experience design. I recently had the chance to speak with Simon King, design director and interaction design community lead at IDEO in Chicago. In our conversation, King talks about balancing design intuition with prototyping and testing, designing beyond the screen, and designing for the unknown.

At IDEO, they take a human-centered approach, observing users in their environments. That research informs their design process, says King, but they also rely heavily on collaborative design teams with diverse experience, which helps bring a fresh perspective to every project:

“Our project teams are generally dedicated to working together on one topic. They draw from all this inspiration. They utilize their intuition. They generate a bunch of ideas and build on the ideas of others. That’s really key to having these project teams of diverse designers together so we can build on each other’s ideas. Another big part of it is that in every project, people are working on totally different domains. They’re working in different industries. They’re working for different types of users. We can really cross-pollinate the things that we’ve seen in one area and apply them to another area during that ideation process.”

Read more…

Building Apache Kafka from scratch

In this episode of the O'Reilly Data Show Podcast, Jay Kreps talks about data integration, event data, and the Internet of Things.

At the heart of big data platforms are robust data flows that connect diverse data sources. Over the past few years, a new set of (mostly open source) software components has become critical to tackling data integration problems at scale. By now, many people have heard of tools like Hadoop, Spark, and NoSQL databases, but there are a number of lesser-known components that are “hidden” beneath the surface.

In my conversations with data engineers tasked with building data platforms, one tool stands out: Apache Kafka, a distributed messaging system that originated from LinkedIn. It’s used to synchronize data between systems and has emerged as an important component in real-time analytics.
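
To give a flavor of what that looks like in practice, here is a minimal producer/consumer round trip using the kafka-python client. This is a sketch, not anything from the episode: the broker address and the "events" topic name are placeholder assumptions.

```python
# Minimal Kafka round trip with the kafka-python client.
# Assumes a broker at localhost:9092 and a topic named "events" (placeholders).
from kafka import KafkaProducer, KafkaConsumer

# One system publishes change events to a topic...
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("events", b'{"user": 42, "action": "page_view"}')
producer.flush()  # block until the message is actually sent

# ...and any number of downstream systems consume the same stream
# independently, which is how Kafka keeps systems synchronized.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",   # start from the beginning of the topic
    consumer_timeout_ms=5000,       # stop iterating after 5 s of silence
)
for message in consumer:
    print(message.value)
```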

In my travels over the past year, I’ve met engineers across many industries who use Apache Kafka in production. A few months ago, I sat down with O’Reilly author and Radar contributor Jay Kreps, a highly regarded data engineer, former technical lead for Online Data Infrastructure at LinkedIn, and now CEO and co-founder of Confluent. Read more…

2014 Data Science Salary Survey

Salary insights from more than 800 data professionals reveal a correlation between pay and the skills and tools used.

Data is growing: whether in terms of data-driven applications, the diversity of tools, or the actual quantities of data we collect and process, the data space is characterized by expansion. The excitement around data has been tempered in some circles — the first two query completion suggestions for a Google search of “Is data science” are “dead” and “a fad” — but from a practitioner’s perspective, things are looking quite rosy.

In the results of this year’s O’Reilly Media Data Science Salary Survey, we found a median total salary of $98k ($144k for US respondents only). The 816 data professionals in the survey included engineers, analysts, entrepreneurs, and managers (although almost everyone had some technical component in their role).

Why the high salaries? While the demand for data applications has increased rapidly, the number of people who can set up the systems and perform advanced analytics has grown much more slowly. Newer tools such as Hadoop and Spark naturally have even fewer expert users, and correspondingly we found that users of these tools command particularly high salaries. Read more…

Why the data center needs an operating system

It’s time for applications — not servers — to rule the data center.

Developers today are building a new class of applications. These applications no longer fit on a single server, but instead run across a fleet of servers in a data center. Examples include analytics frameworks like Apache Hadoop and Apache Spark, message brokers like Apache Kafka, key-value stores like Apache Cassandra, as well as customer-facing applications such as those run by Twitter and Netflix.

These new applications are more than applications: they are distributed systems. Just as it became commonplace for developers to build multithreaded applications for single machines, it’s now becoming commonplace for developers to build distributed systems for data centers.

But it’s difficult for developers to build distributed systems, and it’s difficult for operators to run distributed systems. Why? Because we expose the wrong level of abstraction to both developers and operators: machines.
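
To make the abstraction gap concrete, here is a toy sketch; both functions are hypothetical stand-ins (this is not the Mesos API or any real scheduler), but they capture the difference between handing developers machines and handing them the data center as a whole.

```python
# Toy contrast between machine-level and cluster-level abstractions.
# Both functions are hypothetical stand-ins, not a real scheduler API.

def deploy_to_machine(host: str, binary: str, memory_gb: int) -> None:
    # Machine-level: a human picks the host, sizes it, and babysits it.
    print(f"copying {binary} to {host}, reserving {memory_gb} GB")

def deploy_to_cluster(binary: str, instances: int, cpus: int, memory_gb: int) -> None:
    # Cluster-level: declare what the application needs; a scheduler
    # decides where tasks run and restarts them when machines fail.
    print(f"requesting {instances} x ({cpus} CPU, {memory_gb} GB) for {binary}")

deploy_to_machine("server-42.dc1", "analytics-worker", memory_gb=8)
deploy_to_cluster("analytics-worker", instances=50, cpus=2, memory_gb=8)
```

Read more…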

Decoding bitcoin and the blockchain

Introducing Bitcoin & the Blockchain: An O’Reilly Radar Summit

When the creators of bitcoin solved the “double spend” problem in a decentralized manner, they introduced techniques that have implications far beyond digital currency. Our newly announced one-day event — Bitcoin & the Blockchain: An O’Reilly Radar Summit — is in line with our tradition of highlighting applications of developments in computer science. Financial services have long relied on centralized solutions, so in many ways, products from this sector have become canonical examples of the developments we plan to cover over the next few months. But many problems that require an intermediary are being reexamined with techniques developed for bitcoin.

How do you get multiple parties in a transaction to trust each other without an intermediary? In the case of a digital currency like bitcoin, decentralization means reaching consensus over an insecure network. As Mastering Bitcoin author Andreas Antonopoulos noted in an earlier post, several innovations lie at the heart of what makes bitcoin disruptive:

“Bitcoin is a combination of several innovations, arranged in a novel way: a peer-to-peer network, a proof-of-work algorithm, a distributed timestamped accounting ledger, and an elliptic-curve cryptography and key infrastructure. Each of these parts is novel on its own, but the combination and specific arrangement was revolutionary for its time and is beginning to show up in more innovations outside bitcoin itself.”
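
For readers who haven’t seen it, the proof-of-work idea in that list boils down to a brute-force search for a hash below a target. Here is a deliberately simplified Python sketch; real bitcoin mining double-SHA-256-hashes a binary block header and compares against a numeric difficulty target, but the structure is the same.

```python
import hashlib

def proof_of_work(block_data: bytes, difficulty: int = 4) -> int:
    """Simplified bitcoin-style mining: try nonces until the hash of
    (data + nonce) starts with `difficulty` hex zeros. Each extra zero
    multiplies the expected work by 16, which is what secures the ledger."""
    target_prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce
        nonce += 1

nonce = proof_of_work(b"example block contents")
print("found nonce:", nonce)  # verifying takes one hash; finding it took many
```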

Read more…
