The Internet of Things will happily march along with lousy privacy and security, and we will be the poorer for it.
This refrain can be heard at IoT conferences, in opinion pieces in the press and in normative academic literature. If we don’t “get it right,” then consumers won’t embrace the IoT and all of the wonderful commercial and societal benefits it portends.
This is false.
It’s a nice idea, imagining that concern for privacy and security will curtail or slow technological growth. But don’t believe it: the Internet of Things will develop whether or not privacy and security are addressed. Economic imperative and technological evolution will propel the IoT, with its tremendous potential for increased monitoring, forward; citizen concern plays only a minor role in operationalizing privacy. Certainly, popular discourse on the subject is important, but developers, designers, policy-makers and manufacturers are the key actors in embedding privacy architectures within new connected devices.
Finding a gentle entry to a big space
Those aren’t the only barriers, though.
The O’Reilly Radar Podcast: Paco Nathan and Jesse Anderson on the evolution of the data training landscape.
Subscribe to the O’Reilly Radar Podcast to track the technologies and people that will shape our world in the years to come.
Their discussion focuses on the training landscape in the big data ecosystem, the teaching techniques and content they choose, and some expected future trends.
Here are a few snippets from their chat:
Training vs PowerPoint slides
Anderson: “Often, when you have a startup and somebody says, ‘Well, we need some training,’ what will usually happen is one of the software developers will say, ‘OK, I’ve done some training in the past and I’ll put together some PowerPoints.’ The differences between a training thing and doing some PowerPoints, like at a meetup, is that a training actually has to have hands-on exercises. It has to have artifacts that you use right there in class. You actually need to think through, these are concepts, these are things that the person will need to be successful in that project. It really takes a lot of time and it takes some serious expertise and some experience in how to do that.”
Nathan: “Early on, you would get some committer to go out and do a meetup, maybe talk about an extension to an API or whatever they were working on directly. If there was a client firm that came up and needed training, then they’d peel off somebody. As it evolved, that really didn’t work. That kind of model doesn’t scale. The other thing too is, you really do need people who understand instructional design, who really understand how to manage a classroom. Especially when it gets to any size, it’s not just an afterthought for an engineer to handle.”
The O’Reilly Data Show podcast: Dean Wampler on bounded and unbounded data processing and analytics.
Subscribe to the O’Reilly Data Show Podcast to explore the opportunities and techniques driving big data and data science.
I first found myself having to learn Scala when I started using Spark (version 0.5). Prior to Spark, I’d perused books on Scala but never found an excuse to delve into it. In the early days of Spark, Scala was a necessity — I quickly came to appreciate it and have continued to use it enthusiastically.
For this Data Show Podcast, I spoke with O’Reilly author and Typesafe’s resident big data architect Dean Wampler about Scala and other programming languages, the big data ecosystem, and his recent interest in real-time applications. Dean has years of experience helping companies with large software projects, and over the last several years, he’s focused primarily on helping enterprises design and build big data applications.
Here are a few snippets from our conversation:
Apache Mesos & the big data ecosystem
It’s a very nice capability [of Spark] that you can actually run it on a laptop when you’re developing or working with smaller data sets. … But, of course, the real interesting part is to run on a cluster. You need some cluster infrastructure and, fortunately, it works very nicely with YARN. It works very nicely on the Hadoop ecosystem. … The nice thing about Mesos over YARN is that it’s a much more flexible, capable resource manager. It basically treats your cluster as one giant machine of resources and gives you that illusion, ignoring things like network latencies and stuff. You’re just working with a giant machine and it allocates resources to your jobs, multiple users, all that stuff, but because of its greater flexibility, it cannot only run things like Spark jobs, it can run services like HDFS or Cassandra or Kafka or any of these tools. … What I saw was there was a situation here where we had maybe a successor to YARN. It’s obviously not as mature an ecosystem as the Hadoop ecosystem but not everybody needs that maturity. Some people would rather have the flexibility of Mesos or of solving more focused problems.
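Wampler’s description of Mesos — pooling a cluster’s machines into “one giant machine” and handing out resources to jobs and long-running services alike — can be illustrated with a toy allocator. This is purely a sketch of the abstraction (real Mesos uses a two-level resource-offer model through its scheduler API, not this interface); the class and job names are hypothetical:

```python
# Toy sketch of the "one giant machine" abstraction Wampler describes:
# physical nodes are pooled into a single logical resource pool, and both
# batch jobs (Spark) and services (Cassandra, Kafka) draw from that pool.
# Illustrative only -- not the actual Mesos API.

class PooledCluster:
    def __init__(self, nodes):
        # nodes: list of (cpus, mem_gb) tuples, one per physical machine
        self.free_cpus = sum(cpus for cpus, _ in nodes)
        self.free_mem = sum(mem for _, mem in nodes)
        self.allocations = {}

    def allocate(self, name, cpus, mem_gb):
        """Grant resources if the pooled cluster can satisfy the request."""
        if cpus <= self.free_cpus and mem_gb <= self.free_mem:
            self.free_cpus -= cpus
            self.free_mem -= mem_gb
            self.allocations[name] = (cpus, mem_gb)
            return True
        return False  # not enough free resources anywhere in the pool

# Three machines appear as one pool of 40 CPUs / 160 GB.
cluster = PooledCluster([(16, 64), (16, 64), (8, 32)])
print(cluster.allocate("spark-job", 24, 96))    # True: spans machines
print(cluster.allocate("cassandra", 32, 128))   # False: only 16 CPUs left
```

The point of the sketch is that a job’s request is checked against the pooled totals, not against any single machine — the illusion Wampler credits Mesos with providing (while noting that real schedulers must also account for network latency and locality).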
Should algorithmic pricing be the norm rather than the exception?
Request an invitation to Next:Economy, our event aiming to shed light on the transformation in the nature of work now being driven by algorithms, big data, robotics, and the on-demand economy.
Companies want a bigger share of the pie than their competitors, capital wants a bigger share than labor (and labor wants it right back), countries want a bigger share than their rivals, but true wealth comes when we make a bigger pie for everyone. Well-run markets are a proven way to do that.
Surge pricing is one of Uber’s most interesting labor innovations. Faced with the problem that they don’t have enough drivers in particular neighborhoods or at particular hours, they use market mechanisms to bring more drivers to those areas. If they need more drivers, they raise the price to consumers until enough drivers are incented by the possibility of higher earnings to fill the demand. Pricing is not set arbitrarily. It is driven algorithmically by pickup time — the goal is to have enough cars on the road that a passenger will get a car within 3–5 minutes. (Lyft’s Prime Time pricing is a similar system.) Uber keeps raising the price until the pickup time falls into the desired range.
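The feedback loop described above — keep raising the multiplier until enough drivers come online that pickup times fall into the 3–5 minute target — can be sketched as a simple simulation. Everything here is my illustration, not Uber’s actual algorithm: the demand/supply model, step size, and cap are all invented for the example:

```python
# Toy surge-pricing feedback loop (illustrative; not Uber's real system).
# The price multiplier rises until estimated pickup time falls into the
# 3-5 minute target window described above.

def estimated_pickup_minutes(drivers, demand):
    """Crude model: more drivers per unit of demand means faster pickups."""
    if drivers == 0:
        return float("inf")
    return 10.0 * demand / drivers

def drivers_attracted(base_drivers, multiplier):
    """Assume higher potential earnings draw more drivers onto the road."""
    return int(base_drivers * multiplier)

def surge_multiplier(base_drivers, demand, max_wait=5.0, step=0.25, cap=5.0):
    multiplier = 1.0
    while multiplier < cap:
        drivers = drivers_attracted(base_drivers, multiplier)
        if estimated_pickup_minutes(drivers, demand) <= max_wait:
            return multiplier          # enough cars: pickup time in range
        multiplier += step             # not enough cars: raise the price
    return cap

# With 10 drivers nearby and heavy demand, the price climbs to 4x.
print(surge_multiplier(base_drivers=10, demand=20))   # 4.0
```

The sketch also makes the article’s criticism concrete: the multiplier the rider sees is an output of current supply and demand, so there is no way to know the fare in advance — the uncertainty that undercuts the promise of predictable, on-demand transportation.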
This is clearly an imperfect system. In one case, surge pricing gouged customers during a crisis, and even in more prosaic situations like bad weather, the end of a sporting event, or a holiday evening, customers can see enormous price hikes. This uncertainty undercuts the fundamental promise of the app, of cheap, on-demand transportation. If you don’t know how much the ride will cost, can you rely on it?
The O’Reilly Solid Podcast: The New York Times’ deputy technology editor talks about technology, people, and power.
Subscribe to the O’Reilly Solid Podcast for insight and analysis about the Internet of Things and the worlds of hardware, software, and manufacturing.
In our new episode of the Solid Podcast, David Cranor and I talk with New York Times deputy technology editor Quentin Hardy. Hardy recorded with us just after visiting Facebook’s Aquila drone project, which promises to extend Internet access to remote parts of the globe — and to advance a slew of aerospace and communication technologies through open sourcing.
Projects like Aquila can challenge traditional government, but have their own tendency to create new mechanisms for control. “What’s clear is that existing systems of power will morph or collapse in decades to come because of these new technologies,” Hardy says, noting the contradiction that, in today’s world, “people have never been more empowered, and they’ve never been so controlled and repressed.”
We also talk about what’s happening in Shenzhen, China (which has been called “China’s Silicon Valley”), and the hardware hub’s dynamic mix of entrepreneurship, knockoffs, and innovation.