Power of the platforms

Uncertainty is a feature, not a bug.

Image: Southern Lights, CC BY 2.0, NASA's Earth Observatory, via Wikimedia Commons (http://commons.wikimedia.org/wiki/File:Southern_Lights.jpg).

After decades of work on programming, we finally got a development environment with massive reach and tremendous power. Somehow, though, the web isn’t centered on a comprehensive programming environment. The web succeeded with a (severely) lowest-common-denominator, specification-driven approach that let it grow with time, technology, and multiple communities, across multiple platforms.

Almost two decades ago, I was all excited about Java. Write applets once, run anywhere, with libraries to make sure it all came out the same wherever anywhere might be. Java is still a powerhouse, but it all worked out differently than I expected. Even in Java’s early years, before the Java news was filled with security bulletins, applets felt like a strange mix with their surrounding web pages. Creating an applet demanded that programmers build every detail. Even with Java’s ever-improving libraries, creating a Java applet that did much was an intense experience focused on programming.

Java wasn’t the only comprehensive way to build web apps, of course. Flash demanded programming, but its values always incorporated design, action, and, well, flash, in ways that meshed well with the way people built sites. Flash kept growing and growing before its ecosystem took a fatal hit from the iPhone, as HTML5 offered replacements for some of its key strengths. I mostly notice Flash these days because it asks me to update it regularly and because pages tell me when it’s crashed.

Compared to either of those rich environments, web technology is a tangled mess. The early web was functional but unstyled, with no behavior beyond navigating among pages. That? That would dominate client-side computing? Read more…


Beyond AI: artificial compassion

If what we are trying to build is artificial minds, intelligence might be the smaller, easier part.

Image: "Light of ideas" by Saad Faruque, via Flickr.

When we talk about artificial intelligence, we often make an unexamined assumption: that intelligence, understood as rational thought, is the same thing as mind. We use metaphors like “the brain’s operating system” or “thinking machines,” without always noticing their implicit bias.

But if what we are trying to build is artificial minds, we need only look at a map of the brain to see that in the domain we’re tackling, intelligence might be the smaller, easier part.

Maybe that’s why we started with it.

After all, the rational part of our brain is a relatively recent add-on. Setting aside unconscious processes, most of our gray matter is devoted not to thinking, but to feeling.

There was a time when we deprecated this larger part of the mind, as something we should either ignore or, if it got unruly, control.

But now we understand that, as troublesome as they may sometimes be, emotions are essential to being fully conscious. For one thing, as neurologist Antonio Damasio has demonstrated, we need them in order to make decisions. A certain kind of brain damage leaves the intellect unharmed, but removes the emotions. People with this affliction tend to analyze options endlessly, never settling on a final choice. Read more…


What is DevOps (yet again)?

Empathy, communication, and collaboration across organizational boundaries.

Cropped image: "Kilobot robot swarm" by asuscreative, own work, CC BY-SA 4.0, via Wikimedia Commons (http://commons.wikimedia.org/wiki/File:Kilobot_robot_swarm.JPG).

I might try to define DevOps as the movement that doesn’t want to be defined. Or as the movement that wants to evade the inevitable cargo-culting that goes with most technical movements. Or the non-movement that’s resisting becoming a movement. I’ve written enough about “what is DevOps” that I should probably be given an honorary doctorate in DevOps Studies.

Baron Schwartz (among others) thinks it’s high time to have a definition, and that only a definition will save DevOps from an identity crisis. Without a definition, it’s subject to the whims of individual interest groups, and ultimately might become a movement that’s defined by nothing more than the desire to “not be like them.” Dave Zwieback (among others) says that the lack of a definition is more of a blessing than a curse, because it “continues to be an open conversation about making our organizations better.” Both have good points. Is it possible to frame DevOps in a way that preserves the openness of the conversation, while giving it some definition? I think so.

DevOps started as an attempt to think long and hard about the realities of running a modern web site, a problem that has only gotten more difficult over the years. How do we build and maintain critical sites that are increasingly complex, have stringent requirements for performance and uptime, and support thousands or millions of users? How do we avoid the “throw it over the wall” mentality, in which an operations team gets the fallout of the development teams’ bugs? How do we involve developers in maintenance without compromising their ability to release new software?

Read more…


Who should and should not be talking to your fridge?

A reflection on the social impacts of smarter hardware in the physical world.

Editor’s note: This is part of a series of posts exploring privacy and security issues in the Internet of Things. The series will culminate in a free webcast by the series author, Dr. Gilad Rosner: Privacy and Security Issues in the Internet of Things, on February 11, 2015 — reserve your spot today.

Image: "My Amazing Fridge" by Antonio Roberts, via Flickr.

Here’s the scenario today: I am out of milk, and my refrigerator sits there, mute and unsympathetic. Sometime in the ’90s, I was promised a fridge that would call the store when I was out of milk, and it would then be delivered while I, ignorant of my dearth of dairy, went about my business. Apparently such predictions were off. Someone forgot to tell my fridge manufacturer to put sensors, software, and networking gear into their products.

But there is hope. The dumb objects in the analog physical world are being slowly upgraded. From the very sexy telemetry systems in new BMWs to the very unsexy pallets of lettuce in a warehouse, Things That Heretofore Were Blind and Mute are getting eyes, ears, mouths, and in some cases, brains. This is evolution, not revolution, and while it is still slow-moving, it’s beneficial to reflect on some of the social impacts of smarter hardware in the physical world. Read more…


A human-centered approach to data-driven design

The O'Reilly Radar Podcast: Arianna McClain on humanizing data-driven design, and Dirk Knemeyer on design in emerging tech.

This week on the O’Reilly Radar Podcast, O’Reilly’s Roger Magoulas talks with Arianna McClain, a senior hybrid design researcher at IDEO, about storytelling through data; the interdependent nature of qualitative and quantitative data; and the human-centered, data-driven design approach at IDEO.


In their interview, Magoulas noted that in our research at O’Reilly, we’ve been talking a lot about the importance of the social science design element in getting the most out of data. McClain emphasized the importance of storytelling through data at IDEO and described IDEO’s human-centered approach to data-driven design:

“IDEO really believes in staying and remaining human-centered throughout the data journey. Starting off with, how might we measure something, how might we measure a behavior. We don’t sit in a room and come up with an algorithm or come up with a question. We start by talking to people. … We’re trying to build measures and survey questions to understand at scale how people make decisions. … IDEO remains data-driven to how we analyze and synthesize our findings. When we’re given a large data set, we don’t analyze it and write a report and give it to people and say, ‘This is the direction we think you should go.’

“Instead, we look at segmentations in the data, and stories in the data, and how the data clusters. Then we go back, and we try to find people who are representative of that cluster or that segmentation. The segmentations, again, are not based on demographic variables. They are based on needs and insights that we heard in our qualitative research. … What we’ve recognized is that something that seems so clear in the analysis is often very nuanced, and it can inform our design.”
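To make that workflow concrete, here is a minimal sketch of the cluster-then-interview pattern McClain describes: cluster (hypothetical) needs-based survey responses, then pick the respondent closest to each cluster center as the person a team would go back and talk to. The data, the choice of k-means, and all names are illustrative assumptions, not IDEO's actual tooling.

    # Minimal sketch of the "cluster, then find representative people" workflow.
    # Assumes survey responses are already encoded as numeric feature vectors;
    # the synthetic data and k-means choice are illustrative assumptions only.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Hypothetical needs-based survey answers (e.g., 1-5 Likert responses).
    responses = rng.integers(1, 6, size=(500, 8)).astype(float)

    X = StandardScaler().fit_transform(responses)
    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

    # For each segment, pick the respondent closest to the cluster center:
    # the person you would seek out for follow-up qualitative interviews.
    for k, center in enumerate(kmeans.cluster_centers_):
        members = np.where(kmeans.labels_ == k)[0]
        closest = members[np.argmin(np.linalg.norm(X[members] - center, axis=1))]
        print(f"segment {k}: {len(members)} respondents, representative id {closest}")

The point of the sketch is the last loop: the segmentation itself is only half the output; the other half is a short list of real people who embody each segment, which is where the qualitative work resumes.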

Read more…


The evolution of GraphLab

The O'Reilly Data Show Podcast: Carlos Guestrin on the early days of GraphLab and the evolution of GraphLab Create.

Editor’s note: Carlos Guestrin will be part of the team teaching Large-scale Machine Learning Day at Strata + Hadoop World in San Jose. Visit the Strata + Hadoop World website for more information on the program.

I only really started playing around with GraphLab when the companion project GraphChi came onto the scene. By then I’d heard from many avid users and admired how their user conference instantly became a popular San Francisco Bay Area data science event. For this podcast episode, I sat down with Carlos Guestrin, co-founder/CEO of Dato, a start-up launched by the creators of GraphLab. We talked about the early days of GraphLab, the evolution of GraphLab Create, and what he’s learned from starting a company.

MATLAB for graphs

Guestrin remains a professor of computer science at the University of Washington, and GraphLab originated when he was still a faculty member at Carnegie Mellon. GraphLab was built by avid MATLAB users who needed to do large-scale graph computations to demonstrate their research results. Guestrin shared some of the backstory:

“I was a professor at Carnegie Mellon for about eight years before I moved to Seattle. A couple of my students, Joey Gonzales and Yucheng Low, were working on large-scale distributed machine learning algorithms, especially with things called graphical models. We tried to implement them to show off the theorems that we had proven. We tried to run those things on top of Hadoop, and it was really slow. We ended up writing those algorithms on top of MPI, which is a high-performance computing library, and it was just a pain. It took a long time, it was hard to reproduce the results, and the impact it had on us is that writing papers became a pain. We wanted a system for my lab that allowed us to write more papers more quickly. That was the goal. In other words, so they could implement these machine learning algorithms more easily, more quickly, specifically on graph data, which is what we focused on.”
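For readers unfamiliar with the style of computation Guestrin is describing, the toy sketch below shows the kind of vertex-centric, iterate-over-neighbors work (here, a plain single-machine PageRank) that graph-parallel systems like GraphLab were built to run efficiently at scale. It is purely illustrative: the graph is made up, and this is not GraphLab's actual API or the lab's research algorithms.

    # Toy illustration of vertex-centric graph computation: each iteration,
    # every node gathers contributions from its in-neighbors and updates its
    # value. Single-machine PageRank sketch; not GraphLab's API.
    edges = [(0, 1), (0, 2), (1, 2), (2, 0), (3, 2)]   # hypothetical directed graph
    out_degree, in_neighbors = {}, {}
    for src, dst in edges:
        out_degree[src] = out_degree.get(src, 0) + 1
        in_neighbors.setdefault(dst, []).append(src)

    nodes = {n for e in edges for n in e}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    damping = 0.85

    for _ in range(20):                                # synchronous iterations
        new_rank = {}
        for n in nodes:
            # "Gather" from in-neighbors, then "apply" the updated rank.
            incoming = sum(rank[m] / out_degree[m] for m in in_neighbors.get(n, []))
            new_rank[n] = (1 - damping) / len(nodes) + damping * incoming
        rank = new_rank

    print(rank)

On a laptop this loop is trivial; the hard part Guestrin's group ran into was doing the same gather-and-update pattern across billions of edges on a cluster, which is the gap GraphLab set out to close.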

Read more…
