"Interaction Design and Connected Devices" entries
The next big thing sits between tech’s push and consumers’ pull
Pilgrim Beart on AlertMe and the challenges and promise of the IoT.
I recently sat down with Pilgrim Beart, co-founder of AlertMe, which was sold to British Gas for $100 million. Beart is a computer engineer and founder of several startups, including his latest venture, 1248.
Identifying the gap between technology and consumers: How AlertMe was founded
I asked Beart about the early thinking that led him and his co-founder, Adrian Critchlow, to create AlertMe. The focus seems simple — identify user needs. Beart explained:
I co-founded AlertMe with Adrian Critchlow. He was from more of a Web services background … My background was more embedded technology. Over a series of lunches in Cambridge, where we both lived at the time, we just got to discussing two things, really. One was the way that technology was going — technology push: what changes were happening that made certain things inevitable. The other was consumer pull: what were the gaps that technology wasn’t really addressing?
To some extent we were discussing at quite a high level the intersection of those two, perhaps not quite in that rational way, but as we talked about things we were interested in, that’s essentially what we were doing. We were triangulating between the technology push and the consumer pull, and trying to spot things that would be essentially inevitable because of those two forces. That led us to thinking about the connected home platform and what the killer apps for the connected home could be. And isn’t it strange how, if you compare the home to the car, for example, cars have a large number of computers in them, and the computers all work together seamlessly and invisibly to keep you safe, keep you secure, save you energy, and so on?
In the home, you have a similar number of computers, but they’re not talking to each other, and as a result, it’s really far from ideal. You have no idea what’s going on in your home most of the time, and it’s not energy efficient, it’s not secure, etc. We saw a huge opportunity there, and we saw the potential for some technological advances to help address those problems.
A new dawn of car tech: customization through software, not hardware
Three ways entrepreneurs can bring the rate of progress we’ve seen in computing and communication to car tech.
Skeptics will cite the arduous three-to-six-year automotive design cycles, onerous qualification requirements, and thin margins that plague the automotive value chain. But consider the industry’s history: by attracting the greatest engineers and entrepreneurs, the car business of the early 20th century took us from horseback to stylish coupes within a generation, soon to be followed by tire-smoking muscle cars. Cars built during and after the late ’80s pollute less over their lifetimes than their predecessors did parked. Sound like Moore’s Law to you? Read more…
What are iBeacons?
How to get started with proximity sensors.
Editor’s note: This is the first post in a series looking at beacon technology and the burgeoning beacon ecosystem.
Apple galvanized the whole area of proximity-enabled applications and services when it launched iBeacon at WWDC in June 2013. When iOS 7 launched later that year, it was the first time support for a variety of proximity use cases was both designed into — and available at scale on — a mobile platform.
Since then, hundreds of companies have become involved in different ways in the iBeacon ecosystem — what I call the “Beacosystem.” These companies are making beacon hardware, offering proximity/iBeacon software platforms, creating shopper marketing platforms, using beacons to deliver signals for location analytics and mobile marketing solutions, powering indoor location services, and more.
This post introduces proximity and iBeacon, covers some background on how it works, and explains why there is some excitement and hype around the uses of proximity in various verticals, including retail.
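Under the hood, an iBeacon is just a Bluetooth Low Energy advertisement carrying an Apple manufacturer-specific payload: a 16-byte proximity UUID, a big-endian major and minor number, and a signed TX-power calibration byte. As a rough sketch of how a scanner might decode one (the function name is illustrative, and the sample frame uses the UUID from Apple's AirLocate demo), assuming you already have the 25-byte manufacturer data from an advertisement:

```python
import struct
import uuid

APPLE_COMPANY_ID = 0x004C  # appears little-endian (0x4C 0x00) in the packet

def parse_ibeacon(mfg_data: bytes):
    """Decode the manufacturer-specific data of a BLE advertisement.

    An iBeacon frame is: Apple company ID (2 bytes, little-endian),
    beacon type 0x02, payload length 0x15, a 16-byte proximity UUID,
    big-endian major and minor, and a signed TX-power byte (the measured
    signal strength at 1 m, used to estimate distance). Returns None if
    the payload is not an iBeacon frame.
    """
    if len(mfg_data) != 25:
        return None
    company, beacon_type, length = struct.unpack_from("<HBB", mfg_data, 0)
    if company != APPLE_COMPANY_ID or beacon_type != 0x02 or length != 0x15:
        return None
    proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor, tx_power = struct.unpack_from(">HHb", mfg_data, 20)
    return {"uuid": str(proximity_uuid), "major": major,
            "minor": minor, "tx_power": tx_power}

# Example frame: AirLocate demo UUID, major 1, minor 2, measured power -59 dBm.
frame = bytes.fromhex(
    "4c000215" "e2c56db5dffb48d2b060d0f5a71096e0" "0001" "0002" "c5"
)
print(parse_ibeacon(frame))
```

Apps typically don't parse these bytes themselves — on iOS, Core Location surfaces the UUID/major/minor directly — but the frame layout explains why a beacon can only broadcast identity, not content: everything it "says" fits in those 21 payload bytes.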
Design’s role is to bridge context gaps
Andrew Hinton on making context understandable, smart devices, and programming literacy.
I sat down with Andrew Hinton, an information architect at The Understanding Group and author of the recently released O’Reilly book Understanding Context. Our conversation included a discussion of information architecture’s role in the context of the IoT, the complexities of context, and the well-debated “everyone should learn to code” argument.
Context, information architecture, and experience design
Information architecture (IA) has always been a critical part of creating great products and services, and many would argue that, until now, it hasn’t been given the attention or respect it deserves. The need for thoughtful IA is increasing as we enter the multimodal world of IoT. Whether you call yourself an Information Architect or Designer, you need to care about context. Hinton offers up this hidden motivation for writing Understanding Context:
“I’ll confess, the book is a bit of a Trojan horse to kind of get people to think about information architecture differently than maybe the way they assume they should think about it.”
I followed up with Hinton via email for a bit more on how we need to view IA:
“People tend to assume IA is mainly about arranging objects, the way we arrange cans in a cupboard or books in a library. That’s part of it, but the Internet has made it so that we co-exist in places made of semantic and digital information. So when we create or change the labels, relationships, and rules of those places, we change their environment. Not just on screens, but now outside of screens as well. And, to me, the central challenge of that work is making context understandable.”
Design’s return to artisan, at scale
The O'Reilly Radar Podcast: Matt Nish-Lapidus on design's circular evolution, and designing in the post-Industrial era.
In this week’s Radar Podcast episode, Jon Follett, editor of Designing for Emerging Technologies, chats with Matt Nish-Lapidus, partner and design director at Normative. Their discussion circles around the evolution of design, the characteristics of post-Industrial design, and the aesthetic intricacies of designing for networked systems. Also note that Nish-Lapidus will present a free webcast on these topics on March 24, 2015.
Post-Industrial design relationships
Nish-Lapidus shares an interesting take on design evolution, from pre-Industrial to post-Industrial times, through the lens of eyeglasses. He uses eyeglasses as a case study, he says, because they’re a piece of technology that’s been used through a broad span of history, longer than many of the things we still use today. Nish-Lapidus walks us through the pre-Industrial era — so, Medieval times through about the 1800s — where a single craftsperson designed one product for a single individual; through the Industrial era, where mass-production took the main stage; to our modern post-Industrial era, where embedded personalization capabilities are bringing design almost full circle, back to a focus on the individual user:
“Once we move into this post-Industrial era, which we’re kind of entering now, the relationship’s starting to shift again, and glasses are a really interesting example. We go from having a single pair of glasses made for a single person, hand-made usually, to a pair of glasses designed and then mass-manufactured for a countless number of people, to having a pair of glasses that expresses a lot of different things. On one hand, you have something like Google Glass, which is still mass-produced, but the glasses actually contain embedded functionality. Then we also have, with the emergence of 3D printing and small-scale manufacturing, a return to a little bit of that artisan, one-to-one relationship, where you could get something that someone’s made just for you.
“These post-Industrial objects are more of an expression of the networked world in which we now live. We [again] have a way of building relationships with individual crafts-people. We also have objects that exist in the network themselves, as a physical instantiation of the networked environment that we live in.”
Your product can’t break its promise
Consumers are more aware of connected devices, but they need to be convinced a product will do something valuable for them.
Editor’s note: This is an excerpt by Claire Rowland from our upcoming book Designing Connected Products. This excerpt is included in our curated collection of chapters from the O’Reilly Design library, Designing for the Internet of Things.

In 1962, the sociologist Everett Rogers introduced the idea of the technology adoption lifecycle curve, based on studies in agriculture. Rogers proposed that technologies are adopted in successive phases by different audience groups, following a bell curve. The theory has gained wide traction in the technology industry, and successive thinkers have built upon it, such as the organizational consultant Geoffrey Moore in his book Crossing the Chasm.
In Rogers’ model, the early market for a product is composed of innovators (or technology enthusiasts) and early adopters. These people are inherently interested in the technology and willing to invest a lot of effort in getting the product to work for them. Innovators, especially, might be willing to accept a product with flaws as long as it represents a significant or interesting new idea.
The next two groups — the early and late majority — represent the mainstream market. Early majority users might take a chance on a new product if they have seen it used successfully by others whom they know personally. Late majority users are skeptical and will adopt a product only after seeing that the majority of other people are already doing so. Both groups are primarily interested in what the product can do for them, unwilling to invest significant time or effort in getting it to work, and intolerant of flaws. Different individuals can be in different groups for different types of product. A consumer could be an early adopter of video game consoles, but a late majority customer for microwave ovens. Read more…
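Rogers drew the boundaries between these groups at standard deviations of adoption time around the mean, which is where the familiar percentages (roughly 2.5% innovators, 13.5% early adopters, 34% for each majority, 16% laggards) come from. A quick sketch reproduces them from the normal distribution — note Rogers rounds the exact areas slightly:

```python
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Rogers' cutoffs: innovators adopt more than 2 sd earlier than the mean,
# early adopters between 2 and 1 sd early, each majority within 1 sd,
# and laggards more than 1 sd late.
categories = {
    "innovators":     normal_cdf(-2.0),
    "early adopters": normal_cdf(-1.0) - normal_cdf(-2.0),
    "early majority": normal_cdf(0.0) - normal_cdf(-1.0),
    "late majority":  normal_cdf(1.0) - normal_cdf(0.0),
    "laggards":       1.0 - normal_cdf(1.0),
}
for name, share in categories.items():
    print(f"{name:15s} {share:6.1%}")
```

The exact areas are 2.3%, 13.6%, 34.1%, 34.1%, and 15.9%; Rogers' rounded figures are the ones usually quoted.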
Designing the dynamic human-robot relationship
Scott Stropkay and Bill Hartman on human-robot interaction, choice architecture, and developing degrees of trust.
Jonathan Follett, editor of Designing for Emerging Technologies, recently sat down with Scott Stropkay, founding partner at Essential Design, and Bill Hartman, director of research at Essential Design, both of whom are also contributing authors of Designing for Emerging Technologies. Their conversation centers on the relationship dynamic between humans and robots, and on the ways designers are being stretched in interesting new directions.
Accepting human-robot relationships
Stropkay and Hartman discussed their work with telepresence robots. They shared the inherent challenges of introducing robots in a health care setting, but stressed that there’s tremendous opportunity for improving the health care experience:
“We think the challenges inherent in these kinds of scenarios are fascinating: how you get people to accept a robot in a relationship that you normally have with a person. Let’s say, in a hospital setting — how do you develop acceptance from a team that’s not used to working with a robot as part of their functional team? How do you develop trust in those relationships? How do you engage people both practically and emotionally? How, as this scenario progresses, you bring robots into your home to monitor your recovery is one of the issues we’ve begun to address in our work.
“We’re pursuing other ideas in relation to using smart monitors, in the form of robots and robotically enhanced devices that can help you advance your improvement in behavior change over time … Ultimately, we’re thinking about some of the interesting science that’s happening with robots that you ingest, which can learn about you and monitor you. There’s a world of fascinating issues about what you want to know, how you might want to learn it, who gets access to this information, and how that interface could be designed.”
Design to reflect human values
The O'Reilly Radar Podcast: Martin Charlier on industrial and interaction design, reflecting societal values, and unified visions.
Designing for the Internet of Things is requiring designers and engineers to expand the boundaries of their traditionally defined roles. In this Radar Podcast episode, O’Reilly’s Mary Treseler sat down with Martin Charlier, an independent design consultant and co-founder at raincloud.eu, to discuss the future of interfaces and the increasing need to merge industrial and interaction design in the era of the Internet of Things.
Charlier stressed the importance of embracing the symbiotic nature of interaction design and service design:
“How I got into the Internet of Things is interesting. My degree from Ravensbourne was in a very progressive design course that looked at product, interaction, and service design as one course. For us, it was pretty natural to think of products or services in a very open way. Whether they are connected or not connected didn’t really matter too much, because it was basically understanding that technology is there to build almost anything. It’s really about how you design with that in mind.
“When I was working in industrial design, it became really clear for me how important that is. Specifically, I remember one project working on a built-in oven … In this project, we specifically couldn’t change how you would interact with it. The user interface was already defined, and our task was to define how it looked. It became clear to me that I don’t want to exclude any one area, and it feels really unnatural to design a product but only worry about what it looks like and let somebody else worry about how it’s operated, or vice versa. Products in today’s world, especially, need to be thought about from all of these angles. You can’t really design a coffee maker anymore without thinking about the service that it might plug into or the systems that it connects to. You have to think about all of these things at the same time.”
An Internet of Things that do what they’re told
Our things are getting wired together, and you're not secure if you can't control the destiny of your private information.
The digital world has been colonized by a dangerous idea: that we can and should solve problems by preventing computer owners from deciding how their computers should behave. I’m not talking about a computer that’s designed to say, “Are you sure?” when you do something unexpected — not even one that asks, “Are you really, really sure?” when you click “OK.” I’m talking about a computer designed to say, “I CAN’T LET YOU DO THAT, DAVE” when you tell it to give you root, to let you modify the OS or the filesystem.
Case in point: the cell-phone “kill switch” laws in California and Minnesota, which require manufacturers to design phones so that carriers or manufacturers can push an over-the-air update that bricks the phone without any user intervention, a measure designed to deter cell-phone thieves. Early data suggests the laws are effective in preventing this kind of crime, but at a high and largely needless (and ill-considered) price.
To understand this price, we need to talk about what “security” is, from the perspective of a mobile device user: it’s a whole basket of risks, including the physical threat of violence from muggers; the financial cost of replacing a lost device; the opportunity cost of setting up a new device; and the threats to your privacy, finances, employment, and physical safety from having your data compromised. Read more…