Jonathan Follett, editor of Designing for Emerging Technologies, recently sat down with Scott Stropkay, founding partner at Essential Design Service, and Bill Hartman, director of research at Essential Design Service, both of whom are also contributing authors to Designing for Emerging Technologies. Their conversation centers on the relationship dynamic between humans and robots, and on the ways designers are being stretched in interesting new directions.
Accepting human-robot relationships
Stropkay and Hartman discussed their work with telepresence robots. They described the inherent challenges of introducing robots into a health care setting, but stressed that there’s tremendous opportunity for improving the health care experience:
“We think the challenges inherent in these kinds of scenarios are fascinating: how you get people to accept a robot in a relationship that you normally have with a person. Take a hospital setting: how do you develop acceptance from a team that’s not used to working with a robot as part of their functional team? How do you develop trust in those relationships? How do you engage people both practically and emotionally? And how, as this scenario progresses, you bring robots into your home to monitor your recovery is one of the issues we’ve begun to address in our work.
“We’re pursuing other ideas in relation to using smart monitors, in the form of robot and robotic-enhanced devices that can help you advance your behavior change over time … Ultimately, we’re thinking about some of the interesting science that’s happening with robots that you ingest, which can learn about you and monitor you. There’s a world of fascinating issues around what you want to know, how you might want to learn it, who gets access to this information, and how that interface could be designed.”
Designing for human-robot relationships
Follett asked Hartman to talk about a set of design principles for human-robot interaction, based on Jakob Nielsen’s usability heuristics:
“When Nielsen came up with that framework, I don’t think it was necessarily directed toward artificial intelligence a few decades down the road, or even toward human-robot interactions, but it has proven to be quite valuable to user experience designers over and over again, as well as to usability testing experts, in terms of things to look for.
“We can use those same principles and look for implications of robots serving our higher-order needs, as we move from serving needs related to convenience or performance to actually supporting our decision making with emerging technologies, moving from being able to do anything, or be magic, in terms of the user interface to being more human in the user interface. That point about making an emotional appeal to humans, as well as a logical and credible appeal, to develop our degrees of trust is really critical. Where my kids go to school, they have a motto of ‘freedom and responsibility.’ As robots take on these higher-order functions, they need to prove to their users that they are responsible enough to be given higher degrees of freedom in how they operate and how they support our decision making.”
Choice architecture in robotic design
Hartman described a critical part of the relationship between humans and robots: choice. We’ll need to find a balance in how choices are initiated, whether through humans or our robotic agents:
“I was listening to Barry Schwartz, the author of The Paradox of Choice, being interviewed recently on Fresh Air, and he described mutual funds that have been around a few decades. They automate the process of asset allocation and diversified investments for 401(k) plan participants, but when the number of mutual funds becomes too great, participation in 401(k) plans actually drops off. As robots take on higher degrees of autonomy and freedom in guiding our decision making, there are certain assumptions they will need to make along the way so we aren’t overwhelmed with the number of choices available to us and can make meaningful choices within realistic constraints. This notion of choice architecture, and, as designers, choreographing these choice architectures in some of the algorithms that might go into the robotics, is really, really key.”
Humans becoming robotic
Hartman and Stropkay discussed the changes and opportunities as humans look to improve their lives with robotics:
“It’s interesting because there are people we interact with today who are becoming robotic. Anybody who wears a cochlear implant, either to recover hearing or to hear for the first time: these devices are robotic. There are companies offering retinal implants that provide normal and even superhuman sensory inputs through your eyes that your brain can now interpret. There are people working on other kinds of sensors that let you understand your environment on levels beyond what a human can currently perceive with their normal sensory array.
“It’s fascinating how much is happening, and how little most of us appreciate it, because we’re not interacting with these people in these little communities, in these specialty areas. We’ll be seeing more of that moving into our lives and into our world. It’s going to get interesting as it relates to having a casual conversation with somebody who can, for example, see infrared in the room that you can’t see, and how they might use that information in one way or another.”
You can listen to the entire interview on our SoundCloud stream.