What experts do: Curse of the intermediate

A framework for what separates those whose skills continue to build from those who stall out no matter how much they try.

We all know the Curse of Expertise — that thing that makes most experts awful at imagining what it’s like to be a novice. The Curse of Expertise makes tech editors weep and readers seethe. “Where’s the empathy?!” we say, as if the expert had a conscious choice. But they mostly don’t. The Curse of Expertise is not a problem for which MOAR EMPATHY is the solution. Experts don’t lack empathy; they lack the security clearance to the part of their brain where their cognitive biases live. Subconscious cognitive biases. And those biases don’t just make us (experts) fail at predicting the struggle of novices, they can also make us less likely to see novel solutions to well-worn problems.

But given a choice to suddenly be an expert or a novice, we’d pick Curse of the Sucks-To-Be-Me Expert over Curse of the I-Suck-At-This Novice. There’s a third curse, though. The mastery curve is, of course, not binary, but a continuum from first-time to Jiro-Dreams-Of-Sushi. And there in the middle? The Curse of the Intermediate. The Curse of the Intermediate is the worst because it’s the place where hopes and dreams of expertise go to die. The place where even the most patient practicer eventually believes they just don’t have what it takes. Read more…


Four short links: 2 March 2015

Onboarding UX, Productivity Vision, Bad ML, and Lifelong Learning

  1. User Onboarding Teardowns — the UX of new users. (via Andy Baio)
  2. Microsoft’s Productivity Vision — always-on thinged-up Internet everywhere, with predictions and magic by the dozen.
  3. Machine Learning Done Wrong — When dealing with small amounts of data, it’s reasonable to try as many algorithms as possible and to pick the best one since the cost of experimentation is low. But as we hit “big data,” it pays off to analyze the data upfront and then design the modeling pipeline (pre-processing, modeling, optimization algorithm, evaluation, productionization) accordingly. (A minimal sketch of the small-data approach follows this list.)
  4. Ten Simple Rules for Lifelong Learning According to Richard Hamming (PLoScompBio) — Exponential growth of the amount of knowledge is a central feature of the modern era. As Hamming points out, since the time of Isaac Newton (1642/3-1726/7), the total amount of knowledge (including but not limited to technical fields) has doubled about every 17 years. At the same time, the half-life of technical knowledge has been estimated to be about 15 years. If the total amount of knowledge available today is x, then in 15 years the total amount of knowledge can be expected to be nearly 2x, while the amount of knowledge that has become obsolete will be about 0.5x. This means that the total amount of knowledge thought to be valid has increased from x to nearly 1.5x. Taken together, this means that if your daughter or son was born when you were 34 years old, the amount of knowledge she or he will be faced with on entering university at age 17 will be more than twice the amount you faced when you started college.
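The small-data claim in the third link is easy to make concrete. Below is a minimal sketch in Python with scikit-learn (the synthetic dataset and the three candidate models are stand-ins of my own, not anything from the linked article) of the brute-force approach the authors say is reasonable only while experimentation stays cheap: cross-validate a handful of algorithms and keep the best.

```python
# Hedged sketch of "small data: try several algorithms, keep the best."
# The synthetic dataset and the three candidate models are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "svm_rbf": SVC(),
}

# Cheap at 500 rows; at "big data" scale, this loop is exactly the kind of
# up-front experimentation the article argues you can no longer afford.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```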

Topic models: Past, present, and future

The O'Reilly Data Show Podcast: David Blei, co-creator of one of the most popular tools in text mining and machine learning.


I don’t remember when I first came across topic models, but I do remember being an early proponent of them in industry. I came to appreciate how useful they were for exploring and navigating large amounts of unstructured text, and was able to use them, with some success, in consulting projects. When an MCMC algorithm for fitting topic models came out, I even cooked up a Java program that I came to rely on (up until Mallet came along).

I recently sat down with David Blei, co-author of the seminal paper on topic models, who remains one of the leading researchers in the field. We talked about the origins of topic models, their applications, improvements to the underlying algorithms, and his new role in training data scientists at Columbia University.

Generating features for other machine learning tasks

Blei frequently interacts with companies that use ideas from his group’s research projects. He noted that people in industry frequently use topic models for “feature generation.” The added bonus is that topic models produce features that are easy to explain and interpret:

“You might analyze a bunch of New York Times articles for example, and there’ll be an article about sports and business, and you get a representation of that article that says this is an article and it’s about sports and business. Of course, the ideas of sports and business were also discovered by the algorithm, but that representation, it turns out, is also useful for prediction. My understanding when I speak to people at different startup companies and other more established companies is that a lot of technology companies are using topic modeling to generate this representation of documents in terms of the discovered topics, and then using that representation in other algorithms for things like classification or other things.”
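To make the feature-generation idea concrete, here is a minimal sketch in Python. It uses scikit-learn’s LatentDirichletAllocation purely as a stand-in for whatever topic-modeling tool a team actually runs (Mallet, gensim, and so on); the toy corpus and labels are my own assumptions, not anything from the interview. The point is only the shape of the pipeline: each document becomes a distribution over discovered topics, and those topic proportions feed a downstream classifier.

```python
# Illustrative only: topic proportions as interpretable features for another model.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny toy corpus and labels (0 = sports, 1 = business); real use assumes a large collection.
docs = [
    "the team won the championship game in the final minutes",
    "stocks fell as the company reported weak quarterly earnings",
    "the striker scored twice and the crowd cheered the match",
    "investors worried about revenue growth and profit margins",
] * 5
labels = [0, 1, 0, 1] * 5

counts = CountVectorizer(stop_words="english").fit_transform(docs)

# Each document becomes a distribution over discovered topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)          # shape: (n_docs, n_topics)

# The topic proportions serve as features for a downstream task, e.g. classification.
clf = LogisticRegression().fit(doc_topics, labels)
print(clf.predict(doc_topics[:4]))
```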

Read more…


Design to reflect human values

The O'Reilly Radar Podcast: Martin Charlier on industrial and interaction design, reflecting societal values, and unified visions.


Editor’s note: Martin Charlier will present a session, Prototyping User Experiences for Connected Products, at the O’Reilly Solid Conference, June 23 to 25, 2015, in San Francisco. For more on the program and information on registration, visit the Solid website.

Designing for the Internet of Things is requiring designers and engineers to expand the boundaries of their traditionally defined roles. In this Radar Podcast episode, O’Reilly’s Mary Treseler sat down with Martin Charlier, an independent design consultant and co-founder at Rain Cloud, to discuss the future of interfaces and the increasing need to merge industrial and interaction design in the era of the Internet of Things.

Charlier stressed the importance of embracing the symbiotic nature of interaction design and service design:

“How I got into Internet of Things is interesting. My degree from Ravensbourne was in a very progressive design course that looked at product interaction and service design as one course. For us, it was pretty natural to think of product or services in a very open way. Whether they are connected or not connected didn’t really matter too much because it was basically understanding that technology is there to build almost anything. It’s really about how you design with that in mind.

“When I was working in industrial design, it became really clear for me how important that is. Specifically, I remember one project working on a built-in oven … In this project, we specifically couldn’t change how you would interact with it. The user interface was already defined, and our task was to define how it looked. It became clear to me that I don’t want to exclude any one area, and it feels really unnatural to design a product but only worry about what it looks like and let somebody else worry about how it’s operated, or vice versa. Products in today’s world, especially, need to be thought about from all of these angles. You can’t really design a coffee maker anymore without thinking about the service that it might plug into or the systems that it connects to. You have to think about all of these things at the same time.”

Read more…


Architecting interactive environments

As our environments become increasingly connected, architects must reinvent their roles and become hybrid designers.

Editor’s note: This is an excerpt by Erin Rae Hoffer from our recent book Designing for Emerging Technologies, a collection of works by several authors and edited by Jon Follett. This excerpt is included in our curated collection of chapters from the O’Reilly Design library. Download a free copy of the Designing for the Internet of Things ebook here.

We spend 90% of our lives indoors. The built environment has a huge impact on human health, social interaction, and our potential for innovation. In return, human innovation pushes our buildings continually in new directions as occupants demand the highest levels of comfort and functionality.

Our demand for pervasive connectivity has led us to weave the Internet throughout our lives, to insist that all spaces link us together with our handheld devices, and that all environments be interconnected. Internet-enabled devices creep into the spaces we inhabit, and these devices report back on spatial conditions, such as light, radiation, air quality, and temperature. They also count the number of people stopping at retail displays minute by minute, detect intruders and security breaches, and enable us to open locked doors remotely using our mobile devices; they allow us to modify the environments we occupy.
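As a purely hypothetical illustration of the kind of report such a device might emit (the field names, units, and structure here are my assumptions, not any building-automation standard), a single reading could be as simple as:

```python
# Hypothetical sketch of a spatial-condition report from a connected building sensor.
import json
import time
from dataclasses import asdict, dataclass

@dataclass
class SpaceReading:
    room: str
    lux: float         # light level
    co2_ppm: float     # a common air-quality proxy
    temp_c: float
    occupants: int     # e.g., from a people counter at a retail display

reading = SpaceReading(room="lobby", lux=310.0, co2_ppm=612.0, temp_c=21.4, occupants=7)
payload = {"timestamp": time.time(), **asdict(reading)}
print(json.dumps(payload))  # what a building-management service might ingest
```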

The space that surrounds us is transforming into a series of interconnected environments, forcing designers of space to rethink the role of architecture and the rules for its formulation. Similarly, designers of emerging technologies are rethinking the role of interfaces and the rules for creating them. During this period of experimentation and convergence, practical construction, and problem solving, architects must reinvent their roles and become hybrid designers, creating meaningful architecture with an awareness of the human implications of emerging technologies. Read more…


Meta-design: The intersection of art, design, and computation

Modern design products should be dynamic, adaptable systems built in code.


Editor’s note: This post originally appeared on Rune Madsen’s blog. It is reprinted here with permission.

This post is about something I see as a continuing trend in the design world: the rise of the meta-designer and algorithmic design systems.

“Meta-design is much more difficult than design; it’s easier to draw something than to explain how to draw it.” — Donald Knuth, The Metafont Book

Until recently, the term graphic designer was used to describe artists firmly rooted in the fine arts. Aspiring design students graduated with MFA degrees, and their curriculums were based on traditions taught by painting, sculpture, and architecture. Paul Rand once famously said: “It’s important to use your hands. This is what distinguishes you from a cow or a computer operator.” At best, this teaches the designer not to be dictated by their given tool. At worst, the designer is institutionalized to think of themselves as “ideators”: the direct opposite of a technical person.

This has obviously changed with the advent of computers (and the field of web design in particular), but not to the degree that one would expect. Despite recent efforts in defining digital-first design vocabularies, like Google’s Material Design, the legacy of the printed page is still omnipresent. Even the most adept companies are organized around principles inherited from desktop publishing, and, when the lines are drawn, we still have separate design and engineering departments. Products start as static layouts in the former and become dynamic implementations in the latter. Designers use tools modeled after manual processes that came way before the computer, while engineers work in purely text-based environments. I believe this approach to design will change in a fundamental way and, like Donald Knuth, I’ll call this the transition from design to meta-design. Read more…
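As a small, hypothetical sketch of what “a dynamic, adaptable system built in code” can mean in practice (the parameters and SVG output here are my own choices, not Madsen’s), the deliverable below is a rule that emits endless valid variations of a mark rather than one static drawing:

```python
# Hypothetical meta-design sketch: the designer ships the rule, not a single drawing.
import random

def mark(seed: int, size: int = 120) -> str:
    """Emit one variation of a simple generative mark as an SVG string."""
    rng = random.Random(seed)
    circles = []
    for _ in range(6):
        cx, cy = rng.randint(10, size - 10), rng.randint(10, size - 10)
        r = rng.randint(5, 25)
        circles.append(f'<circle cx="{cx}" cy="{cy}" r="{r}" fill="none" stroke="black"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">'
            + "".join(circles) + "</svg>")

# Every seed yields a legitimate instance of the same identity system.
for seed in range(3):
    print(mark(seed))
```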
