Subscribe to the O’Reilly Design Podcast, our podcast exploring how experience design—and experience designers—are shaping business, the Internet of Things, and other domains.
In this week’s Design Podcast episode, I sit down with Chrissie Brodigan, manager of user experience research at GitHub. Brodigan will be speaking at O’Reilly’s inaugural Design Conference. In this episode, we talk about user research and product development at GitHub, and the blind spots in product development and organizational development.
Here are a few highlights from our conversation:
Our internal philosophy around research is that when we make our design decisions, we come up with hypotheses about how that design change will impact behavior as well as user experience. We may need to add a particular control to the workflow, but if it has a negative consequence on the overall experience of our users, we may decide that that’s not the right decision for us, even if it’s helpful in one area, because it causes unhappiness in another. We measure impact with controlled experiments, which a lot of people would refer to as ‘A/B testing.’ We do some variance testing, which is short term, but we also do longitudinal analysis, which means studying a cohort over a longer period of time. Internally, we’re always asking ourselves ‘Why?’
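The controlled experiments Brodigan describes boil down to comparing how two groups of users behave after a design change. A minimal sketch of that kind of check is a two-proportion z-test; the cohort sizes, conversion counts, and function name below are all hypothetical, not taken from GitHub's actual tooling.

```python
# Hedged sketch of an A/B-style significance check using only the
# standard library. All numbers and names here are hypothetical.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B change the success rate?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B behave the same.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical cohorts: 120 of 1,000 control users vs. 150 of 1,000
# treatment users completed a workflow step.
z, p = two_proportion_z(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A short-term variance test like this tells you whether behavior moved at all; the longitudinal analysis Brodigan mentions would instead track the same cohort's behavior over weeks or months.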
We typically follow three steps for research, where a product manager will request to research something. They have to come up with a problem statement. Whenever they request research we say, ‘What’s the problem? Does it really require research?’ We treat research a little bit more like a luxury good, so we want to make sure that if we’re going to devote time to a study, we have something that we’re really trying to solve, not just research for research’s sake. Then, members from the research team will design a study specifically for that particular product — so, we might do survey research, we might decide that we need to do a pre-release, or we might move over to a controlled experiment. Then, the last thing we do is collaborate on results. We bring in the engineers, the designers, and the product manager, and if we’re interviewing customers, at times they’ll each take rotations and go on those customer interviews.
I think the thing that people find the most interesting about how we do user research at GitHub is that our research team uses GitHub to do research. I guess what’s interesting about that is, I’m not a developer, so I had to learn Git and I had to learn how to use GitHub in order to bring research in closer to our code base. Everything has a URL because we use GitHub. Whenever we do a user interview, those notes are written up as a markdown file, opened up as a pull request, and a link to the video is part of that write-up. That write-up is then cross-referenced over to the code base.
The other common mistake I see is that people skip pre-testing. You know, you’ve got a survey instrument you’re really excited about, or you’ve put together a variance test and you’re ready to roll, and you put it out there. Unfortunately, when you do that without looking at the data as a pre-test set, you may have reached out to too large a sample, and it’s hard to pull it back. Survey tools also sometimes have bugs.