Understanding neural function and virtual reality

The O'Reilly Data Show Podcast: Poppy Crum explains that what matters is efficiency in identifying and emphasizing relevant data.


Like many data scientists, I’m excited about advances in large-scale machine learning, particularly recent success stories in computer vision and speech recognition. But I’m also cognizant that press coverage tends to inflate both what current systems can do and how similar they are to the way the brain works.

During the latest episode of the O’Reilly Data Show Podcast, I had a chance to speak with Poppy Crum, a neuroscientist who gave a well-received keynote at Strata + Hadoop World in San Jose. She leads a research group at Dolby Labs and teaches a popular course at Stanford on Neuroplasticity in Musical Gaming. I wanted to get her take on AI and virtual reality systems, and hear about her experience building a team of researchers from diverse disciplines.

Understanding neural function

While mimicking nature can sometimes be appealing, in the case of the brain, machine learning researchers recognize that understanding and identifying the essential neural processes is far more important. A frequently cited analogy is flight: wing flapping and feathers aren’t critical, but an understanding of physics and aerodynamics is essential.

Crum and other neuroscience researchers express the same sentiment. She points out that a more meaningful goal should be to “extract and integrate relevant neural processing strategies when applicable, but also identify where there may be opportunities to be more efficient.”

The goal in technology shouldn’t be to build algorithms that mimic neural function. Rather, it’s to understand neural function. … The brain is basically, in many cases, a Rube Goldberg machine. We’ve got this limited set of evolutionary building blocks that we are able to use to get to a sort of very complex end state. We need to be able to extract when that’s relevant and integrate relevant neural processing strategies when it’s applicable. We also want to be able to identify that there are opportunities to be more efficient and more relevant. I think of it as table manners. You have to know all the rules before you can break them. That’s the big difference between being really cool or being a complete heathen. The same thing kind of exists in this area. How we get to the end state, we may be able to compromise, but we absolutely need to be thinking about what matters in neural function for perception. From my world, where we can’t compromise is on the output. I really feel like we need a lot more work in this area.

I’m not talking about user experience. I’m talking about how their system, the physiological system, handles the output. We experience the world in a state of illusion in many ways. … It’s a constant malleable interaction between context and weighting of different sensory information. An object, whether it’s a phone I’m holding or it’s a dog barking or an individual I’m talking to, they produce an image on the back of my retina that’s being translated through my brain. They’re producing a sound that’s being processed through my cochlea, and ultimately undergoing many different transformations. A smell and potentially a touch, and all that information has to be integrated to my holistic representation of that individual.
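
Crum’s description of a “constant malleable interaction between context and weighting of different sensory information” maps naturally onto familiar data-fusion ideas. The snippet below is a minimal, purely illustrative sketch of that notion; the modality names, feature vectors, and reliability weights are hypothetical, and this is not Crum’s or Dolby’s actual model.

```python
import numpy as np

# Illustrative sketch only: a toy context-weighted fusion of sensory feature
# vectors, loosely inspired by the "weighting of different sensory information"
# Crum describes. Modality names, weights, and feature sizes are hypothetical.

def fuse_modalities(features, context_weights):
    """Combine per-modality feature vectors into one weighted representation.

    features: dict mapping modality name -> 1-D numpy feature vector
    context_weights: dict mapping modality name -> non-negative reliability weight
    """
    names = list(features)
    weights = np.array([context_weights.get(m, 0.0) for m in names], dtype=float)
    weights = weights / weights.sum()               # normalize so weights sum to 1
    stacked = np.stack([features[m] for m in names])  # shape: (n_modalities, dim)
    return weights @ stacked                        # weighted average across modalities

# Toy example: in a noisy room, the "context" down-weights audio and leans on vision.
features = {
    "vision": np.random.rand(8),
    "audio": np.random.rand(8),
    "touch": np.random.rand(8),
}
noisy_room_weights = {"vision": 0.6, "audio": 0.1, "touch": 0.3}
holistic = fuse_modalities(features, noisy_room_weights)
print(holistic.shape)  # (8,)
```

The detail worth noticing is that the weights, not the features, carry the context: the same inputs yield a different holistic representation when the situation changes which senses are trusted.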

Subscribe to the O’Reilly Data Show Podcast

Stitcher, TuneIn, iTunes, SoundCloud, RSS

Research conducted by interdisciplinary teams

As someone who works in an industrial research lab, Crum works on projects that can end up impacting many users. To that end, she has assembled and leads a team charged with developing technologies that improve the overall user experience across many consumer products:

Dolby is actually at the intersection of sensory experience. A lot of our most relevant new technologies are currently happening in image processing…with high dynamic-range capabilities. We have a lot of development going on in imaging and voice and in audio. I come from a background in systems neuroscience. I’ve spent my time studying single cells and treating the brain as a computational circuit to ultimately get to the angle of perception. There’s a really good intersection for thinking about computational neuroscience and technology when you are systems neuroscientists or with a company like Dolby where the ultimate technology needs to be developed holistically with thought to multi-sensory, malleable user needs and experiences.

I really like to think the computer scientists, engineers, and mathematicians need to be in a three-legged race with the psychologists, cognitive scientists, and computational neuroscientists to get to where we need to get to for the ultimate user experience.

More data is not (always) better

As Crum pointed out to me in an email exchange: “Much success in our environment is dependent on how efficiently our physiological/perceptual systems get rid of information and help us have actionable robust experiences in the world. The current trend to throw more and more information into a single sensory modality is counter to our evolutionary efficiency as humans.” She gave a concrete example of this in our conversation:

If we think about augmented systems, augmented reality systems, there’s a lot of hype and push to give people as much information as can be captured and to somehow try to capture all the data that’s possible and let the human system deal with that information. The reality is that more data’s not better in these situations. Given what our brain is, how efficient I am in the world is highly dependent on how well my brain knows what to get rid of and what to emphasize.
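
The same principle of discarding rather than accumulating can be made concrete with a toy example. The sketch below is purely illustrative and assumes hypothetical detections and relevance scores; it simply keeps the few most relevant items an augmented reality overlay might show, instead of rendering everything that was captured.

```python
import numpy as np

# Illustrative sketch only: rather than overlaying every detected item in an
# augmented reality view, keep just the few most relevant ones. The scoring
# scheme and the item names here are hypothetical, not an actual AR pipeline.

def select_salient(items, relevance_scores, k=3):
    """Return the k items with the highest relevance, discarding the rest."""
    order = np.argsort(relevance_scores)[::-1]  # indices sorted by descending score
    return [items[i] for i in order[:k]]

detected = ["exit sign", "coffee cup", "colleague waving", "wall texture",
            "ceiling light", "incoming call"]
relevance = np.array([0.9, 0.2, 0.95, 0.05, 0.1, 0.8])  # toy salience estimates

print(select_salient(detected, relevance, k=3))
# ['colleague waving', 'exit sign', 'incoming call']
```

The point is not the particular scoring scheme but the direction of the computation: the work goes into ranking and pruning the incoming stream, not into enlarging it.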

You can listen to our entire interview in the SoundCloud player above, or subscribe through Stitcher, SoundCloud, TuneIn, or iTunes.

For more, watch Poppy Crum’s keynote at Strata + Hadoop World in San Jose. The forthcoming Hardcore Data Science day at Strata + Hadoop World in NYC will feature talks on large-scale machine learning from leading researchers and practitioners.

Cropped image on article and category pages by gomessda on Flickr, used under a Creative Commons license.


Expand your knowledge of deep learning with Fundamentals of Deep Learning, by Nikhil Buduma. If you have a basic understanding of what machine learning is, familiarity with the Python programming language, and some mathematical background in calculus, this book will help you get started.
