FEATURED STORY

Understanding neural function and virtual reality

The O'Reilly Data Show Podcast: Poppy Crum explains that what matters is efficiency in identifying and emphasizing relevant data.

Like many data scientists, I’m excited about advances in large-scale machine learning, particularly recent success stories in computer vision and speech recognition. But I’m also aware that press coverage tends to inflate both what current systems can do and how closely they resemble the workings of the brain.

During the latest episode of the O’Reilly Data Show Podcast, I had a chance to speak with Poppy Crum, a neuroscientist who gave a well-received keynote at Strata + Hadoop World in San Jose. She leads a research group at Dolby Labs and teaches a popular course at Stanford on Neuroplasticity in Musical Gaming. I wanted to get her take on AI and virtual reality systems, and hear about her experience building a team of researchers from diverse disciplines.

Understanding neural function

While mimicking nature can sometimes be useful, in the case of the brain, machine learning researchers recognize that identifying and understanding the essential neural processes matters far more than replicating the biology. A frequently cited analogy is flight: wing flapping and feathers aren’t critical, but an understanding of physics and aerodynamics is essential.

Crum and other neuroscience researchers express the same sentiment. She points out that a more meaningful goal should be to “extract and integrate relevant neural processing strategies when applicable, but also identify where there may be opportunities to be more efficient.”

The goal in technology shouldn’t be to build algorithms that mimic neural function. Rather, it’s to understand neural function. … The brain is basically, in many cases, a Rube Goldberg machine. We’ve got this limited set of evolutionary building blocks that we are able to use to get to a sort of very complex end state. We need to be able to extract when that’s relevant and integrate relevant neural processing strategies when it’s applicable. We also want to be able to identify that there are opportunities to be more efficient and more relevant. I think of it as table manners. You have to know all the rules before you can break them. That’s the big difference between being really cool or being a complete heathen. The same thing kind of exists in this area. How we get to the end state, we may be able to compromise, but we absolutely need to be thinking about what matters in neural function for perception. From my world, where we can’t compromise is on the output. I really feel like we need a lot more work in this area.

Avoid design pitfalls in the IoT: Keep the focus on people

The O'Reilly Radar Podcast: Robert Brunner on IoT pitfalls, Ammunition, and the movement toward automation.

Subscribe to the O’Reilly Radar Podcast to track the technologies and people that will shape our world in the years to come.

For this week’s Radar Podcast, I had the opportunity to sit down with Robert Brunner, founder of the Ammunition design studio. Brunner talked about how design can help mitigate IoT pitfalls, what drove him to found Ammunition, and why he’s fascinated with design’s role in the movement toward automation.

Here are a few of the highlights from our chat:

One of the biggest pitfalls I’m seeing in how companies are approaching the Internet of Things, especially in the consumer market, is, literally, not paying attention to people — how people understand products and how they interact with them and what they mean to them.

It was this broader experience and understanding of what [a product] is and what it does in people’s lives, and what it means to them — that’s experienced not just through the thing, but how they learn about it, how they buy it, what happens when they open up the box, what happens when they use the product, what happens when the product breaks; all these things add up to how you feel about it and, ultimately, how you relate to a company. That was the foundation of [Ammunition].

Ultimately, I define design as the purposeful creation of things.

Announcing BioCoder issue 8

BioCoder 8: neuroscience, robotics, gene editing, microbiome sequence analysis, and more.

Download a free copy of the new edition of BioCoder, our newsletter covering the biological revolution.

We are thrilled to announce the eighth issue of BioCoder. This marks two years of diverse, educational, and cutting-edge content, and this issue is no exception. Highlighted in this issue are technologies and tools that span neuroscience, diagnostics, robotics, gene editing, microbiome sequence analysis, and more.

Glen Martin interviewed Tim Marzullo, co-founder of Backyard Brains, to learn more about how their easy-to-use kits, like the RoboRoach, demonstrate how nervous systems work.

Marc DeJohn of Biomeme discusses the company’s smartphone-based diagnostics technology for on-site gene detection of disease, biothreat targets, and much more.

Daniel Modulevsky, Charles Cuerrier, and Andrew Pelling from Pelling Lab at the University of Ottawa discuss different types of open source biomaterials for regenerative medicine and their use of de-cellularized apple tissue to generate 3D scaffolds for cells. If you follow their tutorial, you can do it, too!

aBioBot, highlighted by co-founder Raghu Machiraju, is a device that uses visual sensing and feedback to perform encodable laboratory tasks. Machiraju argues that “progress in biotechnology will come from the use of open user interfaces and open-specification middleware to drive and operate flexible robotic platforms.”

Moving toward a zero UI to orchestrate the IoT

The O'Reilly Radar Podcast: Andy Goodman on intangible interfaces, and Cory Doctorow on the DMCA.

Subscribe to the O’Reilly Radar Podcast to track the technologies and people that will shape our world in the years to come.

In this week’s Radar Podcast episode, O’Reilly’s Mac Slocum chats with Andy Goodman, group director of Fjord’s Design Strategy. Goodman talks about the shift away from screen-based interfaces to intangible interfaces, what he calls “zero UI.” He also addresses the evolutionary path of embeddables, noting that “we already have machines inside us.”

Here are a few of the highlights:

Sensing technologies are allowing us to distribute our computers around our bodies and around our environments, moving away from monolithic experiences, a single device, to an orchestration of devices all working together with us at the center.

Our visual sense is the most important to us, so taking that away [with zero UI] actually leaves us, in some ways, a bit more vulnerable to things going wrong — we can’t see what is an error state in a haptic experience… it’s possible that we’re setting ourselves a lot of design challenges that we don’t know we have to solve yet.

The race to build “big data machines” in financial investing

Robot wealth managers and related approaches will grow, offering alternative ways of investing.

Get the O’Reilly Money Newsletter for news and insights about finance and technology.

Editor’s note: This post was originally published in Big Data, a journal from Mary Ann Liebert, Inc., Publishers, in Volume 3, Issue 2, on June 18, 2015, under the title “Should You Trust Your Money to a Robot?” It is republished here with permission.

Financial markets generate massive amounts of data from which machines can, in principle, learn to invest with minimal initial guidance from humans. I contrast human and machine strengths and weaknesses in making investment decisions. The analysis reveals areas in the investment landscape where machines are already very active and those where machines are likely to make significant inroads in the next few years.

Computers are making more and more decisions for us, and increasingly so in areas that require human judgment. Driverless cars, which seemed like science fiction until recently, are expected to become common in the next 10 years. There is a palpable increase in machine intelligence across the touchpoints of our lives, driven by the proliferation of data feeding into intelligent algorithms capable of learning useful patterns and acting on them. We are living through one of the greatest revolutions in our lifestyles, in which computers are so deeply engaged in our lives and decision-making that their involvement has become second nature. Recommendations on Amazon or auto-suggestions on Google are now so routine, we find it strange to encounter interfaces that don’t anticipate what we want. The intelligence revolution is well under way, with or without our conscious approval or consent. We are entering the era of intelligence as a service, with access to the building blocks for powerful new applications.
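
To make the excerpt’s central claim concrete (machines learning investment decisions directly from market data), here is a minimal, purely illustrative sketch, not from the article and not a real strategy: a simple classifier is fit on lagged daily returns from a synthetic price series to predict the next day’s direction. Every name and number in it is an assumption for demonstration.

```python
# Purely illustrative sketch: learn next-day market direction from lagged
# daily returns. Uses synthetic data; assumes numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
# Synthetic daily price series (a geometric random walk) standing in for market data
prices = 100 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 1000)))
returns = np.diff(prices) / prices[:-1]

# Features: the previous LAGS daily returns; label: 1 if the next day's return is positive
LAGS = 5
X = np.column_stack([returns[i:len(returns) - LAGS + i] for i in range(LAGS)])
y = (returns[LAGS:] > 0).astype(int)

# Train on the first 80% of history; measure directional accuracy on the rest
split = int(0.8 * len(X))
model = LogisticRegression().fit(X[:split], y[:split])
print("out-of-sample directional accuracy:", model.score(X[split:], y[split:]))
```

Because this synthetic series is a random walk, accuracy should hover near chance; swapping in real market data, richer features, and proper backtesting is where the approaches the article surveys actually begin.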

Augmenting the human experience: AR, wearable tech, and the IoT

As augmented reality technologies emerge, we must place the focus on serving human needs.

Register now for Solid Amsterdam, October 28, 2015 — space is limited.

Otto Lilienthal on August 16, 1894, with his “kleiner Schlagflügelapparat” (small flapping-wing apparatus).

Augmented reality (AR), wearable technology, and the Internet of Things (IoT) are all really about human augmentation. They are coming together to create a new reality that will forever change the way we experience the world. As these technologies emerge, we must place the focus on serving human needs.

The Internet of Things and Humans

Tim O’Reilly suggested the word “Humans” be appended to the term IoT. “This is a powerful way to think about the Internet of Things because it focuses the mind on the human experience of it, not just the things themselves,” wrote O’Reilly. “My point is that when you think about the Internet of Things, you should be thinking about the complex system of interaction between humans and things, and asking yourself how sensors, cloud intelligence, and actuators (which may be other humans for now) make it possible to do things differently.”

I share O’Reilly’s vision for the IoTH and propose we extend this perspective and apply it to the new AR that is emerging: let’s take the focus away from the technology and instead emphasize the human experience.

As commonly understood, AR is a digital layer of information (including images, text, video, and 3D animations) viewed on top of the physical world through a smartphone, tablet, or eyewear. This definition is expanding to include things like wearable technology, sensors, and artificial intelligence (AI) that interpret your surroundings and deliver a contextual experience that is meaningful and unique to you. It’s about a new sensory awareness, deeper intelligence, and heightened interaction with our world and each other.
