If you really want to understand the effect data is having, you need the models.
Writing my post about AI and summoning the demon led me to re-read a number of articles on Cathy O’Neil’s excellent mathbabe blog. I highlighted a point Cathy has made consistently: if you’re not careful, modelling has a nasty way of enshrining prejudice with a veneer of “science” and “math.”
Cathy has consistently made another point that’s a corollary of her argument about enshrining prejudice. At O’Reilly, we talk a lot about open data. But it’s not just the data that has to be open: it’s also the models. (There are too many must-read articles on Cathy’s blog to link to; you’ll have to find the rest on your own.)
You can have all the crime data you want, all the real estate data you want, all the student performance data you want, all the medical data you want, but if you don’t know what models are being used to generate results, you don’t have much. Read more…
Does the way a brain is wired determine how we think and behave? Recent research points to a resounding yes.
One of the age-old questions is whether the way a brain is wired, setting aside other attributes such as intracellular systems biology, determines how we think and how we behave. We are not yet at the point of answering that question for the human brain. However, by using the well-mapped connectome of the nematode Caenorhabditis elegans (C. elegans, shown above), we were able to answer it with a resounding yes, at least for simpler animals. Using a simple robot (a Lego Mindstorms EV3), connecting the robot's sensors to stimulate specific simulated sensory neurons in an artificial connectome, and condensing worm muscle excitation to drive the robot's left and right motors, we observed worm-like behaviors in the robot based purely on environmental factors. Read more…
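The loop described above — sensor stimulates simulated neurons, activation propagates through the connectome, muscle excitation condenses into motor commands — can be sketched in a few lines. This is a minimal illustration of the architecture, not the actual implementation: the neuron names, weights, and threshold here are invented for the example, not taken from the real C. elegans wiring data.

```python
# Toy artificial connectome: a weighted directed graph of neurons.
# All names and weights below are illustrative, not real wiring data.
from collections import defaultdict

# Hypothetical wiring: (pre-neuron, post-neuron, weight)
CONNECTOME = [
    ("NOSE_SENSOR", "INTER1", 2.0),
    ("INTER1", "MUSCLE_L", 1.5),
    ("INTER1", "MUSCLE_R", -1.0),
]

THRESHOLD = 1.0  # a neuron "fires" when its accumulated input reaches this


def step(fired):
    """One tick: neurons that fired last tick excite their targets."""
    accum = defaultdict(float)
    for pre, post, weight in CONNECTOME:
        if pre in fired:
            accum[post] += weight
    return {n for n, v in accum.items() if v >= THRESHOLD}


def motor_command(fired):
    """Condense muscle-neuron excitation into left/right motor speeds."""
    left = 1.0 if "MUSCLE_L" in fired else 0.0
    right = 1.0 if "MUSCLE_R" in fired else 0.0
    return left, right


# A sonar hit on the robot's nose stimulates the simulated sensory neuron;
# the excitation cascades through the graph and ends at the motors.
fired = {"NOSE_SENSOR"}
fired = step(fired)          # sensory neuron excites the interneuron
fired = step(fired)          # interneuron excites the left muscle only
print(motor_command(fired))  # (1.0, 0.0): drive the left motor to turn away
```

The point of the architecture is that no behavior is programmed explicitly: the robot's response falls out of the wiring alone, which is exactly the question the experiment set out to test.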
Claire Rowland on interoperability, networks, and latency.
The Internet of Things (IoT) is challenging designers to rethink their craft. I recently sat down with Claire Rowland, independent designer and author of the forthcoming book Designing Connected Products, to talk about the changing design landscape.
During our interview, Rowland brought up three points that resonated with me.
Interoperability and the Internet of Things
This is an IoT issue that affects everyone — engineers, designers, and consumers alike. Rowland recalled a fitting quote she’d once heard to describe the standards landscape: “Standards are like toothbrushes: everyone knows you need one, but nobody wants to use anybody else’s.”
Designers, like everyone else involved with the Internet of Things, will need equal amounts of patience and agility as the standards issue works itself out. Read more…
We could soon have lab-grown hamburgers, not in the $300,000 range but in the $10 range — would you eat one?
That was the call I got from a scientist entrepreneur friend of mine, John Schloendorn, the CEO of Gene and Cell Technologies. He’d been working on potential regenerative medicine therapies and tinkering with bioreactors to grow human cell lines. He left the lab for the weekend, and then something went wrong with one of his bioreactors: something got stuck in it.
“So, I was wondering what happened with my bioreactor and how this big chunk of plastic had gotten in there and ruined my cytokine production run. I was pulling it out, and I thought it was weird because it was floppy. I threw it in the garbage. A little later, after thinking about it, I realized it wasn’t plastic and pulled it out of the garbage.” Read more…
A humanist approach to automation.
Editor’s note: At some point, we’ve all read the accounts in newspapers or on blogs that “human error” was responsible for a Twitter outage, or worse, a horrible accident. Automation is often hailed as the heroic answer, poised to eliminate the specter of human error. This guest post from Steven Shorrock, who will be delivering a keynote speech at Velocity in Barcelona, exposes human error as dangerous shorthand. The more nuanced way through involves systems thinking, marrying the complex fabric of humans and the machines we work with every day.
In Kurt Vonnegut’s dystopian novel ‘Player Piano’, automation has replaced most human labour. Anything that can be automated, is automated. Ordinary people have been robbed of their work, and with it purpose, meaning and satisfaction, leaving the managers, scientists and engineers to run the show. Dr Paul Proteus is a top manager-engineer at the head of the Ilium Works. But Proteus, aware of the unfairness of the situation for the people on the other side of the river, becomes disillusioned with society and has a moral awakening. In the penultimate chapter, Paul and his best friend Finnerty, a brilliant young engineer turned rogue-rebel, reminisce sardonically: “If only it weren’t for the people, the goddamned people,” said Finnerty, “always getting tangled up in the machinery. If it weren’t for them, earth would be an engineer’s paradise.”
We need to understand that our own intelligence is the competition for our artificial, not-quite intelligences.
A few days ago, Elon Musk likened artificial intelligence (AI) to “summoning the demon.” As I’m sure you know, there are many stories in which someone summons a demon. As Musk said, they rarely turn out well.
There’s no question that Musk is an astute student of technology. But his reaction is misplaced. There are certainly reasons for concern, but they’re not Musk’s.
The problem with AI right now is that its achievements are greatly over-hyped. That’s not to say those achievements aren’t real, but they don’t mean what people think they mean. Researchers in deep learning are happy if they can recognize human faces with 80% accuracy. (I’m skeptical about claims that deep learning systems can reach 97.5% accuracy; I suspect that the problem has been constrained in some way that makes it much easier. For example, asking “is there a face in this picture?” or “where is the face in this picture?” is very different from asking “what is in this picture?”) That’s a hard problem, a really hard problem. But humans recognize faces with nearly 100% accuracy. For a deep learning system, that’s an almost inconceivable goal. And 100% accuracy is orders of magnitude harder than 80% accuracy, or even 97.5%. Read more…
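The “orders of magnitude” claim is easiest to see if you flip accuracy into error rate. A quick back-of-the-envelope (the 99.99% figure below is an illustrative stand-in for “nearly 100%” human performance, not a measured number):

```python
# Accuracy claims read very differently as error rates: going from 80% to
# 97.5% accuracy cuts errors 8x, and going from 97.5% to "nearly 100%"
# (illustrated here as 99.99%) cuts them another ~250x.

def error_rate(accuracy):
    """Fraction of cases the system gets wrong."""
    return 1.0 - accuracy

for acc in (0.80, 0.975, 0.9999):
    print(f"{acc:.2%} accurate -> {error_rate(acc):.4%} errors")

# Each step is a large multiple fewer mistakes, which is why "nearly 100%"
# is so much harder than 80%.
print(round(error_rate(0.80) / error_rate(0.975), 1))    # 8.0
print(round(error_rate(0.975) / error_rate(0.9999)))     # 250
```

Seen this way, closing the last few percentage points means eliminating almost all remaining mistakes, not just a little more of the same progress.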