- Viv — another step in the cognition race. Wolfram Alpha was first out of the gate, but Watson, Viv, and others are hot on its heels, racing to parse complex requests, then seek out and use information to fulfil them.
- Universal Mobile Electrochemical Detector Designed for Use in Resource-limited Applications (PNAS) — $35 handheld sensor with mobile phone connection. The electrochemical methods that we demonstrate enable quantitative, broadly applicable, and inexpensive sensing with flexibility based on a wide variety of important electroanalytical techniques (chronoamperometry, cyclic voltammetry, differential pulse voltammetry, square wave voltammetry, and potentiometry), each with different uses. Four applications demonstrate the analytical performance of the device: these involve the detection of (i) glucose in the blood for personal health, (ii) trace heavy metals (lead, cadmium, and zinc) in water for in-field environmental monitoring, (iii) sodium in urine for clinical analysis, and (iv) a malarial antigen (Plasmodium falciparum histidine-rich protein 2) for clinical research. (via BoingBoing)
- panamax.io — containerized app creator with an open-source app marketplace hosted on GitHub. Panamax provides a friendly interface for users of Docker, Fleet & CoreOS. With Panamax, you can easily create, share and deploy any containerized app no matter how complex it might be.
- Quid Analysis of Comments to FCC on Net Neutrality (NPR) — visualising the themes and volume of the comments. Interesting factoid: only half the comments were derived from templates (cf. 80% in submissions on some financial legislation); one rough way to approximate that kind of template detection is sketched below.
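The template-derived share Quid measured can be roughly approximated with off-the-shelf text-similarity tools. The sketch below is a hypothetical, much-simplified stand-in for Quid's actual pipeline: it marks a comment as template-derived when it is a near-duplicate of another comment under TF-IDF cosine similarity (the sample comments and the 0.9 threshold are invented for illustration).

```python
# Hypothetical, simplified stand-in for Quid-style template detection:
# mark a comment as "template-derived" when it is a near-duplicate of
# another comment under TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

comments = [
    "I support net neutrality and urge the FCC to reclassify broadband.",
    "I support net neutrality and urge the FCC to reclassify broadband!",
    "Please keep the internet open and free for everyone.",
    "The internet thrived without Title II regulation; leave it alone.",
]

# Vectorize the comments and compute pairwise cosine similarity.
sim = cosine_similarity(TfidfVectorizer().fit_transform(comments))

# Invented threshold: >0.9 similarity to any other comment counts as
# template-derived.
templated = [
    any(sim[i, j] > 0.9 for j in range(len(comments)) if j != i)
    for i in range(len(comments))
]
print(f"template-derived fraction: {sum(templated) / len(comments):.0%}")
```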
Some of AI's viable approaches lie outside the organizational boundaries of Google and other large Internet companies.
Editor’s note: this post is part of an ongoing series exploring developments in artificial intelligence.
Here’s a simple recipe for solving crazy-hard problems with machine intelligence. First, collect huge amounts of training data — probably more than anyone thought sensible or even possible a decade ago. Second, massage and preprocess that data so the key relationships it contains are easily accessible (the jargon here is “feature engineering”). Finally, feed the result into ludicrously high-performance, parallelized implementations of pretty standard machine-learning methods like logistic regression, deep neural networks, and k-means clustering (don’t worry if those names don’t mean anything to you — the point is that they’re widely available in high-quality open source packages).
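To make the recipe concrete, here is a minimal, toy-scale sketch using scikit-learn's off-the-shelf logistic regression; the spam-filtering framing and the handful of example messages are invented for illustration, standing in for training sets that, in practice, run to billions of examples.

```python
# Toy-scale sketch of the three-step recipe. The tiny spam dataset is
# invented for illustration; real systems use vastly more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Step 1: collect training data (here, a handful of labeled messages).
messages = [
    "win a free prize now", "cheap meds online", "lowest price guarantee",
    "meeting moved to 3pm", "lunch tomorrow?", "draft attached for review",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = spam, 0 = not spam

# Step 2: feature engineering -- turn raw text into numeric features so
# the key relationships are easily accessible to the learner.
features = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(messages)

# Step 3: feed the result into a standard, widely available learner.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.33, random_state=0, stratify=labels
)
model = LogisticRegression()
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

The same skeleton holds for the other methods the recipe names: swap LogisticRegression for a neural network or KMeans and the collect-engineer-fit structure is unchanged; what distinguishes the big players is scale and heavily parallelized implementations.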
Google pioneered this formula, applying it to ad placement, machine translation, spam filtering, YouTube recommendations, and even the self-driving car — creating billions of dollars of value in the process. The surprising thing is that Google isn’t made of magic. Instead, mirroring Bruce Schneier’s surprised conclusion about the NSA in the wake of the Snowden revelations, “its tools are no different from what we have in our world; it’s just better funded.”