Sirius — UMich open source “intelligent personal assistant” (à la Siri, Cortana, Google Now, etc.) with speech recognition, image matching, and question-answering components. They hope it’ll become a focal point for research in the area, the way open source operating systems focused university research.
MIT DragonBot Evolving to Teach Kids (IEEE Spectrum) — they’re moving from “Wizard of Oz” (humans-behind-the-scenes) control to autonomous operation. Lovely example of Flintstoning in a robotics context.
Personal Assistants Coming (Robohub) — 2015 is the year physical products will be coming to market and available for experimentation and testing. Pepper ships in Japan in the summer, JIBO ships preorders in Q3, Cubic in the fall, and EmoSpark in the summer. […] The key to the outcome of this race is whether a general purpose AI will be able to steer people through their digital world, or whether users would rather navigate to applications that are specialists (such as American Airlines or Domino’s Pizza).
There Is No Now — One of the most important results in the theory of distributed systems is an impossibility result, showing one of the limits of the ability to build systems that work in a world where things can fail. This is generally referred to as the FLP result, named for its authors, Fischer, Lynch, and Paterson. Their work, which won the 2001 Dijkstra Prize for the most influential paper in distributed computing, showed conclusively that some computational problems that are achievable in a “synchronous” model in which hosts have identical or shared clocks are impossible under a weaker, asynchronous system model.
Deep Learning Hardware Guide — One of the worst things you can do when building a deep learning system is to waste money on hardware that is unnecessary. Here I will guide you step by step through the hardware you will need for a cheap, high-performance system.
Designing the Human-Robot Relationship (O’Reilly) — We can use those same principles [Jakob Nielsen’s usability heuristics] and look for implications of robots serving our higher-ordered needs: moving from serving needs related to convenience or performance to actually supporting our decision making, and, as the technology emerges, moving from being able to do anything (be “magic”) in the user interface to being more human in the user interface.
Why Are Geospatial Databases So Hard To Build? — Algorithms in computer science, with rare exception, leverage properties unique to one-dimensional scalar data models. In other words, data types you can abstractly represent as an integer. Even when scalar data types are multidimensional, they can often be mapped to one dimension. This works well, as the majority of [the] data people care about can be represented with scalar types. If your data model is inherently non-scalar, you enter an algorithm wasteland in the computer science literature.
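The “mapped to one dimension” trick usually means a space-filling curve. A minimal sketch of Z-order (Morton) encoding, a standard example of the technique (not something the article names): interleave the bits of x and y so an ordinary one-dimensional index, and hence a plain B-tree, can approximate two-dimensional locality.

```python
def morton_encode(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y into a single Z-order index."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)      # even bit positions take x's bits
        z |= ((y >> i) & 1) << (2 * i + 1)  # odd bit positions take y's bits
    return z

# Nearby 2D points tend to map to nearby 1D keys, so a scalar index can serve them.
for point in [(3, 5), (3, 6), (200, 9)]:
    print(point, "->", morton_encode(*point))
```

The approximation is leaky: points on opposite sides of a quadrant boundary get distant keys, which is part of why geospatial indexing stays hard despite the trick.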
The Web’s Grain (Frank Chimero) — What would happen if we stopped treating the web like a blank canvas to paint on, and instead like a material to build with?
Bruce Sterling on Convergence of Humans and Machines — I like to use the terms “cognition” and “computation”. Cognition is something that happens in brains, physical, biological brains. Computation is a thing that happens with software strings on electronic tracks that are inscribed out of silicon and put on fibre board. They are not the same thing, and saying that makes the same mistake as in earlier times, when people said that human thought was like a steam engine.
Smart Pocket Watch — I love to see people trying different design experiences. This is beautiful. And built on Firefox OS!
Knowledge-Based Trust (PDF) — Google research paper on how to assess factual accuracy of web page content. It was bad enough when Google incentivised people to make content-free pages. Next there’ll be a reward for scamming bogus facts into Google’s facts database.
Machine Learning Done Wrong — When dealing with small amounts of data, it’s reasonable to try as many algorithms as possible and to pick the best one since the cost of experimentation is low. But as we hit “big data,” it pays off to analyze the data upfront and then design the modeling pipeline (pre-processing, modeling, optimization algorithm, evaluation, productionization) accordingly.
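For concreteness, here is a minimal sketch of the kind of upfront pipeline design the author means, using scikit-learn; the dataset and model choices are placeholders, not the article’s.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pre-processing and modeling designed up front as one pipeline...
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=1000))])

# ...with the optimization stage (hyperparameter search) applied to the whole thing.
search = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)

# Evaluation on held-out data, the last stage before productionization.
print("held-out accuracy:", search.score(X_test, y_test))
```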
Ten Simple Rules for Lifelong Learning According to Richard Hamming (PLoS Comp Bio) — Exponential growth of the amount of knowledge is a central feature of the modern era. As Hamming points out, since the time of Isaac Newton (1642/3-1726/7), the total amount of knowledge (including but not limited to technical fields) has doubled about every 17 years. At the same time, the half-life of technical knowledge has been estimated to be about 15 years. If the total amount of knowledge available today is x, then in 15 years the total amount of knowledge can be expected to be nearly 2x, while the amount of knowledge that has become obsolete will be about 0.5x. This means that the total amount of knowledge thought to be valid has increased from x to nearly 1.5x. Taken together, this means that if your daughter or son was born when you were 34 years old, the amount of knowledge she or he will be faced with on entering university at age 17 will be more than twice the amount you faced when you started college.
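Hamming’s arithmetic, made explicit (a quick back-of-the-envelope check; “nearly 2x” and “nearly 1.5x” are the article’s roundings):

```python
doubling_time = 17  # years for total knowledge to double
horizon = 15        # the half-life of technical knowledge, in years

total = 2 ** (horizon / doubling_time)  # ~1.84x in 15 years, i.e. "nearly 2x"
obsolete = 0.5                          # half-life of 15 years: 0.5x is obsolete
still_valid = total - obsolete          # ~1.34x, rounded up to "nearly 1.5x"

print(f"total: {total:.2f}x, still valid: {still_valid:.2f}x")
```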
Apache NiFi — open source data flow project, currently in the Apache Incubator.
Tug Hospital Robot (Wired) — It may have an adult voice, but Tug has a childlike air, even though in this hospital you’re supposed to treat it like a wheelchair-bound old lady. It’s just so innocent, so earnest, and at times, a bit helpless. If there’s enough stuff blocking its way in a corridor, for instance, it can’t reroute around the obstruction. This happened to the Tug we were trailing in pediatrics. “Oh, something’s in its way!” a woman in scrubs says with an expression like she herself had ruined the robot’s day. She tries moving the wheeled contraption but it won’t budge. “Uh, oh!” She shoves on it some more and finally gets it to move. “Go, Tug, go!” she exclaims as the robot, true to its programming, continues down the hall.
Improving the Robustness of Complex Networks with Preserving Community Structure (PLoS ONE) — To improve robustness while minimizing the above three costly changes, we first seek to verify that the community structure of networks actually does identify the robustness and vulnerability of networks to some extent. Then, we propose an effective 3-step strategy for robustness improvement, which retains the degree distribution of a network and preserves its community structure.
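The constraint the authors impose, rewiring a network without changing any node’s degree, can be illustrated with networkx. To be clear, this is not the paper’s 3-step algorithm; it is a sketch of degree-preserving rewiring (double-edge swaps) plus the community detection the strategy is meant to respect.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()
degrees_before = sorted(d for _, d in G.degree())

# Detect the community structure the authors want to preserve.
communities = greedy_modularity_communities(G)
print("communities found:", len(communities))

# Double-edge swaps rewire the graph while preserving every node's degree.
H = G.copy()
nx.double_edge_swap(H, nswap=20, max_tries=1000, seed=0)

degrees_after = sorted(d for _, d in H.degree())
print("degree distribution preserved:", degrees_before == degrees_after)
```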
Update on indie.vc — We’ve worked with the team at Cooley to create an investment instrument that has elements of both debt and equity. Debt in that we will not be purchasing equity initially, but, unlike debt, there is no maturity date, no collateralization of assets and no recourse if it’s never paid back. The equity element will only become a factor if the participating company chooses to raise a round of financing or sell out to an acquiring company. We don’t have a clever acronym or name for this instrument yet, but I’m sure we’ll come up with something great.
How Nathan Barley Came True (Guardian) — If you haven’t already seen Nathan Barley, you should. It’s by the guy who did Black Mirror, and it’s awful and authentic and predictive and retro and … painfully accurate about the horrors of our Internet/New Media industry. (via BoingBoing)
Trust Engineers (Radio Lab) — Facebook has created a laboratory of human behavior the likes of which we’ve never seen. We peek into the work of Arturo Bejar and a team of researchers who are tweaking our online experience, bit by bit, to try to make the world a better place. Radio show of goodness. (via Flowing Data)
DARPA’s HAPTIX Project — The goal of the HAPTIX program is to provide amputees with prosthetic limb systems that feel and function like natural limbs, and to develop next-generation sensorimotor interfaces to drive and receive rich sensory content from these limbs. Today it’s prosthetic limbs for amputees, but within five years it’ll be augmented ad-driven realities for virtual currency ambient social recommendations.
Real World Active Learning — the point at which algorithms fail is precisely where there’s an opportunity to insert human judgment to actively improve the algorithm’s performance. An O’Reilly report with CrowdFlower.
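A minimal sketch of what that “insert human judgment” loop looks like in practice: pool-based uncertainty sampling, where the examples the model is least sure about are the ones routed to a human. The setup is illustrative, not from the report; the human oracle is simulated by revealing held-back labels.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
labeled = list(range(10))  # start with a handful of labeled examples
pool = [i for i in range(500) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(5):
    model.fit(X[labeled], y[labeled])
    # Find the pool examples where the model fails to be confident...
    proba = model.predict_proba(X[pool])
    uncertainty = 1 - proba.max(axis=1)
    queried = [pool[i] for i in np.argsort(uncertainty)[-10:]]
    # ...and send exactly those to the human labeler (here, just reveal y).
    labeled.extend(queried)
    pool = [i for i in pool if i not in queried]

model.fit(X[labeled], y[labeled])
print("labels used:", len(labeled), "accuracy:", model.score(X, y))
```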
Hearing With Your Tongue (BoingBoing) — The tongue contains thousands of nerves, and the region of the brain that interprets touch sensations from the tongue is capable of decoding complicated information. “What we are trying to do is another form of sensory substitution,” Williams said.
Making Wrong Code Look Wrong (Joel Spolsky) — This makes mistakes even more visible. Your eyes will learn to “see” smelly code, and this will help you find obscure security bugs just through the normal process of writing code and reading code.
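Spolsky’s running example is a naming convention for untrusted strings (his essay uses the prefixes us/s for unsafe/safe). A rough Python rendering of the idea; the function names are hypothetical:

```python
import html

def render(s_text: str) -> str:
    """Only ever called with s_* (safe, already-encoded) strings."""
    return f"<p>{s_text}</p>"

def handle_comment(us_comment: str) -> str:
    # Convention: us_* means unsafe (raw user input), s_* means safe (encoded).
    s_comment = html.escape(us_comment)  # escaping turns a us_* into an s_*
    return render(s_comment)             # reads correctly: s_* flows into render()
    # The bug the convention exposes on a single line:
    #   return render(us_comment)        # "us" flowing into render() looks wrong

print(handle_comment("<script>alert('xss')</script>"))
```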
Simple Testing Can Prevent Most Critical Failures — We found the majority of catastrophic failures could easily have been prevented by performing simple testing on error handling code – the last line of defense – even without an understanding of the software design. We extracted three simple rules from the bugs that have led to some of the catastrophic failures, and developed a static [Java] checker, Aspirator, capable of locating these bugs. One of its checks simply flags a FIXME or TODO comment in an exception handler.
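Aspirator is a Java checker, but the antipattern it flags is easy to render in any language. A sketch of the kind of handler it would report, alongside the tested alternative (function names are hypothetical):

```python
def flush_to_disk(buffer: str, path: str) -> None:
    try:
        with open(path, "w") as f:
            f.write(buffer)
    except IOError:
        pass  # TODO: handle this properly -- exactly what Aspirator flags

def flush_to_disk_checked(buffer: str, path: str) -> None:
    """The 'last line of defense' version: fail loudly instead of swallowing."""
    try:
        with open(path, "w") as f:
            f.write(buffer)
    except IOError as err:
        raise RuntimeError(f"could not persist buffer to {path}") from err
```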
Quantum Machine Learning Algorithms: Read the Fine Print (Scott Aaronson) — In the years since HHL, quantum algorithms achieving “exponential speedups over classical algorithms” have been proposed for other major application areas […]. With each of them, one faces the problem of how to load a large amount of classical data into a quantum computer (or else compute the data “on-the-fly”), in a way that is efficient enough to preserve the quantum speedup.
Global Forecast System — National Weather Service open sources its weather forecasting software. Hope you have a supercomputer and all the data to make use of it …
High-reproducibility and high-accuracy method for automated topic classification — Latent Dirichlet allocation (LDA) is the state of the art in topic modeling. Here, we perform a systematic theoretical and numerical analysis that demonstrates that current optimization techniques for LDA often yield results that are not accurate in inferring the most suitable model parameters. Adapting approaches from community detection in networks, we propose a new algorithm that displays high reproducibility and high accuracy and also has high computational efficiency. We apply it to a large set of documents in the English Wikipedia and reveal its hierarchical structure.
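For context, this is the kind of fit the paper scrutinizes: a minimal scikit-learn LDA run on a toy corpus, using the standard variational optimizer, one of the techniques whose accuracy and reproducibility the authors question. The corpus is invented for illustration.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat on the mat",
        "dogs and cats are pets",
        "stock markets fell sharply today",
        "investors sold shares in the market"]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Document-topic mixtures; different seeds can land in different local optima,
# which is exactly the reproducibility problem the paper addresses.
print(lda.transform(counts).round(2))
```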
The Growing Role of Software Architects — “Architecture has become much more interesting now because it’s become more encompassing,” says Neal Ford, software architect and meme wrangler at ThoughtWorks.