Doing Science on the Web (Alex Russell) — Minimizing harm to the ecosystem from experiments-gone-wrong […] This illustrates what happens when experiments inadvertently become critical infrastructure. It has happened before. Over, and over, and over again. Imma need therapy for the flashbacks. THE HORROR.
Virtual Time (Adrian Colyer) — applying special relativity to distributed systems. Contains lines like: All messages sent explicitly by user programs have a positive (+) sign; their antimessages have a negative (-) sign. Whenever a process sends a message, what actually happens is that a faithful copy of the message is transmitted to the receiver’s input queue, and a negative copy, the antimessage, is retained in the sender’s output queue for use in case the sender rolls back. Curl up with your intoxicant of choice and prepare to see the colour of infinity.
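The message/antimessage pairing is easier to see in code. Below is a loose Python sketch of the idea only — the names and data structures are illustrative, not the paper's actual Time Warp implementation: sending places a positive copy in the receiver's input queue and retains a negative copy in the sender's output queue, and rollback transmits antimessages that annihilate their positive twins.

```python
# Illustrative sketch (not the paper's implementation) of Time Warp's
# message/antimessage pairing: a send enqueues a (+) copy at the
# receiver and keeps a (-) copy at the sender for use on rollback.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Message:
    timestamp: int   # virtual time of the event
    payload: str
    sign: int        # +1 for a message, -1 for its antimessage

@dataclass
class Process:
    name: str
    input_queue: list = field(default_factory=list)
    output_queue: list = field(default_factory=list)

def send(sender, receiver, timestamp, payload):
    receiver.input_queue.append(Message(timestamp, payload, +1))
    sender.output_queue.append(Message(timestamp, payload, -1))

def rollback(sender, receiver, to_time):
    """Undo sends at virtual time >= to_time by sending antimessages."""
    for anti in [m for m in sender.output_queue if m.timestamp >= to_time]:
        sender.output_queue.remove(anti)
        twin = Message(anti.timestamp, anti.payload, +1)
        if twin in receiver.input_queue:
            receiver.input_queue.remove(twin)  # annihilation

a, b = Process("A"), Process("B")
send(a, b, 10, "event-x")
send(a, b, 20, "event-y")
rollback(a, b, to_time=20)             # unsend everything from time 20 on
print([m.payload for m in b.input_queue])  # only "event-x" survives
```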
Lessons Learned from Reading Postmortems — (of the software kind) Except in extreme emergencies, risky code changes are basically never simultaneously pushed out to all machines because of the risk of taking down a service company-wide. But it seems that every company has to learn the hard way that seemingly benign config changes can also cause a company-wide service outage.
194 Chinese Robot Companies (Robohub) — Overall, 107 Chinese companies are involved in industrial robotics. Many of these new industrial robot makers are producing products that, because of quality, safety, and design regulations, will only be acceptable to the Chinese market. Many interesting numbers about the Chinese robotics biz.
Possible Economics Models (Jamais Cascio) — economic futures filtered through Doctorovian prose. Griefer Economics: Information is power, especially when it comes to finance, and the increasing use of ultra-fast computers to manipulate markets (and drive out “weaker” competitors) is moving us into a world where market position isn’t determined by having the best offering, but by having the best tool. Rules are gamed, opponents are beaten before they even know they’re playing, and it all feels very much like living on a PvP online game server where the referees have all gone home. Relevant to Next:Economy.
War in Space May Be Closer Than Ever (SciAm) — Today, the situation is much more complicated. Low- and high-Earth orbits have become hotbeds of scientific and commercial activity, filled with hundreds upon hundreds of satellites from about 60 different nations. Despite their largely peaceful purposes, each and every satellite is at risk, in part because not all members of the growing club of military space powers are willing to play by the same rules — and they don’t have to, because the rules remain as yet unwritten. There’s going to be a bitchin’ S-1 risks section when Planet Labs files for IPO.
Not Even Close: The State of Computer Security (Vimeo) — In this bleak, relentlessly morbid talk, James Mickens will describe why making computers secure is an intrinsically impossible task. He will explain why no programming language makes it easy to write secure code. He will then discuss why cloud computing is a black hole for privacy, and only useful for people who want to fill your machine with ads, viruses, or viruses that masquerade as ads. At this point in the talk, an audience member may suggest that bitcoins can make things better. Mickens will laugh at this audience member and then explain why trusting the bitcoin infrastructure is like asking Dracula to become a vegan. Mickens will conclude by describing why true love is a joke and why we are all destined to die alone and tormented. The first ten attendees will get balloon animals, and/or an unconvincing explanation about why Mickens intended to (but did not) bring balloon animals. Mickens will then flee on horseback while shouting “The Prince of Lies escapes again!”
Algorithms and Bias (NYTimes) — interview w/Cynthia Dwork from Microsoft Research. Fairness means that similar people are treated similarly. A true understanding of who should be considered similar for a particular classification task requires knowledge of sensitive attributes, and removing those attributes from consideration can introduce unfairness and harm utility.
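Dwork's "similar people are treated similarly" has a precise form: a classifier is individually fair with respect to a similarity metric if the distance between its outputs never exceeds the distance between the individuals (a Lipschitz condition). Here's a toy sketch of that check — the metric, scorer, and population are stand-ins I made up, not from the interview:

```python
# Toy sketch of the individual-fairness Lipschitz check:
# M is fair w.r.t. metric d if |M(x) - M(y)| <= d(x, y) for all pairs.
# The metric and scoring function below are illustrative stand-ins.

from itertools import combinations

def is_individually_fair(score, distance, individuals):
    """Check the Lipschitz condition pairwise over a population."""
    return all(
        abs(score(x) - score(y)) <= distance(x, y)
        for x, y in combinations(individuals, 2)
    )

# Individuals as (credit_score, income) tuples, both scaled to 0-1.
people = [(0.9, 0.8), (0.85, 0.8), (0.2, 0.3)]
distance = lambda x, y: max(abs(a - b) for a, b in zip(x, y))
score = lambda x: 0.5 * x[0] + 0.5 * x[1]  # averages, so 1-Lipschitz

print(is_individually_fair(score, distance, people))  # True
```

The catch the interview points at: choosing the right `distance` for a task is exactly where knowledge of sensitive attributes sneaks back in.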
Open the Music Industry’s Black Box (NYT) — David Byrne talks about the opacity of financials of streaming and online music services (including/especially YouTube). Caught my eye: The labels also get money from three other sources, all of which are hidden from artists: They get advances from the streaming services, catalog service payments for old songs, and equity in the streaming services themselves. (via BoingBoing)
Deloitte Changing Performance Reviews (HBR) — “Although it is implicitly assumed that the ratings measure the performance of the ratee, most of what is being measured by the ratings is the unique rating tendencies of the rater. Thus, ratings reveal more about the rater than they do about the ratee.”
Data-flow Graphing in Python (Matt Keeter) — not shared because data-flow graphing is a sexy new hot topic that’s gonna set the world on fire (though, I bet that’d make Matt’s day), but because there are entire categories of engineering and operations migraines caused by not knowing where your data came from or goes to, when, how, and why. Remember Wirth’s “algorithms + data structures = programs”? Data flows seem like a different slice of “programs.” Perhaps “data flow + typos = programs”?
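To make the "where did your data come from" point concrete, here's a tiny toy graph — a hypothetical API of my own, nothing to do with Matt's library — where every node remembers its inputs, so provenance is always a query away:

```python
# Hypothetical data-flow sketch (not Matt Keeter's API): nodes record
# their inputs, so any value can answer "where did you come from?"

class Node:
    def __init__(self, name, fn=None, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)
        self.value = None

    def set(self, value):       # source node: value comes from outside
        self.value = value

    def eval(self):
        if self.fn is not None:
            self.value = self.fn(*(n.eval() for n in self.inputs))
        return self.value

    def provenance(self):
        """Nested trace of everything this value was computed from."""
        return {self.name: [n.provenance() for n in self.inputs]}

price = Node("price"); price.set(100.0)
tax = Node("tax"); tax.set(0.2)
total = Node("total", fn=lambda p, t: p * (1 + t), inputs=[price, tax])

print(total.eval())         # 120.0
print(total.provenance())   # {'total': [{'price': []}, {'tax': []}]}
```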
Japan’s Robot Hotel is Serious Business (Engadget) — hotel was architected to suit robots: “For the porter robots, we designed the hotel to include wide paths.” Two paths slope around the hotel lobby: one inches up to the second floor, while another follows a gentle decline to guide first-floor guests (slowly, but with their baggage) all the way to their room. Makes sense: at Solid, I spoke to a chap working on robots for existing hotels, and there’s an entire engineering challenge in navigating an elevator that you wouldn’t believe.
bokken — GUI for open source reverse engineering of code.
Buzz: An Extensible Programming Language for Self-Organizing Heterogeneous Robot Swarms (arXiv) — Swarm-based primitives allow for the dynamic management of robot teams, and for sharing information globally across the swarm. Self-organization stems from the completely decentralized mechanisms upon which the Buzz run-time platform is based. The language can be extended to add new primitives (thus supporting heterogeneous robot swarms), and its run-time platform is designed to be laid on top of other frameworks, such as Robot Operating System.
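The "sharing information globally across the swarm" primitive (what the Buzz paper calls virtual stigmergy) can be sketched in plain Python — this is not Buzz code, just my illustration of the idea: every robot keeps a local copy of a shared key/value table, writes are gossiped to neighbours, and newer timestamps win.

```python
# Not Buzz itself: a Python sketch of its virtual-stigmergy idea — a
# replicated key/value table where writes gossip to neighbours and
# conflicts resolve by newest-timestamp-wins. Names are illustrative.

class StigmergyNode:
    def __init__(self, robot_id):
        self.robot_id = robot_id
        self.table = {}        # key -> (timestamp, value)
        self.neighbours = []

    def put(self, key, value):
        ts = self.table.get(key, (0, None))[0] + 1
        self._apply(key, ts, value)

    def get(self, key):
        return self.table.get(key, (0, None))[1]

    def _apply(self, key, ts, value):
        # Accept only writes newer than our copy, then gossip onward;
        # the timestamp check is what stops infinite re-broadcast.
        if ts > self.table.get(key, (0, None))[0]:
            self.table[key] = (ts, value)
            for n in self.neighbours:
                n._apply(key, ts, value)

robots = [StigmergyNode(i) for i in range(3)]
for r in robots:
    r.neighbours = [n for n in robots if n is not r]

robots[0].put("target", (4.0, 2.0))  # one robot writes...
print(robots[2].get("target"))       # ...every copy converges: (4.0, 2.0)
```

Completely decentralized in the same spirit: no robot is special, and the table survives any single robot dropping out.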
Visualising GoogleNet Classes — fascinating to see squirrel monkeys and basset hounds emerge from nothing. It’s so tempting to say, “this is what the machine sees in its mind when it thinks of basset hounds,” even though Boring Brain says, “that’s bollocks and you know it!”
A Sort of Joy — MOMA’s catalogue was released under a CC license, and has even been used to create new art. The performance is probably NSFW without headphones, but it is hilarious. Which I never thought I’d say about a derivative work of a museum catalogue. (via Courtney Johnston)
Japanese Telcos vie for Consumer Robot-as-a-Service Business (Robohub) — NTT says Sota will be deployed in seniors’ homes as early as next March, and can be connected to medical devices to help monitor health conditions. This plays well with Japanese policy to develop and promote technological solutions to its aging population crisis.
Kalman Filter — an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more precise than those based on a single measurement alone.
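The definition above fits in a dozen lines for the one-dimensional case. This is a minimal sketch (estimating a constant true value from noisy readings; variable names and noise parameters are illustrative), not a production filter:

```python
# Minimal 1-D Kalman filter: fuse noisy scalar measurements into an
# estimate more precise than any single reading. Parameters are
# illustrative choices, not from the linked article.

def kalman_1d(measurements, process_var=1e-5, meas_var=0.1 ** 2):
    x = 0.0   # state estimate
    p = 1.0   # estimate variance (our uncertainty about x)
    estimates = []
    for z in measurements:
        # Predict: state assumed constant, uncertainty grows slightly.
        p += process_var
        # Update: blend prediction and measurement, weighted by variance.
        k = p / (p + meas_var)   # Kalman gain: trust in the new reading
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

readings = [0.39, 0.50, 0.48, 0.29, 0.25, 0.32, 0.34, 0.48, 0.41, 0.45]
print(round(kalman_1d(readings)[-1], 2))
```

Note how the gain `k` shrinks as `p` does: early measurements move the estimate a lot, later ones only nudge it — which is exactly the "more precise than a single measurement" claim.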
Interview with Bruce Sterling — Singapore is like a science fictional society without the fiction. Dubai is like a science fictional society without the science. […] Robots just don’t want to live. They’re inventions, not creatures; they don’t have any appetites or enthusiasms. I don’t think they’d maintain themselves very long without our relentlessly pushing them uphill against their own lifeless entropy. They’re just not entities in the same sense that we are entities; they don’t have much skin in our game. They don’t care and they can’t be bothered. We don’t yet understand how and why we ourselves care and bother, so we’d be hard put to install that capacity inside our robot vacuum cleaners.
Japan’s Robot Revolution — Fugitt said Japan’s weakness was in application and deployment of its advanced technologies. “The Japanese expect other countries and people to appreciate their technology, but they’re inwardly focused. If it doesn’t make sense to them, they typically don’t do it,” he said, citing the example of Japanese advanced wheelchairs having 100 kilogram weight limits. […] South Korea could be a threat [to Japan’s lead in robotics] if the chaebol opened up [and shared technologies], but I don’t see it happening. The U.S. will come in and disrupt things; they’ll cause chaos in a particular market and then run away.
A World Without Work (The Atlantic) — In 1962, President John F. Kennedy said, “If men have the talent to invent new machines that put men out of work, they have the talent to put those men back to work.” […] Technology creates some jobs too, but the creative half of creative destruction is easily overstated. Nine out of 10 workers today are in occupations that existed 100 years ago, and just 5% of the jobs generated between 1993 and 2013 came from “high tech” sectors like computing, software, and telecommunications.
Google Patenting Machine Learning Developments (Reddit) — I am afraid that Google has just started an arms race, which could do significant damage to academic research in machine learning. Now it’s likely that other companies using machine learning will rush to patent every research idea that was developed in part by their employees. We have all been in a prisoner’s dilemma situation, and Google just defected. Now researchers will guard their ideas much more combatively, given that it’s now fair game to patent these ideas, and big money is at stake.
Machine Ethics (Nature) — machine learning ethics versus rule-driven ethics. Logic is the ideal choice for encoding machine ethics, argues Luís Moniz Pereira, a computer scientist at the Nova Laboratory for Computer Science and Informatics in Lisbon. “Logic is how we reason and come up with our ethical choices,” he says. I disagree with his premises.