Four Short Links: 14 April 2016

New Statesmen, Autonomous Vehicle Reliability, Conversational Software, and TensorFlow Playground

  1. Tech CEOs Cast Themselves as the New Statesmen (BuzzFeed) — the logical consequence of the corporation replacing elected government as the most efficacious unit of organization.
  2. How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability? (RAND) — it may not be possible to establish with certainty the safety of autonomous vehicles. Uncertainty will remain. In parallel to developing new testing methods, it is imperative to develop adaptive regulations that are designed from the outset to evolve with the technology so that society can better harness the benefits and manage the risks of these rapidly evolving and potentially transformative technologies.
  3. We Don’t Know How to Build Conversational Software — current brand-driven conversations are deeply underwhelming (phone trees with more typing is dystopic shopping), but I don’t know that we need to solve general AI for chatbots to provide an illusion of utility.
  4. TensorFlow Playground — tinker with a neural network right here in your browser. Don’t worry, you can’t break it. We promise.
Four short links: 8 April 2016

Data Security, Bezos Letter, Working Remote, and Deep Learning Book

  1. LangSec — The complexity of our computing systems (both software and hardware) has reached such a degree that data must be treated as formally as code.
  2. Bezos’s Letter to Shareholders — as eloquent about success in high-risk tech as Warren Buffett is about success in value investing.
  3. Good Bad and Ugly of Working Remote After 5 Years — good advice, and some realities for homeworkers to deal with.
  4. Deep Learning Book — text finished, prepping print production via MIT Press. Why are you using HTML format for the drafts? This format is a sort of weak DRM required by our contract with MIT Press. It’s intended to discourage unauthorized copying/editing of the book.
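The LangSec item above argues that input data deserves the same formal treatment as code: define the language of valid inputs, recognize it in full, and reject everything outside it. A minimal Python sketch of that discipline, using a hypothetical `v1;<count>;<tag>` wire format invented purely for illustration:

```python
import re

# Recognizer for a hypothetical "v1;<count>;<tag>" format. The format is
# made up; the point is the LangSec discipline of full recognition.
MESSAGE = re.compile(r"v1;(\d{1,6});([a-z]{1,32})")

def parse_message(raw):
    """Return (count, tag) if raw is exactly in the language, else None."""
    m = MESSAGE.fullmatch(raw)  # full match: no trailing junk tolerated
    if m is None:
        return None  # reject outright; never act on a partial parse
    return int(m.group(1)), m.group(2)
```

Anything that fails to match the grammar is rejected before any downstream code runs, rather than being "cleaned up" ad hoc.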
Four short links: 6 April 2016

Hi-Techtiles, Recreating 3D, Mobile Deep Learning, and Correlation Games

  1. U.S. Textile Industry Turns to Tech as Gateway to Revival — Warwick Mills is joining the Defense Department, universities including the Massachusetts Institute of Technology, and nearly 50 other companies in an ambitious $320 million project to push the American textile industry into the digital age. Key to the plan is a technical ingredient: embedding a variety of tiny semiconductors and sensors into fabrics that can see, hear, communicate, store energy, warm or cool a person, or monitor the wearer’s health.
  2. 2D to 3D With Deep CNNs (PDF) — source code on GitHub.
  3. Squeezing AI into Mobile Systems (IEEE Spectrum) — Sze, working with Joel Emer, also an MIT computer science professor and senior distinguished research scientist at Nvidia, developed Eyeriss­, the first custom chip designed to run a state-of-the-art convolutional neural network. They showed they could run AlexNet, a particularly demanding algorithm, using less than one-tenth the energy of a typical mobile GPU: instead of consuming 5 to 10 watts, Eyeriss used 0.3 W.
  4. The 8-Bit Game That Makes Statistics Addictive (The Atlantic) — that game is Guess The Correlation. “As a researcher, you read papers and a lot of the time, you eyeball the figures without even reading the text,” he says. “You see a plot—it could even be your own plot—and make a judgment based on it. Contrary to what people believe, they’re not very good at this. And I have the data to prove that.”
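Guess The Correlation asks players to eyeball Pearson's r from a scatter plot, and the quantity being guessed is simple to compute exactly. A stdlib-only sketch of the formula:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # covariance numerator and the two standard-deviation terms
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A perfectly linear cloud gives r = 1.0; the game's difficulty lives in the wide middle range where human eyeballing goes wrong.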
Four short links: 30 March 2016

Deep Babbage, Supervisors in Go, Brittle Code, and Quantum NLP

  1. Deep Learning for Analytical Engine — This repository contains an implementation of a convolutional neural network as a program for Charles Babbage’s Analytical Engine, capable of recognizing handwritten digits to a high degree of accuracy (98.5% if provided with a sufficient amount of training data and left running sufficiently long).
  2. Supervisor Trees in Go — A well-structured Erlang program is broken into multiple independent pieces that communicate via messages, and when a piece crashes, the supervisor of that piece automatically restarts it. […] Even as I have been writing suture, I have on occasion been astonished to flip my screen over to the console of a Go program I’ve written with suture, and been surprised to discover that it’s actually been merrily crashing away during my manual testing, but soldiering on so well I didn’t even know.
  3. How to Avoid Brittle Code — If it hurts, do it more often.
  4. Developing Quantum Annealer Driven Data Discovery (Joseph Dulny III, Michael Kim) — In this paper, we gain novel insights into the application of quantum annealing (QA) to machine learning (ML) through experiments in natural language processing (NLP), seizure prediction, and linear separability testing.
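The supervisor-tree idea above (restart a crashed piece rather than let it take the whole program down) can be sketched in a few lines. This is a toy restart loop in Python, not the suture API:

```python
def supervise(worker, max_restarts=3):
    """Run worker(); if it raises, restart it, up to max_restarts times.
    A toy supervisor in the Erlang/suture spirit: real supervisors also
    log crashes, back off between restarts, and manage trees of workers."""
    crashes = 0
    while True:
        try:
            return worker()
        except Exception:
            crashes += 1
            if crashes > max_restarts:
                raise  # give up and escalate to *our* supervisor
```

The payoff is exactly the anecdote in the quote: a flaky worker keeps being restarted and the program soldiers on.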
Four short links: 21 March 2016

Legacy Tech, Gender Prediction, Text Generation, and Human Performance

  1. Ten More Years! — my brand spanking new chip card from a UK issuer not only arrived with a 2000s app of a 1990s implementation of a 1980s product (debit) on a 1970s chip, it also came with a 1960s magnetic stripe on it and a 1950s PAN with a 1940s signature panel on the back. It’s no wonder it seems a little out of place in the modern world.
  2. Age and Gender Classification Using Convolutional Neural Nets — oh, this will end well.
  3. The Uncanny Valley of Words (Ross Goodwin) — lessons learned from an NYU ITP neural networker making poetry and surprises from text.
  4. The Paradox of Human Performance (YouTube) — Human dexterity and agility vastly exceed that of contemporary robots. Yet, humans have vastly slower “hardware” (e.g. muscles) and “wetware” (e.g. neurons). How can this paradox be resolved? Slow actuators and long communication delays require predictive control based on some form of internal model—but what form? (via Robohub)
Four short links: 11 March 2016

Deep-Learning Catan, Scala Tutorials, Legal Services, and Shiny Echo

  1. Strategic Dialogue Management via Deep Reinforcement Learning (Adrian Colyer) — a neural network learns to play Settlers of Catan. Is nothing sacred?
  2. Scala School — Twitter’s instructional material for coming up to speed on Scala.
  3. Robin Hood Fellowship — fellowship to use technology to increase access to legal services for New Yorkers. Stuff that matters.
  4. The Echo From Amazon Brims With Groundbreaking Promise (NY Times) — A bit more than a year after its release, the Echo has morphed from a gimmicky experiment into a device that brims with profound possibility. The longer I use it, the more regularly it inspires the same sense of promise I felt when I used the first iPhone — a sense this machine is opening up a vast new realm in personal computing, and gently expanding the role that computers will play in our future.
Four short links: 10 March 2016

Cognitivist and Behaviourist AI, Math and Social Computing, A/B Testing Stats, and Rat Cyborgs are Smarter

  1. Crossword-Solving Neural Networks — Hill describes recent progress in learning-based AI systems in terms of behaviourism and cognitivism: two movements in psychology that affect how one views learning and education. Behaviourism, as the name implies, looks at behaviour without looking at what the brain and neurons are doing, while cognitivism looks at the mental processes that underlie behaviour. Deep learning systems like the one built by Hill and his colleagues reflect a cognitivist approach, but for a system to have something approaching human intelligence, it would have to have a little of both. “Our system can’t go too far beyond the dictionary data on which it was trained, but the ways in which it can are interesting, and make it a surprisingly robust question and answer system – and quite good at solving crossword puzzles,” said Hill. While it was not built with the purpose of solving crossword puzzles, the researchers found that it actually performed better than commercially available products that are specifically engineered for the task.
  2. Mathematical Foundations for Social Computing (PDF) — collection of pointers to existing research in social computing and some open challenges for work to be done. Consider situations where a highly structured decision must be made. Some examples are making budgets, assigning water resources, and setting tax rates. […] One promising candidate is “Knapsack Voting.” […] This captures most budgeting processes — the set of chosen budget items must fit under a spending limit, while maximizing societal value. Goel et al. prove that asking users to compare projects in terms of “value for money” or asking them to choose an entire budget results in provably better properties than using the more traditional approaches of approval or rank-choice voting.
  3. Power, Minimal Detectable Effect, and Bucket Size Estimation in A/B Tests (Twitter) — This post describes how Twitter’s A/B testing framework, DDG, addresses one of the most common questions we hear from experimenters, product managers, and engineers: how many users do we need to sample in order to run an informative experiment?
  4. Intelligence-Augmented Rat Cyborgs in Maze Solving (PLoS) — We compare the performance of maze solving by computer, by individual rats, and by computer-aided rats (i.e. rat cyborgs). They were asked to find their way from a constant entrance to a constant exit in 14 diverse mazes. Performance of maze solving was measured by steps, coverage rates, and time spent. The experimental results with six rats and their intelligence-augmented rat cyborgs show that rat cyborgs have the best performance in escaping from mazes. These results provide a proof-of-principle demonstration for cyborg intelligence. In addition, our novel cyborg intelligent system (rat cyborg) has great potential in various applications, such as search and rescue in complex terrains.
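The “Knapsack Voting” framing above reduces budgeting to the classic 0/1 knapsack problem: choose projects that fit under a spending limit while maximizing total value. A generic dynamic-programming sketch of that optimization (the underlying knapsack, not Goel et al.’s voting mechanism itself):

```python
def knapsack_budget(projects, budget):
    """Pick a subset of (name, cost, value) projects maximizing total
    value subject to total cost <= budget. Standard 0/1 knapsack DP."""
    # best[b] = max value achievable with total cost at most b
    best = [0] * (budget + 1)
    choice = [[] for _ in range(budget + 1)]
    for name, cost, value in projects:
        # iterate budgets downward so each project is used at most once
        for b in range(budget, cost - 1, -1):
            if best[b - cost] + value > best[b]:
                best[b] = best[b - cost] + value
                choice[b] = choice[b - cost] + [name]
    return best[budget], choice[budget]
```

With a budget of 5, projects ("parks", 4, 7), ("roads", 3, 5), ("wifi", 2, 4) resolve to roads + wifi for a total value of 9.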
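On the sample-size question in the Twitter post above, there is a standard textbook answer for conversion-rate experiments. This rough two-proportion estimate is a common approximation, not DDG’s actual method; the z-values below assume roughly 5% significance and 80% power:

```python
from math import ceil

def samples_per_bucket(p_base, mde, z_alpha=1.96, z_beta=0.84):
    """Users needed per bucket to detect an absolute lift of `mde` over
    a baseline conversion rate `p_base`. Textbook two-proportion
    approximation: n = (z_a + z_b)^2 * (var_base + var_alt) / mde^2."""
    p_alt = p_base + mde
    variance = p_base * (1 - p_base) + p_alt * (1 - p_alt)
    return ceil((z_alpha + z_beta) ** 2 * variance / mde ** 2)
```

The inverse-square dependence on the minimal detectable effect is the practical takeaway: halving the effect you want to detect roughly quadruples the users you need.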
Four short links: 8 March 2016

Neural Nets on Encrypted Data, IoT VR Prototype, Group Chat Considered Harmful, and Haptic Hardware

  1. Neural Nets on Encrypted Data (Paper a Day) — By using a technique known as homomorphic encryption, it’s possible to perform operations on encrypted data, producing an encrypted result, and then decrypt the result to give back the desired answer. By combining homomorphic encryption with a specially designed neural network that can operate within the constraints of the operations supported, the authors of CryptoNets are able to build an end-to-end system whereby a client can encrypt their data, send it to a cloud service that makes a prediction based on that data – all the while having no idea what the data means, or what the output prediction means – and return an encrypted prediction to the client, which can then decrypt it to recover the prediction. As well as making this possible, another significant challenge the authors had to overcome was making it practical, as homomorphic encryption can be expensive.
  2. VR for IoT Prototype (YouTube) — a VR prototype created for displaying sensor data and video streaming in real time from IoT sensors/camera devices designed for rail or the transportation industry.
  3. Is Group Chat Making You Sweat? (Jason Fried) — all excellent points. Our attention and focus are the scarce and precious resources of the 21st century.
  4. How Devices Provide Haptic Feedback — good intro to what’s happening in your hardware.
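Homomorphic encryption, the ingredient behind CryptoNets above, lets a server compute on ciphertexts without ever seeing plaintexts. Textbook RSA gives a tiny, insecure illustration of the property: multiplying two ciphertexts yields a ciphertext of the product. (CryptoNets itself uses a different, leveled homomorphic scheme; this is only a demo of the idea.)

```python
# Tiny textbook RSA keypair: p=61, q=53, n=3233, phi=3120, e=17, d=2753.
# Textbook RSA is insecure and only multiplicatively homomorphic, but it
# shows the core idea: computing on ciphertexts computes on plaintexts.
n, e, d = 3233, 17, 2753

def enc(m):
    return pow(m, e, n)  # m^e mod n

def dec(c):
    return pow(c, d, n)  # c^d mod n
```

Here `dec((enc(a) * enc(b)) % n)` recovers `a * b` without the multiplier ever seeing `a` or `b` in the clear.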
Four short links: 1 March 2016

Phone Kit, Circular Phone, TensorFlow Intro, and Change Motivation

  1. Seeed RePhone — open source and modular phone kit.
  2. Cyrcle — prototype round phone, designed by women for women. It’s clearly had a bit more thought put into it than the usual “pink it and shrink it” approach … circular to fit in smaller and shaped pockets, and software features strict notification controls: the device would only alert you to messages or updates from an inner circle.
  3. TensorFlow for Poets (Pete Warden) — I want to show how anyone with a Mac laptop and the ability to use the Terminal can create their own image classifier using TensorFlow, without having to do any coding.
  4. Finding the Natural Motivation for Change (Pia Waugh) — you can force certain behaviour changes through punishment or reward, but if people aren’t naturally motivated to make the behaviour change themselves then the change will be both unsustainable and minimally implemented. Amen!
Four short links: 26 February 2016

High-Performing Teams, Location Recognition, Assessing Computational Thinking, and Values in Practice

  1. What Google Learned From Its Quest to Build the Perfect Team (NY Times) — As the researchers studied the groups, however, they noticed two behaviors that all the good teams generally shared. First, on the good teams, members spoke in roughly the same proportion […] Second, the good teams all had high ‘‘average social sensitivity’’ — a fancy way of saying they were skilled at intuiting how others felt based on their tone of voice, their expressions, and other nonverbal cues.
  2. Photo Geolocation with Convolutional Neural Networks (arXiv) — 377MB gets you a neural net, trained on geotagged Web images, that can suggest the location of an image. From MIT TR’s coverage: To measure the accuracy of their machine, they fed it 2.3 million geotagged images from Flickr to see whether it could correctly determine their location. “PlaNet is able to localize 3.6% of the images at street-level accuracy and 10.1% at city-level accuracy,” say Weyand and co. What’s more, the machine determines the country of origin in a further 28.4% of the photos and the continent in 48.0% of them.
  3. Assessing the Development of Computational Thinking (Harvard) — we have relied primarily on three approaches: (1) artifact-based interviews, (2) design scenarios, and (3) learner documentation. (via EdSurge)
  4. Values in Practice (Camille Fournier) — At some point, I realized there was a pattern. The people in the company who were beloved by all, happiest in their jobs, and arguably most productive, were the people who showed up for all of these values. They may not have been the people who went to the best schools, or who wrote the most beautiful code; in fact, they often weren’t the “on-paper” superstars. But when it came to the job, they were great, highly in-demand, and usually promoted quickly. They didn’t all look the same, they didn’t all work in the same team or have the same skill set. Their only common thread was that they didn’t have to stretch too much to live the company values because the company values overlapped with their own personal values.