"deep learning" entries

Cheap sensors, fast networks, and distributed computing

The history of computing has been a constant pendulum — that pendulum is now swinging back toward distribution.

Editor’s note: this is an excerpt from our new report Data: Emerging Trends and Technologies, by Alistair Croll. You can download the free report here.

The trifecta of cheap sensors, fast networks, and distributed computing is changing how we work with data. But making sense of all that data takes help, which is arriving in the form of machine learning. Here’s one view of how that might play out.

Clouds, edges, fog, and the pendulum of distributed computing

The history of computing has been a constant pendulum, swinging between centralization and distribution.

The first computers filled rooms, and operators were physically within them, switching toggles and turning wheels. Then came mainframes, which were centralized, with dumb terminals.

As the cost of computing dropped and the applications became more democratized, user interfaces mattered more. The smarter clients at the edge became the first personal computers; many broke free of the network entirely. The client got the glory; the server merely handled queries.

Once the web arrived, we centralized again. LAMP (Linux, Apache, MySQL, PHP) stacks sat buried deep inside data centers, with the computer at the other end of the connection relegated to little more than a smart terminal rendering HTML. Load balancers sprayed traffic across thousands of cheap machines. Eventually, the web turned from static sites into complex software-as-a-service (SaaS) applications.

Then the pendulum swung back to the edge, and the clients got smart again. First with AJAX, Java, and Flash; then in the form of mobile apps, where the smartphone or tablet did most of the hard work and the back end was a communications channel for reporting the results of local action.


Four short links: 15 December 2014

Transferable Learning, At-Scale Telemetry, Ugly DRM, and Fast Packet Processing

  1. How Transferable Are Features in Deep Neural Networks? — (answer: “very”). A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset. (A minimal code sketch of this idea follows the list.) (via Pete Warden)
  2. Introducing Atlas: Netflix’s Primary Telemetry Platform — nice solution to the problems that many have, at a scale that few have.
  3. The Many Facades of DRM (PDF) — Modular software systems are designed to be broken into independent pieces. Each piece has a clear boundary and well-defined interface for ‘hooking’ into other pieces. Progress in most technologies accelerates once systems have achieved this state. But clear boundaries and well-defined interfaces also make a technology easier to attack, break, and reverse-engineer. Well-designed DRMs have very fuzzy boundaries and are designed to have very non-standard interfaces. The examples of the uglified DRM code are inspiring.
  4. DPDK — a set of libraries and drivers for fast packet processing […] to: receive and send packets within the minimum number of CPU cycles (usually less than 80 cycles); develop fast packet capture algorithms (tcpdump-like); run third-party fast path stacks.
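
To make item 1 concrete, here is a minimal transfer-learning sketch in modern Keras. It is only a stand-in for the paper’s setup, which this framework postdates: freeze the layers of a pretrained network and train a new head on the target task. The MobileNetV2 base, input shape, and 10-class head are all assumptions for illustration.

```python
# Hedged sketch of feature transfer: freeze pretrained layers, fine-tune a new head.
# MobileNetV2 and the 10-class target task are stand-ins, not the paper's setup.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the transferred features fixed at first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # new head for the target task
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# model.fit(target_images, target_labels, epochs=5)  # fine-tune on the target data;
# optionally set base.trainable = True afterward and continue with a low learning
# rate, the fine-tuning regime the paper studies.
```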

Four short links: 8 December 2014

Systemic Improvement, Chinese Trends, Deep Learning, and Technical Debt

  1. Reith Lectures — this year’s lectures are by Atul Gawande, talking about preventable failure and systemic improvement — topics of particular relevance to devotees of devops culture. (via BoingBoing)
  2. Chinese Mobile App UI Trends — interesting differences between the US and China. Phone number authentication interested me: You key in your number and receive a confirmation code via SMS. Here, all apps offer this type of phone number registration/login (if not prefer it). This also applies to websites, even those without apps. (A sketch of this flow follows the list.) (via Matt Webb)
  3. Large Scale Deep Learning (PDF) — Jeff Dean from Google. Starts easy!
  4. Machine Learning: The High-Interest Credit Card of Technical Debt (PDF) — Google research paper on the ways in which machine learning can create problems rather than solve them.
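
The phone-number login in item 2 is easy to sketch server-side. The following hypothetical Python flow generates a short-lived confirmation code, “sends” it via a stubbed SMS gateway, and verifies the user’s reply; the function names, in-memory store, and 5-minute expiry are illustrative assumptions, not anything from the linked post.

```python
# Hypothetical SMS confirmation-code flow (names, store, and expiry are illustrative).
import hashlib, hmac, secrets, time

CODES = {}  # phone -> (code_hash, expires_at); a real service would use a datastore
SECRET = secrets.token_bytes(32)

def _digest(code: str) -> bytes:
    return hmac.new(SECRET, code.encode(), hashlib.sha256).digest()

def request_code(phone: str) -> None:
    code = f"{secrets.randbelow(10**6):06d}"           # 6-digit one-time code
    CODES[phone] = (_digest(code), time.time() + 300)  # valid for 5 minutes
    print(f"SMS to {phone}: your code is {code}")      # stand-in for an SMS gateway

def verify_code(phone: str, code: str) -> bool:
    entry = CODES.pop(phone, None)                     # single-use: pop on attempt
    if entry is None or time.time() > entry[1]:
        return False
    return hmac.compare_digest(entry[0], _digest(code))

request_code("+8613800000000")
```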

Four short links: 30 September 2014

Continuous Testing, Programmable Bees, Deep Learning on GPUs, and Silk Road Numbers

  1. Continuously Testing Infrastructure — “infrastructure as code”. I can’t figure out whether what I feel are thrills or chills.
  2. Engineer Sees Big Possibilities in Micro-robots, Including Programmable Bees (National Geographic) — He and fellow researchers devised novel techniques to fabricate, assemble, and manufacture the miniature machines, each with a housefly-size thorax, three-centimeter (1.2-inch) wingspan, and weight of just 80 milligrams (.0028 ounces). The latest prototype rises on a thread-thin tether, flaps its wings 120 times a second, hovers, and flies along preprogrammed paths. (via BoingBoing)
  3. cuDNN — NVIDIA’s library of primitives for deep neural networks (on GPUs, natch). Not open source (registerware).
  4. Analysing Trends in Silk Road 2.0 — If, indeed, every sale can map to a transaction, some vendors are doing huge amounts of business through mail order drugs. While the number is small, if we sum up all the product reviews x product prices, we get a huge number of USD $20,668,330.05. REMEMBER! This is on Silk Road 2.0 with a very small subset of their entire inventory. A peek into a largely invisible economy. (The arithmetic is sketched after this list.)
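
The revenue estimate in item 4 is plain arithmetic: sum review counts times prices over all listings. A toy version over invented data (the real analysis used scraped Silk Road 2.0 listings):

```python
# Toy version of the revenue estimate: sum(review_count * price) over listings.
# The listings below are invented; the linked analysis used scraped marketplace data.
listings = [
    {"product": "A", "reviews": 312,  "price_usd": 45.00},
    {"product": "B", "reviews": 87,   "price_usd": 210.50},
    {"product": "C", "reviews": 1540, "price_usd": 12.25},
]

estimated_revenue = sum(l["reviews"] * l["price_usd"] for l in listings)
print(f"Estimated revenue: USD ${estimated_revenue:,.2f}")
```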

Four short links: 26 September 2014

Good Communities, AI Games, Design Process, and Web Server Library

  1. 15 Lessons from 15 Years of Blogging (Anil Dash) — If your comments are full of assholes, it’s your fault. Good communities don’t just happen by accident.
  2. Replicating DeepMind — open source attempt to build a deep learning network that can play Atari games. (A tabular sketch of the underlying Q-learning update follows the list.) (via RoboHub)
  3. ToyTalk — fantastic iterative design process for the product (see the heading “A Bit of Trickery”).
  4. h2o — an optimized HTTP server implementation that can be used either as a standalone server or a library.
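
The Atari work in item 2 rests on Q-learning, which DeepMind’s system approximates with a deep network. Here is a minimal tabular sketch of just the core update rule; the action space, learning parameters, and state names are placeholders, not the project’s actual code.

```python
# Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
# DQN replaces the table with a deep network; this shows only the core rule.
import random
from collections import defaultdict

N_ACTIONS = 4                        # placeholder action space
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1
Q = defaultdict(lambda: [0.0] * N_ACTIONS)

def choose_action(state):
    if random.random() < EPSILON:    # explore occasionally
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])  # otherwise exploit

def update(state, action, reward, next_state):
    target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (target - Q[state][action])

# One illustrative step with made-up states and reward:
s = "state0"
a = choose_action(s)
update(s, a, reward=1.0, next_state="state1")
```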

Four short links: 19 September 2014

Deep Learning Bibliography, Go Playground, Tweet-a-Program, and Memory Management

  1. Deep Learning Bibliography — an annotated bibliography of recent publications (2014-) related to Deep Learning.
  2. Inside the Go Playground — on safely offering a REPL over the web to strangers.
  3. Wolfram Tweet-a-Program — clever marketing trick, and reminiscent of Perl Golf-style “how much can you fit into how little” contests.
  4. Memory Management Reference — almost all you ever wanted to know about memory management. (A small CPython reference-counting demonstration follows the list.)
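
Reference counting is one of the core schemes the Memory Management Reference covers, and CPython makes it easy to observe. A small demonstration (the counts include a temporary reference held by getrefcount itself):

```python
# Observing CPython's reference counting, one of the schemes the
# Memory Management Reference catalogs.
import sys

obj = []
print(sys.getrefcount(obj))  # typically 2: `obj` plus getrefcount's own argument

alias = obj                  # a second reference to the same list
print(sys.getrefcount(obj))  # 3

del alias                    # dropping a reference decrements the count
print(sys.getrefcount(obj))  # back to 2; at zero, the object is freed
```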

Four short links: 7 August 2014

Material Design, Stewart's Slack, Sketching in Javascript, and Neural Networks and Deep Learning

  1. Material Design in the Google I/O App (Medium) — steps through design thinking as they put Google’s new design metaphor in place. I’ve been chewing on material design. It brings an internal consistency and logic to the Android world that Apple’s iOS and OS X visual worlds have been losing over the years. How long until web users expect this consistency too?
  2. Stewart and Slack (Wired) — profile of Foo Stewart Butterfield and his shiny Slack startup.
  3. p5js — a new Processing-inspired code-as-sketching library in Javascript. Using the original metaphor of a software sketchbook, p5.js has a full set of drawing functionality. However, you’re not limited to your drawing canvas; you can think of your whole browser page as your sketch!
  4. Neural Networks and Deep Learning — a free online book to teach you … well, neural networks and deep learning. (A tiny numpy forward pass in the book’s spirit follows the list.)
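
The book in item 4 opens with sigmoid neurons, and the whole idea fits in a few lines of numpy. A minimal forward pass through one hidden layer, with made-up weights and input:

```python
# A tiny neural-network forward pass: the sigmoid neurons Nielsen's book starts with.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)  # hidden layer: 4 inputs -> 3 neurons
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)  # output layer: 3 -> 1

x = np.array([0.5, -1.2, 3.0, 0.1])            # an arbitrary input vector
hidden = sigmoid(W1 @ x + b1)                  # each neuron: sigmoid(w . x + b)
output = sigmoid(W2 @ hidden + b2)
print(output)
```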

How to build and run your first deep learning network

Step-by-step instruction on training your own neural network.


When I first became interested in using deep learning for computer vision, I found it hard to get started. There were only a couple of open source projects available; they had little documentation, were very experimental, and relied on a lot of tricky-to-install dependencies. A lot of new projects have appeared since, but they’re still aimed at vision researchers, so you’ll still hit a lot of the same obstacles if you’re approaching them from outside the field.

In this article — and the accompanying webcast — I’m going to show you how to run a pre-built network, and then take you through the steps of training your own. I’ve listed the steps I followed to set up everything toward the end of the article, but because the process is so involved, I recommend you download a Vagrant virtual machine that I’ve pre-loaded with everything you need. This VM lets us skip over all the installation headaches and focus on building and running the neural networks.
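
Since this excerpt doesn’t name the article’s toolchain, here is a modern stand-in for its first step, running a pre-built network: load a pretrained ImageNet classifier and predict on a single image. The ResNet50 choice and the image path are assumptions, not the article’s actual setup.

```python
# Modern stand-in for "run a pre-built network": classify one image with a
# pretrained ImageNet model. Not the article's toolchain; model and path assumed.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)

model = ResNet50(weights="imagenet")

img = tf.keras.utils.load_img("cat.jpg", target_size=(224, 224))  # hypothetical file
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))
preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])  # [(class_id, name, score), ...]
```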


Four short links: 15 July 2014

Data Brokers, Car Data, Pattern Classification, and Hogwild Deep Learning

  1. Inside Data Brokers — very readable explanation of the data brokers and how their information is used to track advertising effectiveness.
  2. Elon, I Want My Data! — Tesla doesn’t give you access to the data that your car collects. Bodes poorly for the Internet of Sealed Boxes. (via BoingBoing)
  3. Pattern Classification (Github) — collection of tutorials and examples for solving and understanding machine learning and pattern classification tasks.
  4. HOGWILD! (PDF) — the algorithm that Microsoft credits with the success of their Adam deep learning system. (A sketch of the lock-free update pattern follows the list.)
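
HOGWILD!’s idea is simply stated: run SGD from many threads against shared weights with no locking, betting that sparse updates rarely touch the same coordinates. A toy Python sketch of the pattern follows; the data is invented, and a real implementation would sidestep Python’s GIL with C or shared-memory processes.

```python
# HOGWILD!-style sketch: several threads run SGD on shared weights with no locks,
# relying on sparse updates rarely colliding. Toy logistic-regression data.
import threading
import numpy as np

D = 1000
w = np.zeros(D)          # shared parameter vector, updated without synchronization
LR = 0.1

def sgd_worker(seed, n_steps):
    rng = np.random.default_rng(seed)
    for _ in range(n_steps):
        idx = rng.choice(D, size=5, replace=False)  # a sparse example: 5 features
        x = np.ones(5)
        y = 1.0                                     # made-up label
        pred = 1.0 / (1.0 + np.exp(-(w[idx] @ x)))
        w[idx] += LR * (y - pred) * x               # the unsynchronized update

threads = [threading.Thread(target=sgd_worker, args=(s, 1000)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(w[:10])
```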

What is deep learning, and why should you care?

Announcing a new series delving into deep learning and the inner workings of neural networks.


Editor’s note: this post is part of our Intelligence Matters investigation.

When I first ran across the results in the Kaggle image-recognition competitions, I didn’t believe them. I’ve spent years working with machine vision, and the reported accuracy on tricky tasks like distinguishing dogs from cats was beyond anything I’d seen, or imagined I’d see anytime soon. To understand more, I reached out to one of the competitors, Daniel Nouri, and he demonstrated how he used the Decaf open-source project to do so well. Even better, he showed me how he was quickly able to apply it to a whole bunch of other image-recognition problems we had at Jetpac, and produce much better results than my conventional methods.

I’ve never encountered such a big improvement from a technique that was largely unheard of just a couple of years before, so I became obsessed with understanding more. To be able to use it commercially across hundreds of millions of photos, I built my own specialized library to efficiently run prediction on clusters of low-end machines and embedded devices, and I also spent months learning the dark arts of training neural networks. Now I’m keen to share some of what I’ve found, so if you’re curious about what on earth deep learning is, and how it might help you, I’ll be covering the basics in a series of blog posts here on Radar, and in a short upcoming ebook.
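
As a taste of the basics the series will cover, here is the operation at the heart of those image-recognition networks: a 2D convolution (strictly, cross-correlation, which is what deep learning libraries compute), written out naively in numpy over toy inputs.

```python
# Naive 2D convolution (cross-correlation, as deep learning frameworks compute it):
# slide a kernel over an image and take a dot product at each position.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 "image"
edge_kernel = np.array([[1.0, -1.0]])             # crude horizontal edge detector
print(conv2d(image, edge_kernel))
```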
