"machine learning" entries

Four short links: 15 July 2015

OpeNSAurce, Multimaterial Printing, Functional Javascript, and Outlier Detection

  1. System Integrity Management Platform (Github) — NSA releases security compliance tool for government departments.
  2. 3D-Printed Explosive Jumping Robot Combines Firm and Squishy Parts (IEEE Spectrum) — Different parts of the robot grade over three orders of magnitude from stiff like plastic to squishy like rubber, through the use of nine different layers of 3D printed materials.
  3. Professor Frisby’s Mostly Adequate Guide to Functional Programming — a book on functional programming, using Javascript as the programming language.
  4. Tracking Down Villains — the software and algorithms that Netflix uses to detect outliers in their infrastructure monitoring.
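
The Netflix post describes its own pipeline; purely to illustrate the kind of test involved, here is a minimal sketch (with hypothetical per-server latencies) of one standard outlier check, the modified z-score over the median absolute deviation. This is not Netflix's actual algorithm.

    import numpy as np

    def mad_outliers(values, threshold=3.5):
        """Flag values far from the median, using the median absolute
        deviation (MAD) as a robust measure of spread."""
        values = np.asarray(values, dtype=float)
        median = np.median(values)
        mad = np.median(np.abs(values - median))
        if mad == 0:
            return np.zeros(len(values), dtype=bool)
        # 0.6745 rescales MAD so the score is comparable to a z-score
        modified_z = 0.6745 * (values - median) / mad
        return np.abs(modified_z) > threshold

    # Hypothetical per-server request latencies (ms); server 3 is misbehaving
    latencies = [12.1, 11.8, 12.3, 94.0, 12.0, 11.9]
    print(mad_outliers(latencies))  # [False False False  True False False]
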
Four short links: 8 July 2015

Encrypted Databases, Product Management, Patenting Machine Learning, and Programming Ethics

  1. Zero Knowledge and Homomorphic Encryption (ZDNet) — coverage of a few startups working on providing databases that don’t need to decrypt the data they store and retrieve. (A toy demonstration of the idea follows this list.)
  2. How Not to Suck at Making Products — Never confuse “category you’re in” with the “value you deliver.” Customers only care about the latter.
  3. Google Patenting Machine Learning Developments (Reddit) — I am afraid that Google has just started an arms race, which could do significant damage to academic research in machine learning. Now it’s likely that other companies using machine learning will rush to patent every research idea that was developed in part by their employees. We have all been in a prisoner’s dilemma situation, and Google just defected. Now researchers will guard their ideas much more combatively, given that it’s now fair game to patent these ideas, and big money is at stake.
  4. Machine Ethics (Nature) — machine learning ethics versus rule-driven ethics. Logic is the ideal choice for encoding machine ethics, argues Luís Moniz Pereira, a computer scientist at the Nova Laboratory for Computer Science and Informatics in Lisbon. “Logic is how we reason and come up with our ethical choices,” he says. I disagree with his premises.
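
For a concrete (toy, insecure) feel for what "computing on data without decrypting it" means: textbook RSA without padding happens to be multiplicatively homomorphic, so the sketch below multiplies two ciphertexts and decrypts the product. Real products use schemes such as Paillier or fully homomorphic encryption, not this.

    # Toy homomorphic encryption demo: textbook RSA (no padding) is
    # multiplicatively homomorphic -- Enc(a) * Enc(b) = Enc(a * b).
    # Insecure toy with tiny primes, for illustration only.
    p, q = 61, 53
    n = p * q                          # public modulus
    e = 17                             # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

    def enc(m):
        return pow(m, e, n)

    def dec(c):
        return pow(c, d, n)

    a, b = 7, 6
    product_ciphertext = (enc(a) * enc(b)) % n  # multiply ciphertexts only
    assert dec(product_ciphertext) == a * b     # 42, computed "under encryption"
    print(dec(product_ciphertext))              # 42
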
Four short links: 6 July 2015

DeepDream, In-Flight WiFi, Computer Vision in Preservation, and Testing Distributed Systems

  1. DeepDream — the software that’s been giving the Internet acid-free trips.
  2. In-Flight WiFi Business — numbers and context for why some airlines (JetBlue) have fast free in-flight wifi while others (Delta) have pricey slow in-flight wifi. Four years ago ViaSat-1 went into geostationary orbit, putting all other broadband satellites to shame with 140 Gbps of total capacity. This is the Ka-band satellite that JetBlue’s fleet connects to, and while the airline has to share that bandwidth with homes across North America that subscribe to ViaSat’s Exede residential broadband service, it faces no shortage of capacity. That’s why JetBlue is able to deliver 10-15 Mbps speeds to its passengers.
  3. British Library Digitising Newspapers (The Guardian) — as well as photogrammetry methods used in the Great Parchment Book project, Terras and colleagues are exploring the potential of a host of techniques, including multispectral imaging (MSI). Inks, pencil marks, and paper all reflect, absorb, or emit particular wavelengths of light, ranging from the infrared end of the electromagnetic spectrum, through the visible region and into the UV. By taking photographs using different light sources and filters, it is possible to generate a suite of images. “We get back this stack of about 40 images of the [document] and then we can use image-processing to try to see what is in [some of them] and not others,” Terras explains. (A toy sketch of the band-stack idea follows this list.)
  4. Testing a Distributed System (ACM) — This article discusses general strategies for testing distributed systems as well as specific strategies for testing distributed data storage systems.
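
As a toy illustration of the multispectral stack Terras describes (many images of the same page under different wavelengths, mined for features that appear in only a few bands), here is a minimal numpy sketch over synthetic data; the band count, noise levels, and threshold are invented.

    import numpy as np

    # Synthetic stand-in for a multispectral stack: 40 bands of a 100x100 page.
    rng = np.random.default_rng(0)
    stack = rng.normal(0.8, 0.02, size=(40, 100, 100))  # bright, blank "paper"

    # Faded ink: nearly invisible in most bands, dark in a few infrared bands.
    stack[35:38, 40:60, 40:60] -= 0.3

    # Per-pixel contrast across bands: pixels whose reflectance varies a lot
    # from band to band are candidates for recovered writing.
    band_range = stack.max(axis=0) - stack.min(axis=0)
    ink_mask = band_range > 0.2
    print(ink_mask.sum(), "pixels flagged as possible ink")  # ~400: the 20x20 patch
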
Four short links: 1 July 2015

Recovering from Debacle, Open IRS Data, Time Series Requirements, and Error Messages

  1. Google Dev Apologies After Photos App Tags Black People as Gorillas (Ars Technica) — this is how you recover from an unequivocally horrendous mistake.
  2. IRS Finally Agrees to Release Non-Profit Records (BoingBoing) — Today, the IRS released a statement saying they’re going to do what we’ve been hoping for, saying they are going to release e-file data and this is a “priority for the IRS.” Only took $217,000 in billable lawyer hours (pro bono, thank goodness) to get there.
  3. Time Series Database Requirements — classic paper, laying out why time-series databases are so damn weird. Their access patterns are so unique because of the way data is over-gathered and pushed ASAP to the store. It’s mostly recent, mostly never useful, and mostly needed in order. A toy sketch of that access pattern follows this list. (via Thoughts on Time-Series Databases)
  4. Compiler Errors for Humans — it’s so important, and generally underbaked in languages. A decade or more ago, I was appalled by Python’s errors after Perl’s very useful messages. Today, appreciating Go’s generally handy errors. How a system handles the operational failures that will inevitably occur is part and parcel of its UX.
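
To make that access pattern concrete, here is a toy time-bucketed store: a sketch of the "append mostly in order, read recent ranges in order" shape the paper describes, not any real database's design.

    from collections import defaultdict

    class TinyTSDB:
        """Toy store shaped like the access pattern in the paper:
        appends arrive roughly in time order, and reads are almost always
        'recent range, in order.' Points are bucketed by hour so stale
        buckets can be compressed or expired wholesale."""

        BUCKET = 3600  # seconds per bucket

        def __init__(self):
            self.buckets = defaultdict(list)  # bucket start -> [(ts, value)]

        def append(self, ts, value):
            self.buckets[ts - ts % self.BUCKET].append((ts, value))

        def scan(self, start, end):
            """Return points with start <= ts < end, in time order."""
            out = []
            b = start - start % self.BUCKET
            while b < end:
                # each bucket is nearly sorted already; sort cheaply on read
                for ts, v in sorted(self.buckets.get(b, [])):
                    if start <= ts < end:
                        out.append((ts, v))
                b += self.BUCKET
            return out

    db = TinyTSDB()
    for t in range(0, 7200, 10):        # over-gathered: one point every 10s
        db.append(t, t * 0.1)
    print(len(db.scan(7000, 7200)))     # the usual query: recent, in order -> 20
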
Four short links: 15 June 2015

Streams at Scale, Molecular Programming, Formal Verification, and Deep Learning's Flaws

  1. Twitter Heron: Stream Processing at Scale (Paper a Day) — very readable summary of Apache Storm’s failings, and Heron’s improvements.
  2. Molecular Programming Project — aims to develop computer science principles for programming information-bearing molecules like DNA and RNA to create artificial biomolecular programs of similar complexity. Our long-term vision is to establish molecular programming as a subdiscipline of computer science — one that will enable a yet-to-be imagined array of applications from chemical circuitry for interacting with biological molecules to nanoscale computing and molecular robotics.
  3. The Software Analysis Workbench — provides the ability to formally verify properties of code written in C, Java, and Cryptol. It leverages automated SAT and SMT solvers to make this process as automated as possible, and provides a scripting language, called SAW Script, to enable verification to scale up to more complex systems. “Non-commercial” license. (A tiny taste of SMT-backed proving follows this list.)
  4. What’s Wrong with Deep Learning? (PDF in Google Drive) — What’s missing from deep learning? 1. Theory; 2. Reasoning, structured prediction; 3. Memory, short-term/working/episodic memory; 4. Unsupervised learning that actually works. … and then ways to get those things. Caution: math ahead.
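
SAW drives verification through SAW Script, but the flavor of SAT/SMT-backed proving is easy to show with the Z3 solver's Python bindings instead (an assumption: you have the z3-solver package installed; this is not SAW itself). Here is a tiny proof that a branchless absolute-value trick is correct for all 32-bit inputs.

    # A taste of SMT-backed verification, using Z3 rather than SAW:
    # prove the classic branchless absolute-value trick matches a
    # straightforward definition for every 32-bit input.
    from z3 import BitVec, If, prove

    x = BitVec("x", 32)
    mask = x >> 31                      # arithmetic shift: 0 or -1
    branchless_abs = (x + mask) ^ mask
    reference_abs = If(x >= 0, x, -x)   # wraps on INT_MIN, as C does
    prove(branchless_abs == reference_abs)  # prints "proved"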

The business value of unifying data

Practical applications of human-in-the-loop machine learning.


With hundreds, thousands, or even just tens of suppliers — each with different business units, payment terms, and locations — businesses are faced with a monumental task: unifying all of their supplier-related data, and doing it fast enough for the result to be useful. To ask deep questions of their data, companies are increasingly looking for a single, unified view of their supply chain.

And yet, business data is often stored in different sources, systems, and formats, resulting in silos of information. These data silos take the form of enterprise resource planning systems, CSV files, spreadsheets, and relational databases. To pull together all of the data from these disparate sources, a business faces three interrelated challenges:

  1. Speed. Traditionally, businesses have attempted to catalog and organize supply chain data manually — profiling and integrating data themselves, which leads directly to the next challenge: cost.
  2. Cost. Manual work is expensive work. Usually more than one employee will need to work on the same data set in order to move quickly enough for the results to have any value for the business. Even with several employees working on the same data sets, this work will still not achieve what could be done at machine scale.
  3. Efficiency. Relying completely on humans to organize and unify data is a situation ripe for error. Plus, there’s often no audit trail, and the work results in inherently incomplete views of information.

In a recent live demo by Dr. Clare Bernard, a field engineer at Tamr, I got a glimpse into how Tamr is using a combination of machine learning algorithms and input from subject matter experts to help businesses unify their data for analysis. A practice that uses short-term human intervention to actively improve machine models, human-in-the-loop machine learning is taking off across all types of industries, including fashion, automotive, and cloud services such as Google Maps.
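
Tamr's system itself is proprietary, but the general human-in-the-loop pattern is simple: a model auto-labels the record pairs it is confident about and routes only the uncertain ones to experts, whose answers become new training data. Here is a minimal sketch of that loop; the thresholds and the token-overlap "model" over supplier names are invented for illustration.

    def human_in_the_loop(pairs, model_prob, ask_expert, low=0.2, high=0.8):
        """Auto-label confident pairs, send uncertain ones to a human,
        and return labels plus the expert answers to retrain on."""
        labels, new_training_data = {}, []
        for pair in pairs:
            p = model_prob(pair)
            if p >= high:
                labels[pair] = True          # confident match
            elif p <= low:
                labels[pair] = False         # confident non-match
            else:
                answer = ask_expert(pair)    # human resolves the gray zone
                labels[pair] = answer
                new_training_data.append((pair, answer))
        return labels, new_training_data

    # Hypothetical toy "model": token overlap between supplier names.
    def model_prob(pair):
        a, b = (set(s.lower().split()) for s in pair)
        return len(a & b) / len(a | b)

    pairs = [("Acme Corp", "Acme Corp"), ("Acme Corp", "Bolt Ltd"),
             ("Acme Corp", "Acme Corporation")]
    labels, to_train = human_in_the_loop(
        pairs, model_prob, ask_expert=lambda p: True)  # stub standing in for a human
    print(labels)    # only the ambiguous third pair went to the expert
    print(to_train)  # expert answers to fold into the next training round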

Four short links: 27 May 2015

Domo Arigato Mr Google, Distributed Graph Processing, Experiencing Ethics, and Deep Learning Robots

  1. Roboto — Google’s signature font is open sourced (Apache 2.0), including the toolchain to build it.
  2. Pregel: A System for Large-Scale Graph Processing — a walk through a key 2010 paper from Google, on the distributed graph system that is the inspiration for Apache Giraph and which sits under PageRank. (A toy vertex-centric PageRank sketch follows this list.)
  3. How to Turn a Liberal Hipster into a Global Capitalist (The Guardian) — In Zoe Svendsen’s play “World Factory at the Young Vic,” the audience becomes the cast. Sixteen teams sit around factory desks playing out a carefully constructed game that requires you to run a clothing factory in China. How to deal with a troublemaker? How to dupe the buyers from ethical retail brands? What to do about the ever-present problem of clients that do not pay? […] And because the theatre captures data on every choice by every team, for every performance, I know we were not alone. The aggregated flowchart reveals that every audience, on every night, veers toward money and away from ethics. I’m a firm believer that games can give you visceral experience, not merely intellectual knowledge, of an activity. Interesting to see it applied so effectively to business.
  4. End to End Training of Deep Visuomotor Policies (PDF) — paper on using deep learning to teach robots how to manipulate objects, by example.
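
To give the vertex-centric model some shape, here is a single-machine toy that mimics Pregel-style supersteps for PageRank. It is not the Pregel or Giraph API, just the pattern: in each superstep, every vertex consumes its incoming messages, updates its value, and sends messages along its out-edges.

    # Toy, single-machine imitation of Pregel's "think like a vertex" model.
    graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}  # hypothetical graph
    rank = {v: 1.0 / len(graph) for v in graph}
    damping, supersteps = 0.85, 30

    for _ in range(supersteps):
        inbox = {v: [] for v in graph}
        for v, out_edges in graph.items():           # "send" phase
            share = rank[v] / len(out_edges)
            for w in out_edges:
                inbox[w].append(share)
        for v in graph:                              # "compute" phase
            rank[v] = (1 - damping) / len(graph) + damping * sum(inbox[v])

    print({v: round(r, 3) for v, r in rank.items()})
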
Four short links: 25 May 2015

8 (Bits) Is Enough, Second Machine Age, LLVM OpenMP, and Javascript Graphs

  1. Why Are Eight Bits Enough for Deep Neural Networks? (Pete Warden) — It turns out that neural networks are different. You can run them with eight-bit parameters and intermediate buffers, and suffer no noticeable loss in the final results. This was astonishing to me, but it’s something that’s been re-discovered over and over again. (A minimal sketch of the quantization idea follows this list.)
  2. The Great Decoupling (HBR) — The Second Machine Age is playing out differently than the First Machine Age, continuing the long-term trend of material abundance but not of ever-greater labor demand.
  3. OpenMP Support in LLVM — OpenMP enables Clang users to harness full power of modern multi-core processors with vector units. Pragmas from OpenMP 3.1 provide an industry standard way to employ task parallelism, while ‘#pragma omp simd’ is a simple yet flexible way to enable data parallelism (aka vectorization).
  4. JS Graphs — a visual catalogue (with search) of Javascript graphing libraries.
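
To see roughly why eight bits can be enough, here is a minimal sketch of linear quantization: round-trip a hypothetical layer's float weights through int8 with a per-tensor scale, then compare the layer's output before and after. This is the simplest version of the idea, not Warden's production scheme.

    import numpy as np

    def quantize_int8(w):
        """Linear quantization: map floats onto int8 with a per-tensor scale."""
        scale = np.abs(w).max() / 127.0
        q = np.round(w / scale).astype(np.int8)
        return q, scale

    rng = np.random.default_rng(42)
    w = rng.normal(0, 0.1, size=(256, 256))   # hypothetical layer weights
    x = rng.normal(0, 1.0, size=256)          # hypothetical activations

    q, scale = quantize_int8(w)
    y_float = w @ x
    y_int8 = (q.astype(np.float32) * scale) @ x   # dequantize, then multiply

    rel_err = np.abs(y_int8 - y_float).max() / np.abs(y_float).max()
    print(f"max relative error: {rel_err:.4f}")   # small relative to the signal
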
Four short links: 14 May 2015

Human-Machine Cooperation, Concurrent Systems Books, AI Future, and Gesture UI

  1. Ghosts in the Machines (Courtney Nash) — People are neither masters of machines, nor subservient to their machine-learning outcomes — we cannot, and should not, separate the two. We are actors, together, in a very complex system. David Woods calls this “joint cognitive systems.”
  2. TLA+ (Leslie Lamport) — two tutorials: “Principles of Concurrent Computing” and “Specification of Concurrent Systems.” Ironically, I see people grizzling that the book on distributed systems hasn’t been linearised. I wonder if you can partition it into the two tutorials and still have full availability…
  3. Deep Learning vs Probabilistic vs Logic — As of 2015, I pity the fool who prefers Modus Ponens over Gradient Descent.
  4. Touché (Disney Research) — measur[es] capacitive response of object and human at multiple frequencies, a technique that we called Swept Frequency Capacitive Sensing. The signal travels through different paths depending on its frequency, capturing the posture of human hand and body as well as other properties of the context. The resulted data is classified using machine learning algorithms to identify gestures that are then used to trigger desired responses of the user interface.
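
The pipeline Touché describes (a response curve measured across swept frequencies, classified with machine learning) can be caricatured with a nearest-centroid classifier over synthetic sweeps. The gesture names and signal shapes below are invented; Disney's actual classifier and signals are in the paper.

    import numpy as np

    # Synthetic stand-in for swept-frequency capacitive sensing: each sample
    # is the response measured at 64 frequencies; gestures differ in the
    # shape of the response curve.
    rng = np.random.default_rng(1)
    freqs = np.linspace(0, 1, 64)

    def sweep(gesture):
        base = {"one_finger": np.sin(2 * np.pi * freqs),
                "full_grasp": np.sin(4 * np.pi * freqs)}[gesture]
        return base + rng.normal(0, 0.1, size=64)   # sensor noise

    # Train a nearest-centroid classifier: average 20 sweeps per gesture.
    train = {g: np.mean([sweep(g) for _ in range(20)], axis=0)
             for g in ("one_finger", "full_grasp")}

    def classify(sample):
        return min(train, key=lambda g: np.linalg.norm(sample - train[g]))

    print(classify(sweep("full_grasp")))  # "full_grasp"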

Topic modeling for the newbie

Learning the fundamentals of natural language processing.

Editor’s note: This is an excerpt from our recent book Data Science from Scratch, by Joel Grus. It provides a survey of topics from statistics and probability to databases, from machine learning to MapReduce, giving the reader a foundation for understanding, and examples and ideas for learning more.

When we built our Data Scientists You Should Know recommender in Chapter 1, we simply looked for exact matches in people’s stated interests.

A more sophisticated approach to understanding our users’ interests might try to identify the topics that underlie those interests. A technique called latent Dirichlet allocation (LDA) is commonly used to identify common topics in a set of documents. We’ll apply it to documents that consist of each user’s interests.

LDA has some similarities to the Naive Bayes Classifier we built in Chapter 13, in that it assumes a probabilistic model for documents. We’ll gloss over the hairier mathematical details, but for our purposes the model assumes that:

  • There is some fixed number K of topics.
  • There is a random variable that assigns each topic an associated probability distribution over words. You should think of this distribution as the probability of seeing word w given topic k.
  • There is another random variable that assigns each document a probability distribution over topics. You should think of this distribution as the mixture of topics in document d.
  • Each word in a document was generated by first randomly picking a topic (from the document’s distribution of topics) and then randomly picking a word (from the topic’s distribution of words).

In particular, we have a collection of documents, each of which is a list of words. And we have a corresponding collection of document_topics that assigns a topic (here a number between 0 and K – 1) to each word in each document.
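
The chapter goes on to fit this model with Gibbs sampling. As a condensed sketch of where that is heading (toy data, simplified structure, not the book's exact code): repeatedly resample each word's topic in proportion to how common that topic is in the document times how common the word is in the topic.

    import random
    from collections import Counter

    random.seed(0)
    K = 2                      # number of topics
    alpha, beta = 0.1, 0.1     # Dirichlet smoothing hyperparameters

    documents = [["java", "big", "data"], ["python", "pandas", "data"],
                 ["yoga", "running"], ["running", "java"]]  # toy interests
    vocab = sorted({w for doc in documents for w in doc})

    # randomly assign an initial topic to every word in every document
    document_topics = [[random.randrange(K) for _ in doc] for doc in documents]
    doc_topic = [Counter(ts) for ts in document_topics]   # topic counts per doc
    topic_word = [Counter() for _ in range(K)]            # word counts per topic
    topic_total = [0] * K
    for doc, topics in zip(documents, document_topics):
        for w, t in zip(doc, topics):
            topic_word[t][w] += 1
            topic_total[t] += 1

    def weight(d, w, t):
        # P(topic | document) * P(word | topic), with Dirichlet smoothing
        p_topic = (doc_topic[d][t] + alpha) / (len(documents[d]) + K * alpha)
        p_word = (topic_word[t][w] + beta) / (topic_total[t] + len(vocab) * beta)
        return p_topic * p_word

    for _ in range(500):                     # collapsed Gibbs sampling
        for d, doc in enumerate(documents):
            for i, w in enumerate(doc):
                t = document_topics[d][i]    # remove the current assignment
                doc_topic[d][t] -= 1; topic_word[t][w] -= 1; topic_total[t] -= 1
                weights = [weight(d, w, k) for k in range(K)]
                t = random.choices(range(K), weights)[0]   # resample a topic
                document_topics[d][i] = t    # record the new assignment
                doc_topic[d][t] += 1; topic_word[t][w] += 1; topic_total[t] += 1

    for k in range(K):                       # inspect what each topic learned
        print(k, topic_word[k].most_common(3))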