Four short links: 4 March 2015

Go Microservices, Watch Experience, Multithreading Bugs, and Spooks Ahoy

  1. Microservices in Go — tale of rewriting a Ruby monolith as Go microservices. Interesting, though being delivered at Gophercon India suggests the ending is probably not unhappy.
  2. Watch & Wear (John Cross Neumann) — Android watch as predictor of the value and experience of an Apple Watch. I believe this is the true sweet spot for meaningful wearable experiences. Information that matters to you in the moment, but requires no intervention. Wear actually does this extremely well through Google Now. Traffic, Time to Home, Reminders, Friend’s Birthdays, and Travel Information all work beautifully. […] After some real experience with Wear, I think what is more important is to consider what Apple Watch is missing: Google Services. Google Services are a big component of what can make wearing a tiny screen on your wrist meaningful and personal. I wouldn’t be surprised after the initial wave of apps through the app store if Google Now ends up being the killer app for Apple Watch.
  3. Solving 11 Likely Problems In Your Multithreaded Code (Joe Duffy) — a good breakdown of concurrency problems, including lower-level ones than high-level languages expose. But beware: “If you try this [accessing variables without synchronisation] on a misaligned memory location, or a location that isn’t naturally sized, you can encounter a read or write tearing. Tearing occurs because reading or writing such locations actually involves multiple physical memory operations. Concurrent updates can happen in between these, potentially causing the resultant value to be some blend of the before and after values.” (A sketch of the general problem follows this list.)
  4. Obama Sharply Criticizes China’s Plans for New Technology Rules (Reuters) — “In an interview with Reuters, Obama said he was concerned about Beijing’s plans for a far-reaching counterterrorism law that would require technology firms to hand over encryption keys, the passcodes that help protect data, and install security ‘backdoors’ in their systems to give Chinese authorities surveillance access.” Goose sauce is NOT gander sauce! NOT! Mmm, delicious spook sauce.
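Tearing itself is a hardware-level effect you won’t see from a high-level language, but the broader failure mode the article catalogs, unsynchronized access to shared state, is easy to reproduce. Below is a minimal Python sketch (my own illustration, not code from Duffy’s article): several threads perform read-modify-write updates on a shared counter, and without a lock some updates can be lost.

```python
# Minimal sketch (not from Duffy's article): an unsynchronized read-modify-write
# race on a shared counter, and the same loop made safe with a lock.
import threading

counter = 0
lock = threading.Lock()

def increment(n, use_lock):
    global counter
    for _ in range(n):
        if use_lock:
            with lock:
                counter += 1
        else:
            # load, add, store are separate steps; another thread can interleave
            counter += 1

def run(use_lock, workers=4, n=100_000):
    global counter
    counter = 0
    threads = [threading.Thread(target=increment, args=(n, use_lock))
               for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print("without lock:", run(False))  # may fall short of 400000: lost updates
print("with lock:   ", run(True))   # always 400000
```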

7 user research myths and mistakes

Finding the holes in qualitative and quantitative testing.

I can’t tell you how often I hear things from engineers like, “Oh, we don’t have to do user testing. We’ve got metrics.” Of course, you can almost forgive them when the designers are busy saying things like, “Why would we A/B test this new design? We know it’s better!”

In the debate over whether to use qualitative or quantitative research methods, there is plenty of wrong to go around. So, let’s look at some of the myths surrounding qualitative and quantitative research, and the most common mistakes people make when trying to use them.

Year Zero: Our life timelines begin

In the next decade, Year Zero will be how big data reaches everyone and will fundamentally change how we live.

Editor’s note: this post originally appeared on the author’s blog, Solve for Interesting. This lightly edited version is reprinted here with permission.

In 10 years, every human connected to the Internet will have a timeline. It will contain everything we’ve done since we started recording, and it will be the primary tool with which we administer our lives. This will fundamentally change how we live, love, work, and play. And we’ll look back at the time before our feed started — before Year Zero — as a huge, unknowable black hole.

This timeline — beginning for newborns at Year Zero — will be so intrinsic to life that it will quickly be taken for granted. Those without a timeline will be at a huge disadvantage. Those with a good one will have the tricks of a modern mentalist: perfect recall, suggestions for how to curry favor, ease maintaining friendships and influencing strangers, unthinkably higher Dunbar numbers — now, every interaction has a history.

This isn’t just about lifelogging health data, like your Fitbit or Jawbone. It isn’t about financial data, like Mint. It isn’t just your social graph or photo feed. It isn’t about commuting data like Waze or Maps. It’s about all of these, together, along with the tools and user interfaces and agents to make sense of it.

Every decade or so, something from military or enterprise technology finds its way, bent and twisted, into the mass market. The client-server computer gave us the PC; wide-area networks gave us the consumer web; pagers and cell phones gave us mobile devices. In the next decade, Year Zero will be how big data reaches everyone.

What experts do: Curse of the intermediate

A framework for what separates those whose skills continue to build from those who stall out no matter how much they try.

We all know the Curse of Expertise — that thing that makes most experts awful at imagining what it’s like to be a novice. The Curse of Expertise makes tech editors weep and readers seethe. “Where’s the empathy?!” we say, as if the expert had a conscious choice. But they mostly don’t. The Curse of Expertise is not a problem for which MOAR EMPATHY is the solution. Experts don’t lack empathy; they lack the security clearance to the part of their brain where their cognitive biases live. Subconscious cognitive biases. And those biases don’t just make us (experts) fail at predicting the struggle of novices, they can also make us less likely to see novel solutions to well-worn problems.

But given a choice to suddenly be an expert or a novice, we’d pick Curse of the Sucks-To-Be-Me Expert over Curse of the I-Suck-At-This Novice. There’s a third curse, though. The mastery curve is, of course, not binary, but a continuum from first-time to Jiro-Dreams-Of-Sushi. And there in the middle? The Curse of the Intermediate. The Curse of the Intermediate is the worst because it’s the place where hopes and dreams of expertise go to die. The place where even the most patient practicer eventually believes they just don’t have what it takes.

Four short links: 2 March 2015

Onboarding UX, Productivity Vision, Bad ML, and Lifelong Learning

  1. User Onboarding Teardowns — the UX of new users. (via Andy Baio)
  2. Microsoft’s Productivity Vision — always-on thinged-up Internet everywhere, with predictions and magic by the dozen.
  3. Machine Learning Done Wrong — “When dealing with small amounts of data, it’s reasonable to try as many algorithms as possible and to pick the best one since the cost of experimentation is low. But as we hit ‘big data,’ it pays off to analyze the data upfront and then design the modeling pipeline (pre-processing, modeling, optimization algorithm, evaluation, productionization) accordingly.” (A sketch of such a pipeline follows this list.)
  4. Ten Simple Rules for Lifelong Learning According to Richard Hamming (PLoS Comp Bio) — “Exponential growth of the amount of knowledge is a central feature of the modern era. As Hamming points out, since the time of Isaac Newton (1642/3-1726/7), the total amount of knowledge (including but not limited to technical fields) has doubled about every 17 years. At the same time, the half-life of technical knowledge has been estimated to be about 15 years. If the total amount of knowledge available today is x, then in 15 years the total amount of knowledge can be expected to be nearly 2x, while the amount of knowledge that has become obsolete will be about 0.5x. This means that the total amount of knowledge thought to be valid has increased from x to nearly 1.5x. Taken together, this means that if your daughter or son was born when you were 34 years old, the amount of knowledge she or he will be faced with on entering university at age 17 will be more than twice the amount you faced when you started college.”
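To make item 3 concrete, here is a minimal sketch, assuming scikit-learn, of what designing the modeling pipeline up front can look like: pre-processing, modeling, and evaluation declared as a single explicit pipeline instead of a pile of one-off experiments. The dataset and estimator are placeholders, not recommendations from the article.

```python
# Minimal sketch: one explicit pipeline (pre-processing + model), scored by
# cross-validation so the scaler is re-fit inside each fold.
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

pipeline = Pipeline([
    ("scale", StandardScaler()),                    # pre-processing
    ("model", LogisticRegression(max_iter=1000)),   # modeling
])

# Evaluation: the whole pipeline is cross-validated, so no fold's held-out
# data leaks into the pre-processing step.
scores = cross_val_score(pipeline, X, y, cv=5)
print(scores.mean())
```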

Topic models: Past, present, and future

The O'Reilly Data Show Podcast: David Blei, co-creator of one of the most popular tools in text mining and machine learning.

I don’t remember when I first came across topic models, but I do remember being an early proponent of them in industry. I came to appreciate how useful they were for exploring and navigating large amounts of unstructured text, and was able to use them, with some success, in consulting projects. When an MCMC algorithm came out, I even cooked up a Java program that I came to rely on (up until Mallet came along).

I recently sat down with David Blei, co-author of the seminal paper on topic models, who remains one of the leading researchers in the field. We talked about the origins of topic models, their applications, improvements to the underlying algorithms, and his new role in training data scientists at Columbia University.

Generating features for other machine learning tasks

Blei frequently interacts with companies that use ideas from his group’s research projects. He noted that people in industry frequently use topic models for “feature generation.” The added bonus is that topic models produce features that are easy to explain and interpret:

“You might analyze a bunch of New York Times articles for example, and there’ll be an article about sports and business, and you get a representation of that article that says this is an article and it’s about sports and business. Of course, the ideas of sports and business were also discovered by the algorithm, but that representation, it turns out, is also useful for prediction. My understanding when I speak to people at different startup companies and other more established companies is that a lot of technology companies are using topic modeling to generate this representation of documents in terms of the discovered topics, and then using that representation in other algorithms for things like classification or other things.”
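As a rough illustration of that workflow (a sketch assuming scikit-learn, not code from Blei or the podcast): fit a topic model, represent each document by its topic proportions, then hand that representation to an ordinary classifier. The corpus and labels below are toy placeholders.

```python
# Minimal sketch of "topic models for feature generation": fit a topic model,
# use each document's topic proportions as features for a downstream classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

docs = [
    "the team won the game in overtime",
    "the company reported strong quarterly earnings",
    "the striker scored twice before halftime",
    "shares fell after the merger announcement",
]
labels = [0, 1, 0, 1]  # e.g., 0 = sports, 1 = business (toy labels)

# Bag-of-words counts -> per-document topic proportions (the interpretable part).
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_features = lda.fit_transform(counts)   # shape: (n_docs, n_topics)

# The topic proportions become features for an ordinary classifier.
clf = LogisticRegression().fit(topic_features, labels)
print(clf.predict(topic_features))
```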
