- Ludicrously Fast Page Loads: A Guide for Full-Stack Devs (Nate Berkopec) — walks slowly through the stages of page loading using Chrome Developer Tools’ timeline. Very easy to follow.
- Specialised and Hybrid Data Management and Processing Engines (Ben Lorica) — wrap-up of data engines uncovered at Strata + Hadoop World NYC 2015.
- Power of Small Groups (Matt Webb) — Matt’s joined a small Slack community of like-minded friends. There’s a space where articles written or edited by members automatically show up. I like that. I caught myself thinking: it’d be nice to have Last.FM here, too, and Dopplr. Nothing that requires much effort. Let’s also pull in Instagram. Automatic stuff so I can see what people are doing, and people can see what I’m doing. Just for this group. Back to those original intentions. Ambient awareness, togetherness. cf Clay Shirky’s situated software. Everything useful from 2004 will be rebuilt once the fetish for scale passes.
- Asymmetric Misperceptions (PDF) — research into the systematic mismatch between how politicians think their constituents feel on issues, and how the constituents actually feel. Our findings underscore doubts that policymakers perceive opinion accurately: politicians maintain systematic misperceptions about constituents’ views, typically erring by over 10 percentage points, and entire groups of politicians maintain even more severe collective misperceptions. A second, post-election survey finds the electoral process fails to ameliorate these misperceptions.
With more companies focusing on design as a competitive advantage, it seems as if everyone is suddenly a designer.
The more that I talk to people about what it means to explain design, the more I realize that everyone across all types of organizations — from product companies to nonprofits to universities to health care — is intensely interested in it. Everyone now has an opinion about design, and we’ve all been in the position of having to defend our choices or suggestions.
Developers, product owners, project managers, and even CEOs are intimately involved in design processes now — increasingly, it seems as if everyone is a designer. But it hasn’t always been this way — so, why now do so many people have an opinion about design?
In the past decade, design and UX have gone “mainstream.” The most popular and interesting companies have put design at the forefront of their product offerings, creating a buzz culture that drools over every new release and a fan following that promotes their brand for them. I’m not only thinking of Apple, but also brands such as IKEA, innovators like Tesla, and unique problem-solving designs from Dyson, Segway, or Nest. These brands command respect, elicit strong opinions, and foster loyalty from the people who follow them. This elevation of design conversations within today’s companies, organizations, and the public in general exemplifies a democratization of design that we haven’t experienced before.
Here, I’ll explore several factors contributing to design’s growing ubiquity.
Social media has changed how people view digital products
It’s not only physical products that have transformed our understanding of the value of design. Social media platforms have shown that UX is a critical component of success. Millions of people use Facebook every single day. Each minor tweak to the UI or change to the design incurs the praise or wrath of every user. Why? Because Facebook (and other services like it) is a very personal part of our lives. Never before have we had a platform for sharing the most intimate and mundane details of our everyday experiences.
The O’Reilly Radar Podcast: Rajiv Maheswaran on the science of moving dots, and Claudia Perlich on big data in advertising.
Subscribe to the O’Reilly Radar Podcast to track the technologies and people that will shape our world in the years to come.
In this week’s Radar Podcast episode, O’Reilly’s Mac Slocum chats with Rajiv Maheswaran, CEO of Second Spectrum. Maheswaran talks about machine learning applications in sports, the importance of context in measuring stats, and the future of real-time, in-game analytics.
Here are some highlights from their chat:
There’s a lot of parts of the game of basketball — pick and rolls, dribble hand-offs — that coaches really care about, about analyzing how it works on offense, how to guard them. Before big data and machine learning, people basically watched the games and marked them. It turns out that people are pretty bad at marking them accurately, and they also miss a ton of stuff. Right now, machine learning tells coaches, ‘This is how many pick and rolls these two players have had over the course of the season, how often they do all the different variations, what they’re good at, what they’re bad at.’ Coaches can really find tendencies that can help them play offense, play defense, far more efficiently, based off of machine learning.
What we’re doing is having the machine match human intuition. If I’m watching a game, I know that the shot is harder if I’m farther away, if I have multiple defenders, if they’re close, if they’re closing in on me, if I’m dribbling, the type of shot I’m taking. As a human, I watch this and I have an intuition about it. Now, by giving all that data to the machine, it can make a predictor that actually matches our intuition, and goes beyond it because it can put a number onto what our intuition tells us.
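The intuition Maheswaran describes can be sketched as a toy model. The sketch below is not Second Spectrum’s method; it is a hand-written logistic scoring function whose features mirror the ones named in the quote (shooter distance, defender proximity, dribbling, shot type), with weights invented purely for illustration.

```python
import math

def shot_make_probability(distance_ft, defender_dist_ft, dribbling, catch_and_shoot):
    """Toy logistic model of shot quality.

    The weights below are invented for illustration, not fitted to real
    tracking data; a real system would learn them from labeled shots.
    """
    score = (
        1.5
        - 0.12 * distance_ft          # farther shots are harder
        + 0.20 * defender_dist_ft     # open shots are easier
        - 0.40 * (1 if dribbling else 0)
        + 0.30 * (1 if catch_and_shoot else 0)
    )
    return 1.0 / (1.0 + math.exp(-score))

# An open catch-and-shoot three vs. a contested off-the-dribble three.
open_three = shot_make_probability(24, 8, dribbling=False, catch_and_shoot=True)
contested_three = shot_make_probability(24, 2, dribbling=True, catch_and_shoot=False)
```

The point of the sketch is the last sentence of the quote: once the features are numeric, the model puts a number on what a scout’s eye only senses, and the open look scores higher than the contested one.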
The O’Reilly Data Show podcast: Todd Lipcon on hybrid and specialized tools in distributed systems.
Subscribe to the O’Reilly Data Show Podcast to explore the opportunities and techniques driving big data and data science.
In recent months, I’ve been hearing about hybrid systems designed to handle different data management needs. At Strata + Hadoop World NYC last week, Cloudera’s Todd Lipcon unveiled an open source storage layer — Kudu — that’s good at both table scans (analytics) and random access (updates and inserts).
While specialized systems will continue to serve companies, there will be situations where the complexity of maintaining multiple systems — to eke out extra performance — will be harder to justify.
During the latest episode of the O’Reilly Data Show Podcast, I sat down with Lipcon to discuss his new project a few weeks before it was released. Here are a few snippets from our conversation:
HDFS and HBase
[Hadoop is] more like a file store. It allows you to upload files onto an arbitrarily sized cluster with 20-plus petabytes, in single clusters. The thing is, you can upload the files but you can’t edit them in place. To make any change, you have to basically put in a new file. What HBase does in distinction is that it has more of a tabular data model, where you can update and insert individual row-by-row data, and then randomly access that data [in] milliseconds. The distinction here is that HDFS is pretty good for large scans where you’re putting in a large data set, maybe doing a full parse over the data set to train a machine learning model or compute an aggregate. If any of that data changes on a frequent basis or if you want to stream the data in or randomly access individual customer records, you’re kind of out of luck on HDFS.
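The access-pattern distinction Lipcon draws can be sketched with two toy in-memory stores. This is not Hadoop or HBase API code; the class names and behavior are invented to illustrate the trade-off between immutable, scan-oriented files and a mutable, keyed row store.

```python
class AppendOnlyStore:
    """HDFS-like: files are immutable; changing a record means writing a
    whole new file, and reads are full scans."""
    def __init__(self):
        self.files = []  # each "file" is an immutable tuple of records

    def put_file(self, records):
        self.files.append(tuple(records))

    def scan(self):
        for f in self.files:
            yield from f

class RowStore:
    """HBase-like: a tabular model with in-place row updates and fast
    random access by key."""
    def __init__(self):
        self.rows = {}

    def put(self, key, value):
        self.rows[key] = value  # update or insert a single row

    def get(self, key):
        return self.rows.get(key)

hdfs_like = AppendOnlyStore()
hdfs_like.put_file([("alice", 1), ("bob", 2)])
# "Editing" alice here means writing a new file containing the change;
# a reader must scan everything and reconcile the duplicates.
hdfs_like.put_file([("alice", 3)])

hbase_like = RowStore()
hbase_like.put("alice", 1)
hbase_like.put("alice", 3)  # in-place update, keyed random access
```

A system like Kudu aims at the middle of this spectrum: fast enough at full scans for analytics, while still allowing keyed updates and inserts.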
Elevate automation through orchestration.
As sysadmins, we have been responsible for running applications for decades. We have done everything to meet demanding SLAs, including “automating all the things” and even trading sleep cycles to rescue applications from production fires. While we have earned many battle scars and can step back and admire fully automated deployment pipelines, it feels like there has always been something missing. Our infrastructure still feels like an accident waiting to happen, and somehow, no matter how much we manage to automate, the expense of infrastructure continues to increase.
The root of this feeling is that many of our tools don’t provide proper insight into what’s really going on, and require us to reverse engineer applications in order to monitor them effectively and recover from failures. Today, many people bolt on monitoring solutions that probe applications from the outside and report “health” status to a centralized monitoring system. The result is often riddled with false alarms, or a list of alarms not worth investigating because there is no clear path to resolution.
What makes this worse is how we typically handle common failure scenarios such as node failures. Today, many of us are forced to statically assign applications to machines and manage resource allocations on a spreadsheet. It’s very common to assign a single application to a VM to avoid dependency conflicts and ensure proper resource allocations. Many of the tools in our tool belt have been optimized for this pattern, and the results are less than optimal. Sure, this is better than doing it manually, but current methods result in low resource utilization, which means our EC2 bills continue to increase — because the more you automate, the more things people want to do.
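The utilization gap between static one-app-per-VM assignment and what an orchestrator can do is easy to see with a toy bin-packing sketch. This is not a real scheduler; the numbers and the first-fit-decreasing heuristic are invented for illustration.

```python
def first_fit(demands, machine_capacity):
    """Place each demand on the first machine with room, adding machines
    as needed (first-fit decreasing). Returns the per-machine load."""
    machines = []
    for d in sorted(demands, reverse=True):
        for i, load in enumerate(machines):
            if load + d <= machine_capacity:
                machines[i] += d
                break
        else:
            machines.append(d)  # no machine had room; add a new one
    return machines

# CPU cores requested by eight applications; each VM has 8 cores.
demands = [1, 2, 1, 3, 2, 1, 4, 2]
one_per_vm = len(demands)               # static assignment: 8 VMs, mostly idle
packed = first_fit(demands, machine_capacity=8)
```

With these made-up demands, packing fills two machines to capacity where static assignment would run eight, which is the utilization argument behind handing placement to an orchestrator.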
How do we reverse course on this situation? Read more…
Can VR become “the ultimate empathy machine”?
Virtual reality (VR) can make the impossible possible — the rules of physical reality need no longer apply. In VR, you strap on a special headset and leave the real world behind to enter a virtual one. You can fly like a bird high above Manhattan, or experience the feeling of weightlessness as an astronaut on a spaceship.
VR is reliant upon the illusion of being deeply engrossed in another space and time, far away from your current reality. In a split second you can travel to exotic locales or be on stage at a concert with your favourite musician. Gaming and entertainment are natural fits for VR experiences. A startup called The Void plans to open a set of immersive virtual reality theme parks called Virtual Entertainment Centers, with the first one opening in Pleasant Grove, Utah by June 2016.
This is an exciting time for developers and designers to be defining VR as a new experience medium. However, as the technology improves and consumer hardware and content become available in VR, we must ask: how can this new technology be applied to benefit humanity?
As it turns out, this question is being explored on a few early fronts. For example, SnowWorld, developed at the University of Washington Human Interface Technology (HIT) Lab in 1996 by Hunter Hoffman and David Patterson, was the first immersive VR world designed to reduce pain in adults and children. SnowWorld was specifically developed to help burn patients during wound care. Read more…