- The Transformation of the Workplace Through Robotics, Artificial Intelligence, and Automation — fascinating legal questions about the rise of the automated workforce. Is an employer required to bargain if it wishes to acquire robots to do work previously performed by unionized employees working under a collective bargaining agreement? Does the collective bargaining agreement control the use of robots to perform this work? A unionized employer seeking to add robots to its business process must consider these questions. (via Robotenomics)
- The Invasive Valley of Personalization (Maria Anderson) — there is a fine line between useful personalization and creepy personalization, which reminded me of the “uncanny valley” in humanoid robotics. So I plotted the same kind of curve on two axes: Access to Data on the horizontal axis and Perceived Helpfulness on the vertical. For technology to gain vast access to data AND make it past the invasive valley, it would have to rank very high on the helpfulness scale.
- Coffee and Feature Creep — fantastic story of how a chat system became a bank. (via BoingBoing)
- The Rise and Fall of PCs — use this slide of market share over time by device whenever you need to talk about the “post-PC age”. (via dataisugly subreddit)
Legal Automata, Invasive Valley, Feature Creep, and Device Market Share
What the data is must be linked to how it can be used.
Data doesn’t invade people’s lives. Lack of control over how it’s used does.
What’s really driving so-called big data isn’t the volume of information. It turns out big data doesn’t have to be all that big. Rather, it’s about a reconsideration of the fundamental economics of analyzing data.
For decades, there’s been a fundamental tension between three attributes of databases. You can have the data fast; you can have it big; or you can have it varied. The catch is, you can’t have all three at once.
I’d first heard this described as the “three V’s of data”: Volume, Variety, and Velocity. Traditionally, getting two was easy, but getting all three was very, very expensive.
The advent of clouds, platforms like Hadoop, and the inexorable march of Moore’s Law means that now, analyzing data is trivially inexpensive. And when things become so cheap that they’re practically free, big changes happen — just look at the advent of steam power, or the copying of digital music, or the rise of home printing. Abundance replaces scarcity, and we invent new business models.
In the old, data-is-scarce model, companies had to decide what to collect first, and then collect it. A traditional enterprise data warehouse might have tracked sales of widgets by color, region, and size. This act of deciding what to store and how to store it is called designing the schema, and in many ways, it’s the moment where someone decides what the data is about. It’s the instant of context.
That needs repeating:
You decide what data is about the moment you define its schema.
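The widget-warehouse example above can be sketched in a few lines of SQLite. The table and column names are illustrative, not from any real system: defining columns up front (schema-on-write) fixes what the data is about at load time, while storing the raw record as JSON (schema-on-read) defers that decision to query time.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")

# Schema-on-write: choosing these columns is "the instant of context".
# Any field that isn't color/region/size/qty is discarded at load time.
conn.execute(
    "CREATE TABLE widget_sales (color TEXT, region TEXT, size TEXT, qty INTEGER)")
conn.execute("INSERT INTO widget_sales VALUES ('red', 'EU', 'L', 40)")

# Schema-on-read: store the whole raw record; decide what it is "about"
# separately in each query. The hypothetical "rep" field survives here.
conn.execute("CREATE TABLE raw_sales (record TEXT)")
sale = {"color": "red", "region": "EU", "size": "L", "qty": 40, "rep": "alice"}
conn.execute("INSERT INTO raw_sales VALUES (?)", (json.dumps(sale),))

# Both tables can answer "sales by region", but only the raw table can
# still answer a question nobody anticipated when the warehouse was built.
by_region = conn.execute(
    "SELECT region, SUM(qty) FROM widget_sales GROUP BY region").fetchall()
reps = {json.loads(r[0])["rep"]
        for r in conn.execute("SELECT record FROM raw_sales")}
print(by_region)  # [('EU', 40)]
print(reps)       # {'alice'}
```

The trade-off is exactly the economics the excerpt describes: the first table is cheap to query but frozen at design time; the second keeps every option open at the cost of parsing on every read.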
Why a new proposal for making the news business sustainable deserves attention.
A new paper from the Reynolds Journalism Institute deserves a look from anyone interested in publishing, social networking, or democratic discourse.
Why being a default search provider matters, personalized Google News, Bin Laden and search spikes
In the latest Search Notes: Bing is going all out to claim more market share, Google News' personalization features could create an echo chamber, and Osama Bin Laden's death creates a search frenzy.
Penguin's new project — dubbed "Penguin 2.0" — incorporates elements of customization and remixing found in Web content. Jeff Gomez, Penguin's senior director of online consumer sales and marketing, discusses the program with the New York Observer: … in 2009 the company will introduce a program that allows customers to choose from a variety of short stories, essays, and…