Local retail revival won't hinge on online-style consumer data intrusion; it will require getting back to basics.
About a month ago, IBM published its five tech predictions for the next few years. They’re mostly the unexceptional things one expects in an article like this — except for one: the return of local retail.
This is a fascinating idea, both in the ways I agree and the ways I disagree. First, I don’t think local retail is quite as dead as many people thought. Now that Borders is no longer with us and Barnes & Noble is on the ropes, I see more activity in local bookstores. And the shopping district in the center of my town is full; granted, we’re talking reasonably prosperous suburbia, not Detroit, but not too many years ago there was no shortage of empty storefronts.
What surprised me was the reason IBM thought local retail would return: many of the same techniques that Amazon and other online retailers use can be applied locally. You walk into a store; you’re identified by your cell phone (or some other device); the store looks up your purchase history, online history, and the like; it generates purchase recommendations based on its inventory; and it sends over a salesperson — with an informed view of who you are, what you’re likely to buy, and so on — to “help” you.
As robots integrate more and more into our lives, they'll simply become part of normal, everyday reality — like dishwashers.
(Note: this post first appeared on Forbes; this lightly edited version is re-posted here with permission.)
We’ve watched the rising interest in robotics for the past few years. It may have started with the birth of the FIRST Robotics competitions, continued with iRobot’s Roomba, and picked up more recently with Google’s driverless cars. But in the last few weeks, there has been a big change. Suddenly, everybody’s talking about robots and robotics.
It might have been Jeff Bezos’ remark about using autonomous drones to deliver products by air. It’s a cool idea, though personally I think package delivery by drone is unlikely for many, many reasons; but that’s another story, and certainly no reason for Amazon not to play with delivery in its labs. Besides, Prime Air wouldn’t be Amazon’s first venture into robotics: a year and a half ago, Amazon bought Kiva Systems, which builds the robots Amazon uses in its massive warehouses.
But what really lit the fire was Google’s acquisition of Boston Dynamics, a DARPA contractor that makes some of the most impressive mobile robots anywhere. It’s hard to watch their videos without falling in love with what their robots can do. Or becoming very scared. Or both. And, of course, Boston Dynamics isn’t a one-time buy. It’s the most recent in a series of eight robotics acquisitions, and I’d bet that it’s not the last in the series.
Twitter isn't quite beyond jumping the shark, but it has taken a big step backward.
While I’ve been skeptical of Twitter’s direction ever since they decided they no longer cared about the developer ecosystem they created, I have to admit that I was impressed by the speed at which they rolled back an unfortunate change to their “blocking” feature. Yesterday afternoon, Twitter announced that when you block a user, that user would no longer be unsubscribed from your tweets. And sometime last night, they reversed that change.
I admit I was surprised by the outraged response to the change, which was immediately visible on my Twitter feed. I don’t block many people on Twitter — mostly spammers, and I don’t think spammers are interested in reading my tweets, anyway. So, my first reaction was that it wasn’t a big deal. But as I read the comments, I realized that it was a big deal: people were complaining of online harassment, trolls driving away their followers, and more.
So yes, this was a big deal. And I’m very glad that Twitter has set things right. In recent years, Twitter has seemed to me to be jumping the shark in small steps, rather than in a single big leap. If you think about it, this is how it always happens. You don’t suddenly wake up and find you’ve become the evil empire; it’s a death of a thousand cuts.
Computing should enable us to have richer lives; it shouldn’t become life.
At a recent meeting, Tim O’Reilly, referring to the work of Tristan Harris and Joe Edelman, talked about “software of regret.” It’s a wonderfully poetic phrase that deserves exploring.
For software developers, the software of regret has some very clear meanings. There’s the software that was written poorly and carries technical debt that will never be paid back. There’s the software you wrote before you knew what you were doing, but never had time to fix; the software you wrote under an inflexible and unrealistic schedule; and those three lines of code that are an awful hack, but that you couldn’t get to work any other way.
That’s not what Tim was talking about, though. The software of regret is software that you use for an hour or two, and then hate yourself for using it. Facebook? Candy Crush? Tumblr? Words with Friends? YouTube? Pick your own; they’re fun for a while, but after a couple of hours, you wonder where the evening went and wish you had done something worthwhile. It’s software that only views us as targets for marketing: as views, eyeballs, and clicks. Can we change the metrics? As Edelman says, rather than designing to maximize clicks and page views, can we design to maximize fulfillment? Could Facebook measure friendships nurtured, rather than products liked?
Computing should enable us to have richer lives; it shouldn’t become life. That’s really what the software of regret is all about: taking over your life and preventing you from engaging with a world that is ultimately a lot richer than a flat, but high-resolution, screen. It’s certainly harder to avoid writing the software of regret than it is to avoid writing spaghetti code that will make your life miserable when the bug reports start rolling in. But probably more important. Do stuff that matters.
Technology has changed, but humans haven't — what is it about mediating an experience through a frame that makes it seem better?
ImpactLab has posted a nice pair of photos contrasting 2005 and 2013 in St. Peter’s Square. 2005 looks pretty much as you’d expect: lots of people in a crowd. In 2013, though, everyone is holding up a tablet, either photographing or perhaps even watching the event through the tablet.
The ImpactLab post asks about the changes in our technology during these eight years. That’s interesting, but not what grabs me. What gets me is that this isn’t new. In the 18th century, one fad was to view nature through a portable picture frame. I wasn’t able to find this in a quick Google search, but screw the documentation. I’ve seen these things in a museum: they look like a miniature gilded picture frame, roughly the size of an iPad, with a stick coming from the corner so you can hold it before your eyes. So you’d sit in your carriage with the curtains open, look out the window through this frame, and see a moving picture. A slightly higher-tech variant of this is the Claude Glass (see, I can haz links), in which you viewed the natural scene through a slightly tinted mirror, to make it look even more like a painting. (This is arguably the origin of the term “picturesque.”)
Raise consciousness about the silliness of the Black Friday ritual — do anything but shop.
This time last year, Cathy O’Neil and I traded emails about the US’s annual orgy of consumerism. I promised her an article for her Mathbabe blog, which I still owe her, and we wondered how to raise consciousness about the silliness of the Black Friday ritual.
I remembered something a friend and I did back in grad school: we waited in line for movies. And that was it — we went to downtown Palo Alto, picked the movie theater with the longest line (this was when the homogeneous corporate 87-screen chainplexes were just getting started), and waited in the line. We had no intention of watching the movie, so if the person in back of us was at all anxious, we’d let them cut in front. And we’d explain: “We’re just waiting in line; we’re not seeing the movie anyway, so go ahead — it’s cool.” People thought this was strange, or funny, or whatever. When we got to the ticket counter, we excused ourselves and went back to the end of the line, or maybe to another theater; I don’t remember.
I’m no longer going to get up at midnight to wait in line for the local Walmart to open its doors, but I’d like to know that someone did this, and in the process raised awareness of our addiction to consumerism, and of workers who aren’t paid adequately. Sing Christmas carols. Sing Chanukah songs. Sing old Beatles tunes. Sing atheist carols, if anyone has written any. Do anything but shop.
And I’d like to hear any other ideas about pranking this most ridiculous of national rituals.
USB could make power consumption more intelligent, but security concerns need to be addressed.
I’ve been reading about enhancements to the USB 3.0 standard that would allow a USB cable to provide up to 100 watts of power, nicely summarized in The Economist. 100 watts is more than enough to charge a laptop, and certainly enough to power other devices, such as LED lighting, televisions, and audio equipment. It could represent a significant shift in the way we distribute power in homes and offices: as low-voltage DC, rather than 110- or 220-volt AC. Granted, 100 watts won’t power a stove, a refrigerator, or a toaster, but in a USB world, high-voltage power distribution could be limited to a few rooms, just like plumbing; the rest of the building could be wired with relatively inexpensive USB cables and connectors, and the wiring could easily be done by amateurs rather than professional electricians.
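A quick back-of-the-envelope calculation shows why delivering 100 watts means raising the voltage well above USB’s traditional 5 volts (the 20 V / 5 A top profile matches the USB Power Delivery spec; the 0.1-ohm cable resistance below is an illustrative assumption, not a measured figure):

```python
# Back-of-the-envelope check on 100 W USB power delivery.

def current_for(power_w, voltage_v):
    """Current (amps) needed to deliver power_w at voltage_v."""
    return power_w / voltage_v

def cable_loss(power_w, voltage_v, resistance_ohm):
    """Watts dissipated in the cable itself (I^2 * R)."""
    i = current_for(power_w, voltage_v)
    return i * i * resistance_ohm

for volts in (5, 12, 20):
    amps = current_for(100, volts)
    loss = cable_loss(100, volts, 0.1)  # assumed ~0.1 ohm round trip
    print(f"{volts:2d} V -> {amps:5.1f} A, ~{loss:5.1f} W lost in cable")
```

At 5 V, delivering 100 W would take 20 A (far more than thin USB conductors can safely carry), while at 20 V the same power needs only 5 A, and resistive losses in the cable drop by a factor of 16.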
It’s an interesting and exciting idea. As The Economist points out, the voltages required for USB are easily compatible with solar power. Because USB cables also carry data, power consumption can become more intelligent.
But I have one concern that I haven’t seen addressed in the press. Of course USB cables carry both data and power. So, when you plug your device into a USB distribution system, whether it’s a laptop or phone, you’re plugging it into a network. And there are many cases, most notoriously Stuxnet, of computers being infected with malware through their USB ports.
Feedback is an elegant and effective way to control complex, dynamic processes.
Everyone knows what feedback is. It’s when sound systems suddenly make loud, painful screeching sounds. And that answer is at least partly correct.
Control theory, the study and application of feedback, is a discipline with a long history. If you’ve studied electrical or mechanical engineering, you’ve probably confronted it. Although there’s an impressive and daunting body of mathematics behind control theory, the basic idea is simple: whenever you have a varying signal, you can use feedback to control it, producing a consistent output. Screeching amps at a concert are just a special case in which things have gone wrong.
We use control theory all the time, without even thinking about it. We couldn’t walk if it weren’t for our body’s instinctive use of feedback; upsetting that feedback system (for example, by spinning to become dizzy) makes you fall. When you’re driving a car, you ease off the accelerator when it’s going too fast. You press the accelerator when it’s going too slow. If you undercorrect, you’ll end up going too fast (or stopping); if you overcorrect, you’ll end up jerking forward, slamming on the brakes, then jerking forward again — possibly with disastrous consequences. Cruise control is nothing more than a robotic implementation of the same feedback loop.
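That cruise-control loop can be sketched in a few lines (a toy model; the gain, the drag term, and the car dynamics are illustrative assumptions, not a real controller design):

```python
# Toy cruise control: a proportional feedback loop adjusts the
# throttle in proportion to the gap between setpoint and speed.
# The "car" model here is deliberately crude.

def simulate_cruise(setpoint=65.0, speed=50.0, gain=0.5,
                    drag=0.01, steps=50):
    """Return the speed history of a proportional (P) controller."""
    history = [speed]
    for _ in range(steps):
        error = setpoint - speed      # how far off we are
        throttle = gain * error       # correction proportional to error
        speed += throttle - drag * speed  # apply correction minus drag
        history.append(speed)
    return history

speeds = simulate_cruise()
print(f"start: {speeds[0]:.1f} mph, settled: {speeds[-1]:.1f} mph")
```

Notice that the simulated speed settles just below the 65 mph setpoint: a purely proportional controller leaves a steady-state error whenever there is a constant disturbance like drag, which is why practical controllers usually add an integral term (PI or PID control).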
Our readers are the largest group of DIY biologists ever assembled.
We’ve been having a great time — more than 6,000 downloads, almost 13,000 visits to the landing page, and we don’t know how many people have shared it. Ryan Bethencourt observed that our readers are the largest group of DIY biologists that has ever been assembled. This is big — and we still don’t know how big.
Thanks for a great start! We’re looking forward to a second issue in mid-January. And if you haven’t yet read the first issue of BioCoder, it’s time for you to check it out.