- 4043-byte 8086 Emulator — manages to implement most of the hardware in a 1980s-era IBM PC using a few hundred fewer bits than the total number of transistors used to implement the original 8086 CPU. An entry in the International Obfuscated C Code Contest.
- Hacking the CES Scavenger Hunt — At this point—now that you have your own iBeacon hardware—you can set the UUID, Major, and Minor numbers of your beacon to each of the CES scavenger hunt beacon identities in turn, and then bring your beacon into range of your cell phone, which should be running the CES mobile app. Once you’ve shown the app all of the beacons, you’ll have “finished” the scavenger hunt and can claim your prize. Of course, doing that isn’t legal. It’s called fraud and will probably land you in serious trouble. iBeacons have great possibilities, but with great possibilities come easy hacks when they’re misused.
- Filtering: Seven Principles — JP Rangaswami laying down some basic principles on which filters should be built. 1. Filters should be built such that they are selectable by subscriber, not publisher. I think the most basic principle is: 0. Customers should be able to run their own filters across the information you’re showing them.
- Tremor-Correcting Steadicam — brilliant use of technology. Sensors + microcontrollers + actuators = a genuinely better life. Beats figuring out better algorithms to pimp eyeballs to Brands You Love. (via BoingBoing)
ENTRIES TAGGED "sensors"
Enable indoor location services with Bluetooth Low Energy alerts
In the last couple of months, iBeacon has been making a lot of noise. iBeacons are small wireless sensors, placed inside any physical space, that transmit data to your phone using Bluetooth Low Energy (also known as Bluetooth 4.0 and Bluetooth Smart). Using Bluetooth Low Energy (BLE), iBeacon opens up new opportunities: your app can define regions around beacons and be alerted when users enter them. Apple quietly rolled out the iBeacon framework as part of iOS 7, but lots of iBeacon manufacturers (Estimote, Roximity Beacons, Adomalay, Kontact, etc.) are already emerging. It is going to play an important role in several areas.
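To make that concrete, what a beacon actually broadcasts is tiny: a BLE manufacturer-specific advertisement carrying a 16-byte proximity UUID plus 16-bit major and minor values. The sketch below builds that packet following Apple’s publicly documented iBeacon layout; the helper function and hex-dump harness are illustrative only, not part of any vendor’s real BLE API.

```cpp
#include <array>
#include <cstdint>
#include <cstdio>

// Builds the 30-byte BLE advertising payload that identifies an iBeacon.
// The byte layout follows Apple's publicly documented packet format; the
// function itself is a hypothetical helper, not part of any real BLE stack.
std::array<uint8_t, 30> make_ibeacon_adv(const std::array<uint8_t, 16>& uuid,
                                         uint16_t major, uint16_t minor,
                                         int8_t measured_power) {
    std::array<uint8_t, 30> adv = {
        0x02, 0x01, 0x06,  // Flags: LE General Discoverable, no classic BR/EDR
        0x1A, 0xFF,        // 26 bytes of Manufacturer Specific Data follow
        0x4C, 0x00,        // Company identifier: Apple, 0x004C (little-endian)
        0x02, 0x15         // iBeacon type (0x02), 21 bytes of beacon data
    };
    for (int i = 0; i < 16; ++i) adv[9 + i] = uuid[i];  // proximity UUID
    adv[25] = major >> 8;  adv[26] = major & 0xFF;      // major (big-endian)
    adv[27] = minor >> 8;  adv[28] = minor & 0xFF;      // minor (big-endian)
    adv[29] = static_cast<uint8_t>(measured_power);     // calibrated RSSI at 1 m
    return adv;
}

int main() {
    std::array<uint8_t, 16> uuid = {};  // fill in your region's 16-byte UUID
    auto adv = make_ibeacon_adv(uuid, 1, 42, -59);
    for (uint8_t b : adv) std::printf("%02X ", b);  // hex dump of the packet
    std::printf("\n");
}
```

A phone in range sees nothing but these identifiers, which is why, as the CES scavenger hunt story above shows, a beacon’s identity is trivially spoofable: whoever broadcasts these 30 bytes “is” that beacon.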
The Jawbone UP shows the promise of wearable sensors of all kinds.
In a recent conversation, I described my phone as “everything that Compaq marketing promised the iPAQ was going to be.” It was the first device I really carried around and used as an extension of my normal computing activities. Of course, everything I did on the iPAQ can be done much more easily on a smartphone these days, so my iPAQ sits in a closet, hoping that one day I might notice and run Linux on it.
In the decade and a half since the iPAQ hit the market, battery capacity has improved and power consumption has gone down for many types of computing devices. In the Wi-Fi arena, we’ve turned phones into sensors to track motion throughout public spaces, and, in essence, “outsourced” the sensor to individual customers.
Phones, however, are relatively large devices, and the I/O capabilities of the phone aren’t needed in most sensor operations. A smartphone today can measure motion and acceleration, and even position through GPS. However, in many cases, a display isn’t needed on the sensor itself, and the data to be collected might call for another type of sensor entirely. Many inexpensive sensors are available today to measure temperature, humidity, or even air quality. By moving the I/O from the sensor itself onto a centralized device, the battery power can be devoted almost entirely to collecting data.
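A rough sketch shows where the savings come from. Everything below (`read_sensor`, `radio_send`, `deep_sleep_ms`) is a hypothetical stand-in for whatever a particular microcontroller and radio stack provide, stubbed out here so the sketch runs on a PC; the point is the shape of the loop, not the API.

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <thread>

// Hypothetical hardware hooks, stubbed so this compiles and runs.
// On real hardware these would be vendor sensor, radio, and sleep calls.
float read_sensor() { return 21.5f; }                    // stub: degrees C
void  radio_send(float r) { std::printf("%.1f\n", r); }  // stub: print, don't transmit
void  deep_sleep_ms(uint32_t ms) {                       // stub: a real MCU would drop
    std::this_thread::sleep_for(std::chrono::milliseconds(ms));  // to microamps here
}

// With no display or user I/O to drive, the duty cycle collapses to
// "wake briefly, sample, transmit, sleep" -- which is where the battery
// advantage over a phone-as-sensor comes from.
int main() {
    for (;;) {
        radio_send(read_sensor());  // awake for milliseconds...
        deep_sleep_ms(60 * 1000);   // ...then asleep for a minute
    }
}
```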
Software and hardware are moving together, and the combined result is a new medium.
The result is an entirely new medium that’s just beginning to emerge. We can see it in Ars Electronica Futurelab’s Spaxels, which use drones to render a three-dimensional pixel field; in Baxter, which layers emotive software onto an industrial robot so that anyone can operate it safely and efficiently; in OpenXC, which gives even hobbyist-level programmers access to the software in their cars; in SmartThings, which ties Web services to light switches.
The new medium is something broader than terms like “Internet of Things,” “Industrial Internet,” or “connected devices” suggest. It’s an entirely new discipline that’s being built by software developers, roboticists, manufacturers, hardware engineers, artists, and designers.
A tool for outreach to patients produces unexpected benefits
The traditional, office-based model for health care is episodic. The provider-patient relationship exists almost completely within the walls of the exam room, with little or no follow-up between visits. Data is primarily episodic as well, based on blood pressure readings done at a specific time or surveys administered there and then, with little collected out of the office. And even the existing data collection tools—paper diaries or clunky meters—are focused more on storing data than on connecting the patient and provider through that data in real time.
There is no way to get in touch when, for instance, a patient’s blood sugar starts varying wildly or pain levels change. The provider often depends on the patient reaching out to them. And even when a provider does put an outreach protocol in place, it is usually very crude, based on a general approach to managing a population rather than an understanding of the individual patient. The end result is a system that, while doing its best within a difficult setting, is by default reactive instead of proactive.
Don't be afraid of the bus
After a short period of time, beginners working with Arduino development boards often find themselves wanting to work with a greater range of input or sensor devices—such as real-time clocks, temperature sensors, or digital potentiometers.
However, each of these often requires connection via one of two digital data buses, known as SPI and I2C. After searching around the Internet, inexperienced users may become confused about these bus types and how to send and receive data over them, and then give up.
This is a shame, as such interfaces are quite simple to use and can be easily understood with the right explanation. For example, let’s consider the I2C bus. It’s a simple serial data link that allows a master device (such as the Arduino) to communicate with one or more slave devices (such as port expanders, temperature sensors, EEPROMs, real-time clocks, and more).
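As a worked example, here is a minimal Arduino sketch that reads a TMP102-style I2C temperature sensor using the standard Wire library. The 0x48 address and the register/scaling details are assumptions based on the TMP102 datasheet; check your own part’s documentation.

```cpp
#include <Wire.h>

const uint8_t SENSOR_ADDR = 0x48;  // common TMP102 address; yours may differ

void setup() {
  Wire.begin();        // join the I2C bus as the master device
  Serial.begin(9600);
}

void loop() {
  Wire.beginTransmission(SENSOR_ADDR);
  Wire.write(0x00);                 // point at the temperature register
  Wire.endTransmission();

  Wire.requestFrom(SENSOR_ADDR, (uint8_t)2);  // ask the slave for two bytes
  if (Wire.available() >= 2) {
    int msb = Wire.read();          // high byte arrives first
    int lsb = Wire.read();
    // Assemble the 12-bit reading (ignoring negative temperatures for brevity)
    int16_t raw = (msb << 4) | (lsb >> 4);
    Serial.println(raw * 0.0625);   // TMP102 resolution: 0.0625 degrees C per bit
  }
  delay(1000);                      // one reading per second
}
```

The same beginTransmission/write/requestFrom pattern works for any I2C slave; only the device address and register map change.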
I’m increasingly realizing that many of my gripes about applications these days are triggered by their failure to understand my context in the way that they can and should. For example:
- Unruly apps on my Android phone, like Gmail or Twitter, update messages in the background as soon as I wake up my phone, slowing the phone to a crawl and making me wait to get to the app I really want. This is particularly irritating when I’m trying to place a phone call or write a text, but it can get in the way in the most surprising places. (It’s not clear to me whether this is the application writers’ fault, or due to a fundamental flaw in the Android OS.)
- Accuweather for Android, otherwise a great app, lets you set up multiple locations, which seems like it would be very handy for frequent travelers, but it inexplicably defaults to whichever location you set up first, regardless of where you are. Not only does it ignore the location sensor in the phone, it doesn’t even bother to remember the last location I chose.
- The WMATA app (Washington Metro transit app) I just downloaded lets you specify and save up to twelve bus stops for which it will report next bus arrival times. Why on earth doesn’t it just detect what bus stop you are actually standing at?
- And it’s not just mobile apps. Tweetdeck for Mac lets you schedule tweets for later in the day or on future dates, yet it defaults to the date that you last used the feature rather than today’s date! How frustrating is it to set the time of the tweet for the afternoon, only to be told “Cannot schedule a tweet for the past”, because you didn’t manually update the date to today!
In each of these cases, programmers seemingly have failed to understand that devices have senses, and that consulting those senses is the first step in making the application more intelligent. It’s as if a human, on awaking, blundered down to breakfast without opening his or her eyes!
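Opening the eyes is usually only a few lines of defensive lookup: consult the sensor first, fall back to remembered state, and only then to a hard-coded default. In the sketch below, every function is a hypothetical stand-in for a platform location or preferences API, stubbed so it runs; the fallback order is the point.

```cpp
#include <iostream>
#include <optional>
#include <string>

// Hypothetical platform hooks, stubbed so this compiles and runs.
std::optional<std::string> current_location_fix() { return std::nullopt; }  // no GPS fix yet
std::optional<std::string> last_chosen_location() { return "Washington, DC"; }
std::string first_saved_location() { return "Sebastopol, CA"; }

// What the weather app above should do: ask the device's senses first,
// fall back to the user's last choice, and only then to a stored default.
std::string location_to_show() {
    if (auto fix = current_location_fix()) return *fix;
    if (auto last = last_chosen_location()) return *last;
    return first_saved_location();
}

int main() { std::cout << location_to_show() << "\n"; }
```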
By contrast, consider how some modern apps have used context awareness to create brilliant, transformative user experiences:
- Uber, recognizing both where you are and where the nearest driver is, gives you an estimated time of pickup, connects the two of you, and lets you track progress towards that pickup.
- Square Register notices anyone running Square Wallet entering the store, and pops up their name, face, and stored payment credentials on the register, creating a delightful in-store checkout experience.
- Apps like FourSquare and Yelp are like an augmentation that adds GPS as a human sixth sense, letting you find restaurants and other attractions nearby. Google Maps does all that and more. (Even Google Maps sometimes loses the plot, though. For example, yesterday afternoon, I was on my way to Mount Vernon. Despite the fact that I was in Virginia, a search unadorned with the state name gave me as a first result Mount Vernon WA, rather than Mount Vernon VA. I’ve never understood how an app that can, and does, suggest the correct street name nearby before I’ve finished typing the building number can sometimes go so wrong.)
- Google Now, while still a work in progress, does an ambitious job of intuiting things you might want to know about your environment. It understands your schedule, your location, the weather, the time, and things you have asked Google to remember on your behalf. It sometimes suggests things that you don’t care about, but I’d far rather have that than an idiot application that requires me to use keystrokes or screen taps to tell the app things that my phone already knows.
Just as the switch from the command line to the GUI required new UI skills and sensibilities, mobile and sensor-based programming creates new opportunities to innovate, to surprise and delight the user, or, by failing to use the new capabilities, to create frustration and anger. The bar has been raised. Developers who fail to embrace context-aware programming will eventually be left behind.