The iPhone: Tricorder Version 1.0?

The iPhone, in addition to revolutionizing how people thought about mobile phone user interfaces, was also one of the first devices to offer a suite of sensors measuring everything from the visual environment to position and acceleration, all in a package that fits in a shirt pocket.

On December 3rd, O’Reilly will be offering a one-day online edition of the Where 2.0 conference focusing on the iPhone’s sensors and what you can do with them. Alasdair Allan (of the University of Exeter and Babilim Light Industries) and Jeffrey Powers (Occipital) will be among the speakers, and I recently spoke with each of them about how the iPhone has evolved as a sensing platform and the new and interesting things being done with the device.

Occipital is probably best known for Red Laser, the iPhone scanning application that lets you point the camera at a UPC code and get shopping information about the product. With recent iPhone OS releases, applications can now overlay data on top of a real-time camera display, which has led to the new augmented reality applications. But according to Powers, the ability to process the camera data is still not fully supported, which has left Red Laser in a bit of a limbo state. “What happened with the most recent update is that the APIs for changing the way the camera screen looks were opened up pretty much completely. So you can customize it to make it look any way you want. You can also programmatically engage photo capture, which is something you couldn’t do before either. You could only bring the UI up, and the user would have to use the normal built-in iPhone UI to capture. So you can do this programmatic data capturing, and you can process those images that come in. But as it turns out, shortly after 3.1, the method that a lot of people were using to get the raw data while it was streaming in became a blacklisted function for the review team. So we’ve actually had a lot of trouble as of late getting technology updates through the App Store, because the function we’re using is now on a blacklist, whereas it wasn’t on a blacklist for the last year.”
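
For developers who haven’t worked with these APIs, here is a rough sketch of the flow Powers describes: a custom overlay drawn on the live camera view plus programmatically triggered capture. It is written against the current UIKit names rather than the 3.1-era Objective-C calls, and simplified to the bare mechanics:

```swift
import UIKit

// Minimal sketch: replace the stock camera controls with a custom overlay and
// trigger capture programmatically, then receive the frame for processing.
final class ScannerViewController: UIViewController,
    UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    private let picker = UIImagePickerController()

    func presentCamera() {
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
        picker.sourceType = .camera
        picker.delegate = self

        // Hide the built-in shutter UI and draw our own view on top of the
        // live camera feed (the capability opened up in the 3.x SDKs).
        picker.showsCameraControls = false
        let overlay = UIView(frame: view.bounds)
        overlay.backgroundColor = .clear
        picker.cameraOverlayView = overlay

        present(picker, animated: true) {
            // Programmatic capture; a real app would wait until the camera is
            // ready rather than firing immediately.
            self.picker.takePicture()
        }
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        // The captured frame arrives here and can be handed off to
        // image-processing code such as a barcode decoder.
        if let image = info[.originalImage] as? UIImage {
            print("Captured \(image.size.width)x\(image.size.height) image")
        }
        picker.dismiss(animated: true)
    }
}
```

Note that this only covers the officially supported capture route; the raw-frame streaming Powers mentions was the separate, lower-level path that ended up on the review team’s blacklist.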

Powers is hopeful that the next release of the OS will bring official support for the API calls that Red Laser uses, based on the fact that the App Store screeners aren’t taking down existing apps that use the banned APIs. Issues with the iPhone camera sensors pose more of a problem for him. “In terms of science, it’s definitely a really bad sensor, especially if you look at the older iPhone sensor, because it has what’s called a rolling shutter. A rolling shutter means that as you press capture, or rather as the camera is capturing video frames or as you capture a frame, the camera then begins to take an image. And it takes a finite number of milliseconds, maybe 50 or so, before it has actually exposed the entire frame and stored it off the sensor. Because it’s doing something that’s more like a serial data transfer instead of an all-at-once parallel capture of the entire frame, what that causes is weird tearing and odd effects like that. For photography, as long as it’s not too dramatic, it’s not a huge deal. For vision processing, it’s a huge deal because it breaks a lot of assumptions that we typically make about the camera. That has gotten better in the 3GS camera, but it’s still not perfect. It is getting better, especially when the camera’s turned on in video mode.”
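
To make the effect concrete, here is a back-of-the-envelope model of the skew a rolling shutter introduces. The 50 ms readout figure comes from Powers’ estimate; the frame height and object speed are invented numbers purely for illustration:

```swift
// Toy model of rolling shutter: rows are read out one after another across the
// readout window, so a horizontally moving object lands at a different x
// position in each row. All numbers are illustrative.
let readoutDuration = 0.050   // ~50 ms to read the whole frame, per Powers
let imageHeight = 480.0       // number of rows in the frame (assumed)
let objectSpeed = 1_000.0     // object motion in pixels per second (assumed)

// Time at which a given row is sampled, relative to the first row.
func rowCaptureTime(row: Double) -> Double {
    (row / imageHeight) * readoutDuration
}

// Horizontal displacement of the object between the first and last rows; this
// offset is the skew or "tearing" seen in the captured image.
let skew = objectSpeed * rowCaptureTime(row: imageHeight)
print("Skew across the frame: \(skew) pixels")   // 50 pixels for these numbers
```

A global (parallel) shutter would expose every row at the same instant, which is why the effect matters so much for vision algorithms that assume a single capture time per frame.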

One thing that has significantly improved with the iPhone 3GS is the camera optics. “Most people know that the 3G and the first-gen phone don’t have autofocus at all,” Powers says. “Their optics is just a fixed-focus, simple plastic lens that doesn’t allow you to focus up close. For anybody trying to do macro imagery, something up close, you’re just not going to be able to do it on the 3G or the first-gen phone. When we set out to build our application, we specifically had to work around that problem. A lot of why our application was successful was because we did focus on that problem. Then in the 3GS, autofocus was enabled, which is actually a motor-based autofocus system that can focus not only on the center of the image, but also somewhere that you pick specifically. And one more thing is that the autofocus system doesn’t just change the focus, it also changes the exposure, which is something a lot of people don’t notice.”
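
In the current SDK, that tap-to-focus behavior lives in AVFoundation’s AVCaptureDevice. A minimal sketch, assuming you already have a capture session set up and a reference to the back camera, looks like this:

```swift
import AVFoundation

// Focus on a user-chosen point; re-metering exposure at the same point is part
// of the same gesture, as Powers points out.
func focus(_ device: AVCaptureDevice, at point: CGPoint) throws {
    // `point` is in the capture device's coordinate space:
    // (0,0) is the top-left of the frame, (1,1) the bottom-right.
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    if device.isFocusPointOfInterestSupported, device.isFocusModeSupported(.autoFocus) {
        device.focusPointOfInterest = point
        device.focusMode = .autoFocus
    }
    if device.isExposurePointOfInterestSupported, device.isExposureModeSupported(.autoExpose) {
        device.exposurePointOfInterest = point
        device.exposureMode = .autoExpose
    }
}
```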

Another benefit the 3GS has brought to the table for vision processing is the dramatically increased processor speed. “With the 3GS, it’s actually an incredibly powerful device,” says Powers. “So we think right now that there’s actually a lot of power there that hasn’t been exposed. There obviously are limits, but I don’t think we’ve seen software that really hits those limits. Honestly, the limits that we’re seeing right now are just in the SDK and what you can and can’t do. One of the things about the iPhone, as I was alluding to earlier when I talked about previous problems with the Android which are now being addressed, is that you can code at the lowest level on the iPhone, whereas you could not code at the lowest level on the Android. What that means on the iPhone is that you can actually write in ARM assembly if you want.

“Almost everyone who’s doing any sort of image processing today on the iPhone isn’t taking advantage of that. We are to a very small extent in Red Laser, but there’s certainly juice that can be extracted by just spending time optimizing for the platform, which is something that the iPhone lets you do. And the other thing to add to that is there are new instructions enabled by the ARMv7 instruction set used on the 3GS, which wasn’t available previously. And, again, I actually haven’t heard of anyone utilizing those instructions yet. So there’s a lot of power there that is yet to be exposed.”
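
NEON code of the kind Powers is hinting at would normally be written as C intrinsics or hand-tuned assembly, which is beyond the scope of this article. As a rough illustration of the per-pixel inner loop that benefits from vectorization, here is a sketch using Swift’s portable SIMD types to threshold a grayscale scanline 16 pixels at a time, the kind of step a barcode decoder runs constantly (the function and its parameters are invented for the example):

```swift
// Binarize a grayscale scanline: pixels below the threshold become 0 (black),
// the rest 255 (white), processed 16 lanes at a time.
func binarize(scanline: [UInt8], threshold: UInt8) -> [UInt8] {
    var output = [UInt8](repeating: 0, count: scanline.count)
    let black = SIMD16<UInt8>(repeating: 0)
    let white = SIMD16<UInt8>(repeating: 255)
    let cutoff = SIMD16<UInt8>(repeating: threshold)

    var i = 0
    while i + 16 <= scanline.count {
        let chunk = SIMD16<UInt8>(scanline[i..<i + 16])
        // Lanes below the threshold pick up black, the rest stay white.
        let result = white.replacing(with: black, where: chunk .< cutoff)
        for lane in 0..<16 { output[i + lane] = result[lane] }
        i += 16
    }
    // Scalar tail for lengths that aren't a multiple of 16.
    while i < scanline.count {
        output[i] = scanline[i] < threshold ? 0 : 255
        i += 1
    }
    return output
}
```

On an ARMv7 device, the same loop expressed with NEON intrinsics or assembly is where the kind of untapped performance Powers describes would come from.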

Although the iPhone has been an interesting platform for Powers, he is turning his attention toward the Droid at the moment. “From our perspective, we would love to keep developing our vision software on the iPhone, but because the APIs are so restrictive right now and we have no ETA on when that’ll be fixed, we’re actually looking to Android now, specifically the new Droid, as an interesting platform for computer vision and image processing in real time. Again, if it’s not a real-time task, the iPhone’s a great platform. If you can just snap an image and process it, you can do anything on the iPhone that has that characteristic. But if you want to process in real time, Android is really your best bet right now because, A, the APIs do let you access the video frames and, B, you can now actually write to the metal of the device and write things in C and C++ with the new Android OS, which, again, you couldn’t do before.”

Alasdair Allan is approaching the iPhone from a different direction, using it as a way for astronomers to control their telescopes remotely while “sitting in a pub.” While he’s seen some primitive scientific applications of the iPhone for things such as distributed earthquake monitoring, he thinks that the real benefit of the iPhone over the next few years will be as a visualization tool using AR.

That isn’t to say that he isn’t impressed with the wide variety of sensors available on the iPhone. “You have cellular for the phones. All of the devices have wifi. And most of the devices, apart from the first-gen iPod Touch, have Bluetooth. You, of course, have audio-in and a speaker. The audio-in is actually quite interesting because you can hack that around and actually use it for other purposes. You can use the audio-in as an acoustic modem to hook an external keyboard up to the iPhone; I think that’s in iPhone Hacks, the book. It’s quite interesting. Then on the main sensor side, you’ve got the accelerometer and the magnetometer, the digital compass. It’s got an ambient light sensor, a proximity sensor, a camera, and it’s also got the ability to vibrate.”

According to Allan, the iPhone sensor that the fewest people know about is the proximity sensor. “The proximity sensor is an infrared diode. I think it’s actually now a pair of infrared diodes in the iPhone 3G. It’s the reason why, when you put your iPhone to your head, the screen goes blank. It basically just uses this infrared LED near the earpiece to detect reflections from large objects, like your head. If you actually take a picture of the iPhone when it’s in call mode with a normal web cam, you’d actually be able to see, right next to the earpiece, a sort of glowing red dot, which is the proximity sensor. Because, of course, web cam CCDs are sensitive in the infrared, so it would actually show up. This was a bit of a scandal early on in the iPhone’s life. The original Google Search app used an undocumented SDK call to access it so you could actually speak into the speech search, and Apple, and everyone really, was very annoyed about this. So they actually enabled it for everyone in the 3.0 SDK.”
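
The capability Allan mentions being opened up in the 3.0 SDK is, in today’s terms, a couple of lines through UIDevice. A minimal sketch:

```swift
import UIKit

// Turn the proximity sensor on and watch for state changes.
UIDevice.current.isProximityMonitoringEnabled = true

// Keep a reference to this token for as long as you want updates.
let proximityObserver = NotificationCenter.default.addObserver(
    forName: UIDevice.proximityStateDidChangeNotification,
    object: nil,
    queue: .main
) { _ in
    // true while a large object (such as the user's head) is close to the
    // infrared emitter next to the earpiece; the screen blanks at the same time.
    print("Object nearby: \(UIDevice.current.proximityState)")
}
```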

Unfortunately, Allan doesn’t know of anyone who has been able to make practical use of the proximity sensor, partly because it has such a short range. On the other hand, the newly added magnetometer in the 3GS has opened the door to a host of AR applications. But Allan points out that, like any magnetic compass, it can be very sensitive to metal and other magnetic interference in the surrounding environment. “It is very susceptible to local changing magnetic fields; monitors, CPUs, TVs, anything like that will affect it quite badly.”

Also, he adds, to do any really accurate AR applications, you need to use the sensors in concert. “By default, what you’re measuring, of course, is the ambient magnetic field of the Earth. And that’s how you can use it as a digital compass, because there are tables that will show you the deviation from magnetic north to true north, depending on your latitude and longitude. Which is why, to do augmented reality apps, you need both the accelerometer and the magnetometer, so you can get the pitch and roll of the device, and the GPS to get the latitude and longitude so you know the deviation from true north.”
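
A sketch of that sensor fusion using the current Core Location and Core Motion APIs might look like the following (it assumes location permission has already been granted). The magnetometer supplies the magnetic heading, the GPS fix lets the system apply the declination correction to get true north, and device motion supplies pitch and roll:

```swift
import CoreLocation
import CoreMotion

// Gather the three ingredients of a basic AR view: heading, its true-north
// correction, and the device's orientation in space.
final class ARHeadingSource: NSObject, CLLocationManagerDelegate {
    private let location = CLLocationManager()
    private let motion = CMMotionManager()

    func start() {
        location.delegate = self
        location.startUpdatingLocation()   // needed so trueHeading can be computed
        location.startUpdatingHeading()

        // Pitch and roll from the accelerometer (and gyro, where available).
        motion.startDeviceMotionUpdates(to: .main) { deviceMotion, _ in
            guard let attitude = deviceMotion?.attitude else { return }
            print("pitch: \(attitude.pitch) rad, roll: \(attitude.roll) rad")
        }
    }

    func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
        // magneticHeading is the raw magnetometer reading; trueHeading has the
        // declination correction applied once a location fix is available.
        print("magnetic: \(newHeading.magneticHeading)°, true: \(newHeading.trueHeading)°")
    }
}
```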

Allan thinks that although the current sensor suite has limited uses for scientific data capture, things will improve quickly. “I think the science usage is definitely going to grow, when the sensors get slightly more sophisticated than they are today: gyros, for instance, or you can imagine slightly better accelerometers or light sensors or other things. You could even put LPG or methane gas sensors in there very easily; they’re both sensors that are very small now. You could certainly get science going very easily, environmental monitoring, all of that sort of stuff. And it would quite easily piggyback off social networking ideas as well. I do see the very high-end smartphones contributing to growth in citizen-level science, people in the street getting out to do science and helping to build large datasets that can actually be used to predict long-term trends and that sort of thing.”

Powers concurs. “It behaves more like a tricorder than a communicator, right, because certainly voice communication isn’t all we’re doing anymore. And I think if you take voice communication as a fraction of the utilization of a phone, you’re going to see that there’s definitely a trend that goes down all the time. I don’t think it’ll ever go to zero, but it’ll certainly go to a smaller fraction. At the same time, the sensors are increasing. I would like to see not necessarily barometric or environmental measurement sensors, but things like solid-state gyroscopes on phones, and maybe a pair of cameras, and maybe even different sensors that can allow us to read credit cards and do transactions on the device. I think there’s even some talk of that appearing in the next-gen iPhone so you can actually do transactions just by swiping your phone at a register. So I would agree with the assessment that they’re becoming more like tricorders.”
