- Google Knowledge Vault and Topic Modeling — recap of talks by Google and Facebook staff about how they use their knowledge graphs. I found this super-interesting.
- djinni — A tool for generating cross-language type declarations and interface bindings.
- monit — A small open-source utility for managing and monitoring Unix systems. Monit conducts automatic maintenance and repair and can execute meaningful causal actions in error situations.
- perf-tooling — List of performance analysis, monitoring and optimization tools.
Google requires quid for its quo, but it offers something many don’t: user data access. Not one shred of data, and certainly not one iota of a user's private life, is known to the company without the user's explicit, active consent. Read more…
Network neutrality is about treating all kinds of traffic equally — throttling competition equates to extortion.
I’d like to make a few very brief points about net neutrality. For most readers of Radar, there’s probably nothing new here, but these points address confusions that I’ve seen.
- Network neutrality isn’t about the bandwidth that Internet service providers deliver to your home. ISPs can charge more for more bandwidth, same as always.
- Nor is network neutrality about the bandwidth that Internet service providers deliver to information providers. Again, ISPs can charge more for more bandwidth, same as always. You’d better believe that Google pays a lot more for Internet service than your local online store.
- Nor is network neutrality about ISPs dealing with congestion. Network providers have always dealt with congestion — in the worst case, by dropping traffic. Remember the “fast busy” signal on the phone? That’s the network dealing with congestion.
- Network neutrality is entirely about treating all kinds of traffic equally. Video is the same as voice, the same as Facebook, the same as Amazon. Your ISP cannot penalize video traffic (or some other kind of traffic) because they’d like to get into that business or because they’re already in that business. In other words: when you buy Internet connectivity, you can use it for whatever you want. Your provider can’t tell you what kind of business to be in.
How a small and passionate team used modern techniques to shift a business on a short timeline.
Over the past year, I assisted in creating an application that helped shift a major part of IBM to a software-as-a-service (SaaS) model. I did this with the help of a small but excellent development team that was inspired by the culture and practices of web startups. To be clear, it wasn’t easy – changing how we worked led to frequent friction and conflict – but in the end it worked, and we made a difference.
In mid-2013, the IBM Service Management business and engineering leaders decided to make a big bet on moving our software to the cloud. Traditionally, we have sold “on-premises” software products — software that a customer buys, downloads, and installs on their own equipment, in their own data centers and facilities. Although we love the on-premises business, we realized that cloud delivery of software is also a great option, and as our customers evolved toward a hybrid on-premises/cloud future, we needed to be there to help them.
How neuroscience is benefiting from distributed computing — and how computing might learn from neuroscience.
When we think about big data, we usually think about the web: the billions of users of social media, the sensors on millions of mobile phones, the thousands of contributions to Wikipedia, and so forth. Due to recent innovations, web-scale data can now also come from a camera pointed at a small, but extremely complex object: the brain. New progress in distributed computing is changing how neuroscientists work with the resulting data — and may, in the process, change how we think about computation. Read more…
Oliver Medvedik on the grassroots future of biohacking and the problems with government overreach.
Whither thou goest, synthetic biology? First, let’s put aside the dystopian scenarios of nasty modified viruses escaping from the fermentor Junior has jury-rigged in his bedroom lab. Designing virulent microbes is well beyond the expertise and budgets of homegrown biocoders.
“Moreover, it’s extremely difficult to ‘improve’ on the lethality of nature,” says Oliver Medvedik, a visiting assistant professor at The Cooper Union for the Advancement of Science and Art and the assistant director of the Maurice Kanbar Center for Biomedical Engineering. “The pathogens that already exist are more legitimate cause for worry.” Read more…
Tim O'Reilly and Carl Bass discuss the future of making things, and Astro Teller on Google X's approach to solving big problems.
I recently lamented the lag in innovation in relation to the speed of technological advancements — do we really need a connected toaster that will sell itself if neglected? Subsequently, I had a conversation with Josh Clark that made me rethink that position; Clark pointed out that play is an important aspect of innovation, and that such whimsical creations as drum pants could ultimately lead to more profound innovations.
In the first segment of this podcast episode, Tim O’Reilly and Autodesk CEO Carl Bass have a wide-ranging discussion about the future of making things. Bass notes that innovation tends to start by “looking at the rear window”:
“The first naïve response is to take a new technology and do the old thing with it. It takes a while until you can start reimagining things…the first thing that you need is this new tool set in software, hardware, and materials, but the more important thing — and the more difficult thing, obviously — is a new mind-set. How are you going to think about this problem differently? How are you going to reimagine what you can do? That’s the exciting part.”