It's time to place a moratorium on negativity and start working toward book publishing's bright future.
Editor’s note: this piece originally appeared on Medium; it is cross-posted here with permission. The writer is an O’Reilly employee, but he is expressing his personal views. We love his optimism about the future and wanted to share it with the Radar audience.
“THAT COMPANY is destroying my P&L, the entire book industry, and the fabric of civilized society.”
“I really like their free, two-day shipping, though.”
There’s a lot of tsoris in the publishing community right now over ebooks. Much of it has something to do with THAT COMPANY WITH THE WEBSITE THAT SELLS ALL THE THINGS, how THAT COMPANY has a stranglehold on the book market, how it’s devaluing our literary canon, how it has publishers right where it wants them.
But we’re not just cranky about THAT COMPANY. Other jeremiads include — but are not limited to — the painfully slow adoption curve of EPUB 3, the demise of beloved sites like Readmill, the failure of “enhanced” ebooks to gain traction, sundry ereader feculence, stagnating ebook sales, and sideloading.
I’m a cynic by nature, and count wallowing among my favorite hobbies, but after half a decade as a software engineer in the digital publishing space, even I’ve had enough and am issuing a moratorium on the negativity! Instead, I want to talk about some of the promising trends I’ve seen develop over the past year that foretell a bright future for the digital book. Forthwith: Five reasons for optimism about the future of ebooks.
Government sensor networks can streamline processes, cut labor costs, and improve services.
It’s not news to anyone who works in government that we live in a time of ever-tighter budgets and ever-increasing needs. The 2013 federal shutdown only highlighted this precarious situation: government finds it increasingly difficult to summon the resources and manpower needed to meet its current responsibilities, yet faces new ones after each Congressional session.
Sensor networks are an important emerging technology that some areas of government are already implementing to bridge the widening gap between the demand to reduce costs and the demand to improve services. The Department of Defense, for instance, uses RFID chips to monitor its supply chain more accurately, while the U.S. Geological Survey employs sensors to remotely monitor the bacterial levels of rivers and lakes in real time. Additionally, the General Services Administration has begun using sensors to measure and verify the energy efficiency of “green” buildings (PDF), and the Department of Transportation relies on sensors to monitor traffic and control traffic signals and roadways. All of this is productive, but more needs to be done. Read more…
Announcing a new series delving into deep learning and the inner workings of neural networks.
Editor’s note: this post is part of our Intelligence Matters investigation.
When I first ran across the results in the Kaggle image-recognition competitions, I didn’t believe them. I’ve spent years working with machine vision, and the reported accuracy on tricky tasks like distinguishing dogs from cats was beyond anything I’d seen, or imagined I’d see anytime soon. To understand more, I reached out to one of the competitors, Daniel Nouri, and he demonstrated how he used the Decaf open-source project to do so well. Even better, he showed me how he was quickly able to apply it to a whole bunch of other image-recognition problems we had at Jetpac, and produce much better results than my conventional methods.
I’ve never encountered such a big improvement from a technique that was largely unheard of just a couple of years before, so I became obsessed with understanding more. To be able to use it commercially across hundreds of millions of photos, I built my own specialized library to efficiently run prediction on clusters of low-end machines and embedded devices, and I also spent months learning the dark arts of training neural networks. Now I’m keen to share some of what I’ve found, so if you’re curious about what on earth deep learning is, and how it might help you, I’ll be covering the basics in a series of blog posts here on Radar, and in a short upcoming ebook. Read more…
O’Reilly editors explore the ideas and influences that are poised to break through.
Foo Camp, our annual gathering in Sebastopol, Calif., brings together people we know and admire, and those we’d like to know better. It’s also a way for us to discover the ideas emerging at the edges of technology, business, art, science, and society.
The latest Foo Camp wrapped up recently, so we pooled our notes and collected the major trends we spotted across sessions and conversations. Consider the following an early look at big things to come. Read more…
Business users are becoming more comfortable with graph analytics.
The rise of sensors and connected devices will lead to applications that draw from network/graph data management and analytics. As the number of devices surpasses the number of people — Cisco estimates 50 billion connected devices by 2020 — one can imagine applications that depend on data stored in graphs with many more nodes and edges than the ones currently maintained by social media companies.
This means that researchers and companies will need to produce real-time tools and techniques that scale to much larger graphs (measured in terms of nodes & edges). I previously listed tools for tapping into graph data, and I continue to track improvements in accessibility, scalability, and performance. For example, at the just-concluded Spark Summit, it was apparent that GraphX remains a high-priority project within the Spark ecosystem.
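To make "graph analytics" concrete, here is a minimal sketch of the kind of per-node computation such tools must scale up. It uses Python's networkx library (my choice for illustration; the post itself names no specific tool) on a tiny, hypothetical device network:

```python
import networkx as nx

# Hypothetical connected-device network: nodes are devices, edges are links.
g = nx.Graph()
g.add_edges_from([
    ("gateway", "sensor_1"),
    ("gateway", "sensor_2"),
    ("sensor_1", "sensor_2"),
    ("sensor_2", "sensor_3"),
])

# Degree centrality flags the most-connected devices -- a simple example of
# the per-node analytics that must eventually run over billions of edges.
centrality = nx.degree_centrality(g)
hub = max(centrality, key=centrality.get)
print(hub)  # sensor_2
```

On a graph this size the computation is trivial; the engineering challenge the post points to is running the same analytic, in near real time, on graphs with orders of magnitude more nodes and edges than social networks maintain today.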
The Lambda Architecture has its merits, but alternatives are worth exploring.
Nathan Marz wrote a popular blog post describing an idea he called the Lambda Architecture (“How to beat the CAP theorem”). The Lambda Architecture is an approach to building stream processing applications on top of MapReduce and Storm or similar systems. It has proven to be a surprisingly popular idea, with a dedicated website and an upcoming book. Since I’ve been involved in building out the real-time data processing infrastructure at LinkedIn using Kafka and Samza, I often get asked about the Lambda Architecture, so I’d like to share my thoughts and experiences here.
What is a Lambda Architecture and how do I become one?
The Lambda Architecture looks something like this:
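Before looking at the diagram, the core idea can be sketched in a few lines of Python. This is my own minimal, hypothetical illustration, not code from Marz's post: an immutable master dataset feeds a batch layer that periodically recomputes a complete view, a speed layer incrementally maintains a real-time view of recent data, and queries merge the two.

```python
from collections import Counter

master_dataset = []        # immutable, append-only log of raw events
batch_view = Counter()     # precomputed from the entire master dataset
realtime_view = Counter()  # covers only events since the last batch run

def ingest(event):
    """New data goes to both the master dataset and the speed layer."""
    master_dataset.append(event)
    realtime_view[event] += 1

def run_batch_layer():
    """Periodically recompute the batch view from scratch
    (in practice a MapReduce job); the fresh batch view supersedes
    the real-time view, which is reset."""
    global batch_view, realtime_view
    batch_view = Counter(master_dataset)
    realtime_view = Counter()

def query(key):
    """Queries merge the batch view with the real-time view."""
    return batch_view[key] + realtime_view[key]

ingest("page_a"); ingest("page_b"); ingest("page_a")
run_batch_layer()
ingest("page_a")  # arrives after the batch run; only the speed layer sees it
print(query("page_a"))  # 3
```

The appeal is that the batch layer's from-scratch recomputation keeps the system simple and fault-tolerant, while the speed layer covers the latency gap until the next batch run; the cost, as discussed below, is maintaining the same logic in two different systems.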