- Governance for the New Class of Worker (Matt Webb) — there is a new class of worker. They’re not inside the company – not benefiting from job security or healthcare – but their livelihoods are in large part dependent on it, with the transaction cost of moving to a competitor deliberately kept high. Or the worker, without seeing any of the upside of success, takes on the risk or bears the cost of the company’s expansion and operation.
- Hidden Code in Your Chipset (Slideshare) — there’s a processor that supervises your processor, and it’s astonishingly fully-featured (to the point of having privileged access to the network and being able to run Java code).
- On Nerd Entitlement — Privilege doesn’t mean you don’t suffer. The best part of 2014 was the tech/net feminist consciousness-raising/uprising. That’s probably the wrong label for it, but bullshit that was ignored for years is now being called out. I think we’ve collectively found the next thing to fix: the thing future generations will look back on and wonder why it went unremarked upon for so long.
- Understanding Paxos — a simple introduction, with animations, to one of the key algorithms in distributed systems.
The blockchain is like layers in a geological formation — the deeper you go, the more stability you gain.
Editor’s note: this is an excerpt from Chapter 7 of our recently released book Mastering Bitcoin, by Andreas Antonopoulos. You can read the full chapter here. Antonopoulos will be speaking at our upcoming event Bitcoin & the Blockchain, January 27, 2015, in San Francisco. Find out more about the event and reserve your spot here.

The blockchain data structure is an ordered, back-linked list of blocks of transactions. The blockchain can be stored as a flat file, or in a simple database. The bitcoin core client stores the blockchain metadata using Google’s LevelDB database. Blocks are linked “back,” each referring to the previous block in the chain. The blockchain is often visualized as a vertical stack, with blocks layered on top of each other and the first block serving as the foundation of the stack. The visualization of blocks stacked on top of each other results in the use of terms like “height” to refer to the distance from the first block, and “top” or “tip” to refer to the most recently added block.
Each block within the blockchain is identified by a hash, generated using the SHA256 cryptographic hash algorithm on the header of the block. Each block also references a previous block, known as the parent block, through the “previous block hash” field in the block header. In other words, each block contains the hash of its parent inside its own header. The sequence of hashes linking each block to its parent creates a chain going back all the way to the first block ever created, known as the genesis block. Read more…
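The back-linking described above can be sketched in a few lines of Python. This is a simplified illustration, not Bitcoin’s actual serialization: real block hashes are a double SHA-256 over the 80-byte binary block header, while here we hash a readable string header for clarity.

```python
import hashlib

def block_hash(header: str) -> str:
    # Bitcoin applies SHA256 twice to the block header; we mirror
    # that double hash over a simplified string header.
    first = hashlib.sha256(header.encode()).digest()
    return hashlib.sha256(first).hexdigest()

GENESIS_PARENT = "0" * 64  # the genesis block has no real parent

def make_chain(tx_batches):
    """Build a toy blockchain: each header embeds its parent's hash."""
    chain = []
    parent = GENESIS_PARENT
    for height, txs in enumerate(tx_batches):
        header = f"prev={parent}|height={height}|txs={txs}"
        chain.append({"height": height, "header": header,
                      "hash": block_hash(header), "prev": parent})
        parent = chain[-1]["hash"]
    return chain

chain = make_chain([["coinbase"], ["a->b:1"], ["b->c:2"]])
# Each block's "previous block hash" field is the hash of its parent,
# so altering any block changes its hash and breaks every link above it.
assert chain[-1]["prev"] == chain[-2]["hash"]
```

Because the parent’s hash is an input to the child’s hash, the “height” and “tip” vocabulary falls out naturally: the tip is simply the last element whose hash nothing yet references.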
In this O'Reilly Data Show Podcast: Ion Stoica talks about the rise of Apache Spark and Apache Mesos.
Three projects from UC Berkeley’s AMPLab have been keenly adopted by industry: Apache Mesos, Apache Spark, and Tachyon. As an early user, it’s been fun to watch Spark go from an academic lab to the most active open source project in big data. In my recent travels, I’ve met Spark users from companies of all sizes and from many industries. I’ve also spoken with companies that came of age before Spark was available or mature enough, and many are replacing homegrown tools with Spark. (Full disclosure: I’m an advisor to Databricks, a start-up commercializing Apache Spark.)
A few months ago, I spoke with UC Berkeley Professor and Databricks CEO Ion Stoica about the early days of Spark and the Berkeley Data Analytics Stack. Ion noted that by the time his students began work on Spark and Mesos, his experience at his other start-up Conviva had already informed some of the design choices:
“Actually, this story started back in 2009, and it started with a different project, Mesos. So, this was a class project in a class I taught in the spring of 2009. And that was to build a cluster management system, to be able to support multiple cluster computing frameworks like Hadoop, at that time, MPI and others. To share the same cluster as the data in the cluster. Pretty soon after that, we thought about what to build on top of Mesos, and that was Spark. Initially, we wanted to demonstrate that it was actually easier to build a new framework from scratch on top of Mesos, and of course we wanted it to be also special. So, we targeted workloads for which Hadoop at that time was not good enough. Hadoop was targeting batch computation. So, we targeted interactive queries and iterative computation, like machine learning. Read more…
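Stoica’s point about iterative computation is the motivation behind Spark’s in-memory caching of working sets. A rough pure-Python sketch of the pattern (not Spark’s API): a dataset is loaded once into memory and then scanned repeatedly by an iterative algorithm, where a classic batch MapReduce job would effectively re-read it from disk on every pass.

```python
# Toy iterative workload: repeatedly refining an estimate against a
# dataset that is loaded once and kept in memory. Spark's insight was
# to cache such working sets across iterations instead of re-reading
# them from storage on every pass, as batch-oriented Hadoop jobs did.

def load_dataset():
    # Stand-in for an expensive disk or network read.
    return [float(x) for x in range(1, 101)]

def iterate_mean(data, iterations=10):
    """Converge toward the dataset mean by repeated scans of `data`."""
    estimate = 0.0
    for _ in range(iterations):
        # Every iteration scans the same cached dataset.
        estimate += 0.5 * (sum(data) / len(data) - estimate)
    return estimate

cached = load_dataset()        # read once, keep in memory
result = iterate_mean(cached)  # approaches the true mean, 50.5
```

Machine learning algorithms, with their many passes over the same training data, fit this shape exactly, which is why Stoica’s group targeted them first.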
Matt Nish-Lapidus on the evolution of product development from pre-industrial through post-industrial eras.
Design is entering its golden age. Now, like never before, the value of the discipline is recognized. This recognition is both a welcome change and a challenge for designers as they move to designing for networked systems. Jon Follett, editor of Designing for Emerging Technologies, recently sat down with Matt Nish-Lapidus, partner and design director at Normative Design, who contributed to the book. Nish-Lapidus discusses the changing role of design and designers in emerging technology.
As Nish-Lapidus describes, we’re witnessing the evolution of product development from one crafts-person, one customer; to one crafts-person, many customers; to one crafts-person, one product that many people will customize. He explains how the crafted object and the nature of design have changed, beginning with the pre-industrial era:
“If you look at a pair of glasses from the pre-industrial era — anything from Medieval up through the 1700s to 1800s — what you’re seeing is an object that’s the direct expression of a single crafts-person and was made for a single individual to use. It’s a representation of that crafts-person’s view of what glasses should be. They create one, and they sell that one pair. It was often, at the time anyway, also made on commission, so it was rare that they would make large quantities of the same thing and have them sitting around. Pre-industrial, in this way, is an expression of the individual crafts-person involved.”
As we increasingly depend on connected devices, primary concerns will narrow to safety, reliability, and survivability.
Editor’s note: this interview with GE’s Bill Ruh is an excerpt from our recent report, When Hardware Meets Software, by Mike Barlow. The report looks into the new hardware movement, telling its story through the people who are building it. For more stories on the evolving relationship between software and hardware, download the free report.

More than one observer has noted that while it’s relatively easy for consumers to communicate directly with their smart devices, it’s still quite difficult for smart devices to communicate directly, or even indirectly, with each other. Bill Ruh, a vice president and corporate officer at GE, drives the company’s efforts to construct an industrial Internet that will enable devices large and small to chat freely amongst themselves, automatically and autonomously. From his perspective, the industrial Internet is a benign platform for helping the world become a quieter, calmer, and less dangerous place.
“In the past, hardware existed without software. You think about the founding of GE and the invention of the light bulb — you turned it on and you turned it off. Zero lines of code. Today, we have street lighting systems with mesh networks and 20 million lines of code,” says Ruh. “Machines used to be completely mechanical. Today, they are part digital. Software is part of the hardware. That opens up huge possibilities.”
A hundred years ago, street lighting was an on-or-off affair. In the future, when a crime is committed at night, a police officer might be able to raise the intensity of the nearby street lights by tapping a smart phone app. This would create near-daylight conditions around a crime scene, and hopefully make it harder for the perpetrators to escape unseen. “Our machines are becoming much more intelligent. With software embedded in them, they’re becoming brilliant,” says Ruh. Read more…
The evolving marketplace is making new data applications and interactions possible.
Here’s a look at some options in the evolving, maturing marketplace of big data components that are making the new applications and interactions we’ve been looking at possible.
First used in social network analysis, graph theory is finding more and more homes in research and business. Machine learning systems can scale up fast with tools like Parameter Server, and the TitanDB project means developers have a robust set of tools to use.
Are graphs poised to take their place alongside relational database management systems (RDBMS), object storage, and other fundamental data building blocks? What are the new applications for such tools?
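As a concrete illustration of why graphs pair naturally with social-network analysis, here is a minimal adjacency-list graph and a breadth-first search in Python. This is a toy sketch, not TitanDB’s API; a production graph database adds distributed storage, indexing, and a query language on top of these primitives.

```python
from collections import deque

# A tiny social graph as an adjacency list: the core structure that
# graph databases such as TitanDB store and index at scale.
graph = {
    "alice": ["bob", "carol"],
    "bob":   ["alice", "dave"],
    "carol": ["alice"],
    "dave":  ["bob"],
}

def shortest_hops(graph, start, goal):
    """Breadth-first search: minimum number of hops between two people."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return seen[node]
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen[neighbor] = seen[node] + 1
                queue.append(neighbor)
    return None  # unreachable

# The classic "degrees of separation" query of social network analysis:
assert shortest_hops(graph, "carol", "dave") == 3
```

Queries like this are awkward to express over rows in an RDBMS (each hop is another self-join) but are the native operation of a graph store, which is the case for graphs taking a place alongside the other building blocks.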
Inside the black box of algorithms: whither regulation?

It’s possible for a machine to create an algorithm no human can understand. Evolutionary approaches to algorithmic optimization can result in inscrutable, yet demonstrably better, computational solutions.
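A minimal sketch of evolutionary optimization (a generic illustration, not any firm’s system) shows why such solutions resist inspection: the search mutates candidate solutions at random and keeps whatever scores at least as well, so the final answer comes with a score but no human-readable rationale.

```python
import random

# A minimal (1+1) evolutionary search: mutate a candidate at random,
# keep the mutation if it scores no worse. The evolved solution carries
# no explanation of *why* it works, only that it scored well -- the
# regulatory difficulty described above.

def fitness(weights, data):
    """Count how many labeled points (x, label) a linear rule gets right."""
    correct = 0
    for x, label in data:
        prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0
        correct += (prediction == label)
    return correct

def evolve(data, dims=2, generations=500, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(dims)]
    best_score = fitness(best, data)
    for _ in range(generations):
        child = [w + rng.gauss(0, 0.3) for w in best]
        score = fitness(child, data)
        if score >= best_score:
            best, best_score = child, score
    return best, best_score

# Points labeled 1 exactly when x0 + x1 > 0; evolution finds a matching rule.
data = [((1, 1), 1), ((2, 0.5), 1), ((-1, -1), 0), ((-2, 0.5), 0)]
weights, score = evolve(data)
```

The evolved `weights` classify every point correctly, yet nothing in the process records a justification a regulator could audit; only the mutation history exists.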
If you’re a regulated bank, you need to share your algorithms with regulators. But if you’re a private trader, you’re under no such constraints. And having to explain your algorithms limits how you can generate them.
As more and more of our lives are governed by code that decides what’s best for us, replacing laws, actuarial tables, personal trainers, and personal shoppers, oversight means opening up the black box of algorithms so they can be regulated.
Years ago, Orbitz was shown to be charging web visitors who owned Apple devices more money than those visiting via other platforms, such as the PC. Only that’s not the whole story: Orbitz’s machine learning algorithms, which optimized revenue per customer, learned that the visitor’s browser was a predictor of their willingness to pay more. Read more…
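Orbitz’s actual model and data are not public, but the mechanism is easy to sketch with synthetic logs and hypothetical field names: a revenue-per-customer optimizer that segments on any available feature will discover that platform correlates with willingness to pay.

```python
# Synthetic booking logs (hypothetical fields, illustrative values only).
bookings = [
    {"platform": "mac", "paid": 240}, {"platform": "mac", "paid": 260},
    {"platform": "pc",  "paid": 180}, {"platform": "pc",  "paid": 200},
]

def price_signal(logs, feature):
    """Average spend per value of a feature: the crude signal a
    revenue-per-customer optimizer would latch onto."""
    totals, counts = {}, {}
    for row in logs:
        key = row[feature]
        totals[key] = totals.get(key, 0) + row["paid"]
        counts[key] = counts.get(key, 0) + 1
    return {k: totals[k] / counts[k] for k in totals}

signal = price_signal(bookings, "platform")
# -> {'mac': 250.0, 'pc': 190.0}: the optimizer "learns" to show Mac
# users pricier options, with no human ever encoding that rule.
```

No engineer writes “charge Apple users more”; the rule emerges from the data, which is exactly why oversight requires opening the black box rather than reading the source.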
Why DNA is on the horizon of the design world.
I’ve spent the last couple of years arguing that the barriers between software and the physical world are falling. The barriers between software and the living world are next.
At our Solid Conference last May, Carl Bass, Autodesk’s CEO, described the coming of generative design. Massive computing power, along with frictionless translation between digital and physical through devices like 3D scanners and CNC machines, will radically change the way we design the world around us. Instead of prototyping five versions of a chair through trial and error, you can use a computer to prototype and test a billion versions in a few hours, then fabricate it immediately. That scenario isn’t far off, Bass suggested, and it arises from a fluid relationship between real and virtual.
Biology is headed down the same path: with tools on both the input and output sides getting easier to use, materials getting easier to make, and plenty of computation in the middle, it’ll become the next way to translate between physical and digital. (Excitement has built to the degree that Solid co-chair Joi Ito suggested we change the name of our conference to “Solid and Squishy.”)
I spoke with Andrew Hessel, a distinguished research scientist in Autodesk’s Bio/Nano/Programmable Matter Group, about the promise of synthetic biology (and why Autodesk is interested in it). Hessel says the next generation of synthetic biology will be brought about by a blend of physical and virtual systems that make experimental iteration faster and processes more reliable. Read more…