- How Might We Code in VR? — caught my eye because I’m looking for ideas on how to think about interaction design in the holoculus world.
- Git Workflows for Pros — non-developers don’t understand how important this is to productivity.
- All Programming is Bookkeeping — approach programming as a bookkeeping problem: checks and balances.
- Why I Am Not a Maker (Deb Chachra) — The problem is the idea that the alternative to making is doing nothing; in practice, it’s almost always doing things for and with other people, from the barista to the Facebook community moderator to the social worker to the surgeon. Describing oneself as a maker — regardless of what one actually or mostly does — is a way of accruing to oneself the gendered, capitalist benefits of being a person who makes products.
For maximum business value, big data applications have to involve multiple Hadoop ecosystem components.
Data is deluging today’s enterprise organizations from ever-expanding sources and in ever-expanding formats. To gain insight from this valuable resource, organizations have been adopting Apache Hadoop with increasing momentum. Now, the most successful players in enterprise big data are no longer using only the Hadoop “core” (i.e., batch processing with MapReduce), but are moving toward analyzing and solving real-world problems using the broader set of tools in an enterprise data hub (often interactively) — including components such as Impala, Apache Spark, Apache Kafka, and Search. With this new focus on workload diversity comes an increased demand for developers who are well-versed in a variety of components across the Hadoop ecosystem.
Due to the size and variety of the data we’re dealing with today, a single use case or tool — no matter how robust — can obscure the full, game-changing potential of Hadoop in the enterprise. Rather, developing end-to-end applications that incorporate multiple tools from the Hadoop ecosystem, not just the Hadoop core, is the first step toward unlocking the disparate use cases and analytic capabilities of which an enterprise data hub is capable. Whereas MapReduce code primarily leverages Java skills, developers who want to work on full-scale big data engineering projects need to be able to work with multiple tools, often simultaneously. A well-rounded big data application developer can ingest and transform data using the Kite SDK, write SQL queries with Impala and Hive, and create an application GUI with Hue.
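As one concrete, hedged illustration of working across these tools: Impala exposes standard SQL, and from Python it can be queried through the impyla client (which the post itself doesn’t mention). The host, port, and table below are hypothetical placeholders:

```python
# Minimal sketch: querying an Impala table from Python via impyla.
# Host, port, and table name are hypothetical placeholders.
from impala.dbapi import connect

conn = connect(host="impala-daemon.example.com", port=21050)
cur = conn.cursor()
cur.execute(
    "SELECT page, COUNT(*) AS hits "
    "FROM web_logs GROUP BY page ORDER BY hits DESC LIMIT 10"
)
for page, hits in cur.fetchall():
    print(page, hits)
cur.close()
conn.close()
```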
Understanding the value of the blockchain above and beyond bitcoin.
Editor’s note: Lorne Lantz is a program co-chair for our O’Reilly Radar Summit: Bitcoin & the Blockchain on January 27, 2015, in San Francisco. For more on the program and for registration information, visit the Bitcoin & the Blockchain event website.
I remember the first time I heard about bitcoin. It was June 2012, and I was invited to a bitcoin meetup. The whole time I was sitting there, I thought they were a bunch of computer geeks playing around with nerd money.
At the same time, I felt excited about the possibilities. If what the bitcoin believers were saying was true, it could become something very big. When I took a closer look, I realized why it could be so groundbreaking: decentralization.
Unlike other currencies and payment networks, bitcoin is not controlled by a bank, government, or financial institution. Instead, thousands of computers around the world verify transactions and manage a global decentralized ledger. This innovative technology is called the blockchain, and it provides a unique pathway that allows — for the first time — many computers that don’t trust each other to achieve consensus. In bitcoin’s case, they are achieving consensus on updates to the global ledger.
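To make the idea of a hash-linked ledger concrete, here is a toy sketch in Python. It models only the chaining of blocks by hash; proof of work, signatures, and the peer-to-peer consensus the article describes are all omitted, and the transactions are made-up strings:

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's contents, which include the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }

# Build a three-block chain; each block commits to the one before it.
genesis = make_block(["coinbase -> alice: 50"], prev_hash="0" * 64)
chain = [genesis]
for txs in (["alice -> bob: 10"], ["bob -> carol: 5"]):
    chain.append(make_block(txs, prev_hash=block_hash(chain[-1])))

# Tampering with an early block breaks every later link in the chain.
chain[0]["transactions"][0] = "coinbase -> mallory: 50"
assert block_hash(chain[0]) != chain[1]["prev_hash"]
```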
A look at the stumbling blocks to blockchain scalability and some high-level technical solutions.
Author note: Vitalik Buterin contributed to this article.
Editor’s note: Kieren James-Lubin is a program co-chair for our O’Reilly Radar Summit: Bitcoin & the Blockchain on January 27, 2015, in San Francisco. For more on the program and for registration information, visit the Bitcoin & the Blockchain event website.
“I have no worries that bitcoin can scale, and the simple reason for that is that I know that IPv4 can’t, and yet I use it every day.”
The issue of bitcoin scalability and the phrase “blockchain scalability” are often seen in technical discussions of the bitcoin protocol. Will the requirements of recording every bitcoin transaction in the blockchain compromise its security (because fewer users will keep a copy of the whole blockchain) or its ability to handle a great number of transactions (because new blocks on which transactions can be recorded are only produced at limited intervals)? In this article, we’ll explore several meanings of “blockchain scalability” and some high-level technical solutions to the issue.
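To put numbers on the throughput concern, here is a back-of-envelope Python calculation using bitcoin’s 1 MB block limit and roughly 10-minute block interval (both discussed below); the ~250-byte average transaction size is our assumption, not a figure from the article:

```python
# Rough ceiling on bitcoin transaction throughput.
block_size_bytes = 1_000_000   # 1 MB hard limit per block (see below)
block_interval_s = 600         # a new block roughly every 10 minutes
avg_tx_bytes = 250             # assumed average transaction size

txs_per_block = block_size_bytes // avg_tx_bytes   # 4000
txs_per_second = txs_per_block / block_interval_s  # about 6.7
print(f"~{txs_per_block} transactions per block, ~{txs_per_second:.1f} tx/s")
```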
The three main stumbling blocks to blockchain scalability are:
- The tendency toward centralization as the blockchain grows: the larger the blockchain becomes, the greater the storage, bandwidth, and computational requirements for the “full nodes” in the network, raising the risk of much heavier centralization if the blockchain grows so large that only a few nodes are able to process a block (see the growth estimate sketched after this list).
- The bitcoin-specific issue that the blockchain has a built-in hard limit of 1 megabyte per block, with a new block produced roughly every 10 minutes; removing this limit requires a “hard fork” (i.e., a backward-incompatible change) to the bitcoin protocol.
- The high processing fees currently paid for bitcoin transactions, and the potential for those fees to increase as the network grows. We won’t discuss this too much, but see here for more detail.
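As a companion sketch for the first stumbling block above, here is a rough Python estimate of how quickly a full node’s storage burden could grow. It assumes the worst case of every block being completely full, which is our assumption rather than the article’s:

```python
# Worst-case blockchain growth for a full node, assuming every block is full.
mb_per_block = 1                  # 1 MB block size limit
blocks_per_day = 24 * 60 // 10    # 144 blocks at a 10-minute interval

mb_per_day = mb_per_block * blocks_per_day   # 144 MB/day
gb_per_year = mb_per_day * 365 / 1000        # about 52.6 GB/year
print(f"{mb_per_day} MB/day, ~{gb_per_year:.0f} GB/year of new blocks")
```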
We’ll consider these first two issues in detail.