"networking" entries

Four short links: 17 December 2015

Structured Image Concepts, Google's SDN, Lightbulb DeDRMing, and EFF SF

  1. Visual Genome — a data set, a knowledge base, an ongoing effort to connect structured image concepts to language.
  2. Google’s Software Defined Networking — [What was the biggest risk you faced rolling out the network? …] we were breaking the fate-sharing principle—which is to say we were putting ourselves in a situation where either the controller could fail without the switch failing, or the switch could fail without the controller failing. That generally leads to big problems in distributed computing, as many people learned the hard way once remote procedure calls became a dominant paradigm.
  3. Philips Backtrack on Lightbulb DRM — In view of the sentiment expressed by our customers, we have decided to reverse the software upgrade so that lights from other brands continue to work as they did before with the Philips Hue system.
  4. Pwning Tomorrow — EFF Publishes SF Anthology. You can expect liberties and freedoms to feature.
Four short links: 13 November 2015

CEO Optimism, Fibbing Networking, GPU TensorFlow, and GUI Font Design

  1. CEO Optimism — CEOs always act on leading indicators of good news, but only act on lagging indicators of bad news. (Andy Grove)
  2. Fibbing — lie to your router table to get the most from your network. Clever!
  3. TensorFlow for GPUs — Amazon image of TensorFlow ready to run on their GPU compute cloud.
  4. metaflop — UI for metafont that makes it super-easy to design your own sweet-looking font. (via BoingBoing)
Four short links: 3 November 2015

Quantum Architectures, Gig Economy Size, Networking Truths, and Twitter's Decay

  1. Australians Invent Architecture for Full-Scale Quantum Computer (IEEE) — still a research paper, so I’ll believe it when it can glitch Hangouts just like a real computer.
  2. How Big is the Gig Economy? (Medium) — this is one example in which the Labor Department and Bureau of Labor Statistics really have shirked their responsibility to try and assess the size and growth of this dynamic shift to our economy.
  3. The Twelve Networking Truths — RFC1925 is channeling the epigram-leaking protagonist of Robert Heinlein’s Time Enough for Love. It is easier to move a problem around (for example, by moving the problem to a different part of the overall network architecture) than it is to solve it. This is true for most areas of life: generally easier to make it someone else’s problem than to solve it.
  4. The Decay of Twitter (The Atlantic) — In other words, on Twitter, people say things that they think of as ephemeral and chatty. Their utterances are then treated as unequivocal political statements by people outside the conversation. Because there’s a kind of sensationalistic value in interpreting someone’s chattiness in partisan terms, tweets “are taken up as magnum opi to be leapt upon and eviscerated, not only by ideological opponents or threatened employers but by in-network peers.”

Swarm v. Fleet v. Kubernetes v. Mesos

Comparing different orchestration tools.

Buy Using Docker Early Release.

Most software systems evolve over time. New features are added and old ones pruned. Fluctuating user demand means an efficient system must be able to quickly scale resources up and down. Demands for near-zero downtime require automatic failover to pre-provisioned backup systems, normally in a separate data centre or region.

On top of this, organizations often have multiple such systems to run, or need to run occasional tasks such as data-mining that are separate from the main system, but require significant resources or talk to the existing system.

When using multiple resources, it is important to make sure they are efficiently used — not sitting idle — but can still cope with spikes in demand. Balancing cost-effectiveness against the ability to scale quickly is a difficult task that can be approached in a variety of ways.

All of this means that the running of a non-trivial system is full of administrative tasks and challenges, the complexity of which should not be underestimated. It quickly becomes impossible to look after machines on an individual level; rather than patching and updating machines one-by-one they must be treated identically. When a machine develops a problem it should be destroyed and replaced, rather than nursed back to health.

Various software tools and solutions exist to help with these challenges. Let’s focus on orchestration tools, which make all the pieces work together: they coordinate with the cluster to start containers on appropriate hosts and connect them to one another. Along the way, we’ll consider scaling and automatic failover, which are important features.
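
To make the scope of these tools concrete, below is a minimal sketch of the kind of reconciliation loop an orchestrator automates, written against the Docker SDK for Python. It assumes the docker Python package and a reachable Docker daemon; the image, label, and replica count are illustrative, and real orchestrators such as Swarm, Fleet, Kubernetes, and Mesos do this across many hosts with far more sophistication.

    import docker  # Docker SDK for Python (the "docker" package)

    DESIRED_REPLICAS = 3        # illustrative target
    IMAGE = "nginx:alpine"      # illustrative workload
    LABEL = {"managed-by": "toy-orchestrator"}
    FILTER = {"label": "managed-by=toy-orchestrator"}

    client = docker.from_env()

    def reconcile():
        # Destroy and replace failed containers rather than nursing them back.
        for c in client.containers.list(all=True, filters=FILTER):
            if c.status != "running":
                c.remove(force=True)
        running = client.containers.list(filters=FILTER)
        # Converge on the desired replica count: start missing containers...
        for _ in range(DESIRED_REPLICAS - len(running)):
            client.containers.run(IMAGE, detach=True, labels=LABEL)
        # ...and remove any surplus.
        for c in running[DESIRED_REPLICAS:]:
            c.remove(force=True)

    if __name__ == "__main__":
        reconcile()

Run on a schedule, this is a crude, single-host version of the desired-state reconciliation that the tools compared here implement cluster-wide.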

Read more…

Batten down the hatches

Four core questions that every security team must ask itself to develop its strategy in dealing with attacks.

Massive software vulnerabilities have been surfacing with increasingly high visibility, and the world’s computer administrators are repeatedly thrust into the cycle of confusion, anxiety, patching, and waiting for the Next Big One. The list of high-profile vulnerabilities in widely used software packages and platforms continues to grow. A recent phenomenon has researchers borrowing from the National Hurricane Center’s tradition and introducing a vulnerability with a formal name. Much as weather scientists track hurricanes, security researchers, analysts, and practitioners observe and track vulnerabilities as more details unfold and the true extent of the risk (and subsequent damage) becomes known.

Take, for example, the Android vulnerability released at the beginning of August 2015. This vulnerability, named “Stagefright” after the media library it affects, can lead to remote code execution (RCE) through several vectors, including MMS, email, HTTP, media applications, Bluetooth, and more. These factors, coupled with the fact that at its release there were no approved patches available for upwards of 95% of the world’s mobile Android footprint, mean the vulnerability is serious — especially to any organization with a significant Android population.

Read more…

Four short links: 6 July 2015

DeepDream, In-Flight WiFi, Computer Vision in Preservation, and Testing Distributed Systems

  1. DeepDream — the software that’s been giving the Internet acid-free trips.
  2. In-Flight WiFi Business — numbers and context for why some airlines (JetBlue) have fast free in-flight wifi while others (Delta) have pricey slow in-flight wifi. Four years ago ViaSat-1 went into geostationary orbit, putting all other broadband satellites to shame with 140 Gbps of total capacity. This is the Ka-band satellite that JetBlue’s fleet connects to, and while the airline has to share that bandwidth with homes across North America that subscribe to ViaSat’s Excede residential broadband service, it faces no shortage of capacity. That’s why JetBlue is able to deliver 10-15 Mbps speeds to its passengers.
  3. British Library Digitising Newspapers (The Guardian) — as well as photogrammetry methods used in the Great Parchment Book project, Terras and colleagues are exploring the potential of a host of techniques, including multispectral imaging (MSI). Inks, pencil marks, and paper all reflect, absorb, or emit particular wavelengths of light, ranging from the infrared end of the electromagnetic spectrum, through the visible region and into the UV. By taking photographs using different light sources and filters, it is possible to generate a suite of images. “We get back this stack of about 40 images of the [document] and then we can use image-processing to try to see what is in [some of them] and not others,” Terras explains.
  4. Testing a Distributed System (ACM) — This article discusses general strategies for testing distributed systems as well as specific strategies for testing distributed data storage systems.

How to leverage the browser cache with a CDN

An introduction to multi-level caching.

Since a content delivery network (CDN) is essentially a cache, you might be tempted not to use the browser cache at all, to avoid complexity. However, each cache has advantages that the other does not provide. In this post I will explain the advantages of each, and how to combine the two for the best performance of your website.

Why use both?

While CDNs do a good job of delivering assets very quickly, they can’t do much about users who are out in the boonies and barely have a single bar of reception on their phone. As a matter of fact, in the US, the 95th percentile for the round trip time (RTT) to all CDNs is well in excess of 200 milliseconds, according to Cedexis reports. That means at least 5% of your users, if not more, are likely to have a slow experience with your website or application. For reference, the 50th percentile, or median, RTT is around 45 milliseconds.

So why bother using a CDN at all? Why not just rely on the browser cache?
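
Part of the answer is that the two layers can be tuned independently. One common pattern (a general illustration, not necessarily the exact approach the full post describes) is to give the browser and the CDN different lifetimes via the Cache-Control header: max-age governs the browser's private cache, while s-maxage governs shared caches such as CDN edges. A minimal Flask sketch, with an illustrative endpoint and made-up TTLs:

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/api/products")  # illustrative endpoint
    def products():
        resp = jsonify(items=["widget", "gadget"])
        # max-age=60: the browser may reuse the response for a minute,
        # absorbing repeat requests even on slow, high-RTT connections.
        # s-maxage=3600: the CDN may serve it for an hour and can be
        # purged centrally, without waiting on every visitor's browser.
        resp.headers["Cache-Control"] = "public, max-age=60, s-maxage=3600"
        return resp

    if __name__ == "__main__":
        app.run()

The short browser TTL limits how stale any client can get, while the longer CDN TTL keeps traffic off the origin; the right split depends on how often the content changes.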

Read more…

Four short links: 30 April 2015

Managing Complex Data Projects, Graphical Linear Algebra, Consistent Hashing, and NoTCP Manifesto

  1. More Tools for Managing and Reproducing Complex Data Projects (Ben Lorica) — As I survey the landscape, the types of tools remain the same, but interfaces continue to improve, and domain specific languages (DSLs) are starting to appear in the context of data projects. One interesting trend is that popular user interface models are being adapted to different sets of data professionals (e.g. workflow tools for business users).
  2. Graphical Linear Algebra — or “Graphical The-Subject-That-Kicked-Nat’s-Butt” as I read it.
  3. Consistent Hashing: A Guide and Go Implementation — easy-to-follow article (and source).
  4. NoTCP Manifesto — a nice summary of the reasons to build custom protocols over UDP, masquerading as church-nailed heresy. Today’s heresy is just the larval stage of tomorrow’s constricting orthodoxy.

Managed DNS considered harmful

Outsourcing your DNS is not a magic bullet.

There is a frequent tendency to let our guard down when it comes to threats to our IT systems. Absent an immediate “hair-on-fire” situation, we may relax and assume all is well. Yet malicious activity such as hacking, phishing, malware, and DDoS attacks never stops accelerating in frequency and intensity.

So it’s important to have a “Plan B” DNS solution in place and ready before a crisis hits. That way, even if you’re taken off guard, you still have a backup plan and can respond appropriately.
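
As a concrete starting point (a hypothetical check, not anything easyDNS prescribes), the sketch below uses the dnspython package to look up a zone's NS records and report whether the delegation spans more than one provider; the domain and the crude provider-grouping heuristic are placeholders.

    import dns.resolver  # dnspython, assumed to be installed

    def ns_providers(domain):
        """Return the zone's NS hosts and a rough set of providers."""
        answer = dns.resolver.resolve(domain, "NS")
        hosts = sorted(rr.to_text().rstrip(".") for rr in answer)
        # Crude heuristic: group NS hosts by their last two labels
        # (e.g. "dns.example"), treating that as the provider.
        providers = {".".join(h.split(".")[-2:]) for h in hosts}
        return hosts, providers

    if __name__ == "__main__":
        hosts, providers = ns_providers("example.com")  # placeholder domain
        print("NS hosts:", hosts)
        if len(providers) < 2:
            print("All name servers sit with a single provider: no Plan B.")
        else:
            print("Delegation spans", len(providers), "providers.")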

DNS is one of those things nobody really thinks about, until it stops working. The first time easyDNS went off the air on April 15, 2003, it induced a type of existential crisis in me. That summer, after meditating intensely on the situation, I came away with the conclusion that the centralized managed DNS model, as we understood it then, was doomed.

My response at the time was a proposal to pivot to a DNS appliance with decentralized deployments but centralized monitoring and management. That concept was promptly shot down by my co-founders, and we’ve kept on with the centralized, hosted DNS model to this day.

The core problem is this: there are many reasons to elect to outsource your DNS to a managed DNS provider. Those reasons include:

Read more…

Full-stack tensions on the Web

How much do you need to know?

I expected that CSSDevConf would be primarily a show about front-end work, focused on work in clients and specifically in browsers. I kept running into conversations, though, about the challenges of moving between the front and back end, the client and the server side. Some were from developers suddenly told that they had to become “full-stack developers” covering the whole spectrum, while others were from front-end engineers suddenly finding a flood of back-end developers tinkering with the client side of their applications. “Full-stack” isn’t always a cheerful story.

In the early days of the Web, “full-stack” was normal. While there were certainly people who focused on running web servers or designing sites as beautiful as the technology would allow, there were lots of webmasters who knew how to design a site, write HTML, manage a server, and maybe write some CGI code for early applications.

Formal separation of concerns among HTML, CSS, and JavaScript made it easier to share responsibilities among specialists. As the dot-com boom proceeded, specialization accelerated, with dedicated designers, programmers, and sysadmins coming to the work. Perhaps there were too many titles.

Even as the bust set in, specialization remained the trend because Web projects — especially on the server side — had grown far more complicated. They weren’t just a server and a few scripts, but a complete stack, including templates, logic, and usually a database. Whether you preferred the LAMP stack, a Microsoft ASP stack, or perhaps Java servlets and JSP, the server side rapidly became its own complex arena. Intranet development in particular exploded as a way to build server-based applications that could (cheaply) connect data sources to users on multiple platforms. Writing web apps was faster and cheaper than writing desktop apps, with more tolerance for platform variation.
Read more…