"networking" entries

Four short links: 6 July 2015

DeepDream, In-Flight WiFi, Computer Vision in Preservation, and Testing Distributed Systems

  1. DeepDream — the software that’s been giving the Internet acid-free trips.
  2. In-Flight WiFi Business — numbers and context for why some airlines (JetBlue) have fast free in-flight wifi while others (Delta) have pricey slow in-flight wifi. Four years ago ViaSat-1 went into geostationary orbit, putting all other broadband satellites to shame with 140 Gbps of total capacity. This is the Ka-band satellite that JetBlue’s fleet connects to, and while the airline has to share that bandwidth with homes across North America that subscribe to ViaSat’s Exede residential broadband service, it faces no shortage of capacity. That’s why JetBlue is able to deliver 10-15 Mbps speeds to its passengers.
  3. British Library Digitising Newspapers (The Guardian) — as well as photogrammetry methods used in the Great Parchment Book project, Terras and colleagues are exploring the potential of a host of techniques, including multispectral imaging (MSI). Inks, pencil marks, and paper all reflect, absorb, or emit particular wavelengths of light, ranging from the infrared end of the electromagnetic spectrum, through the visible region and into the UV. By taking photographs using different light sources and filters, it is possible to generate a suite of images. “We get back this stack of about 40 images of the [document] and then we can use image-processing to try to see what is in [some of them] and not others,” Terras explains.
  4. Testing a Distributed System (ACM) — This article discusses general strategies for testing distributed systems as well as specific strategies for testing distributed data storage systems.

How to leverage the browser cache with a CDN

An introduction to multi-level caching.

Since a content delivery network (CDN) is essentially a cache, you might be tempted to skip the browser cache entirely to avoid complexity. However, each cache has advantages the other does not provide. In this post I will explain the advantages of each, and how to combine the two for optimal performance of your website.
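
As a rough illustration of what combining the two can look like, here is a minimal Go handler that gives each layer its own lifetime. It relies on standard Cache-Control semantics: max-age applies to the browser, while s-maxage overrides it for shared caches such as CDNs. The durations are arbitrary placeholders, not recommendations from this post.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Browsers cache this response for 60 seconds (max-age), while
	// shared caches such as CDNs may keep it for a day (s-maxage
	// applies to shared caches only and overrides max-age there).
	w.Header().Set("Cache-Control", "public, max-age=60, s-maxage=86400")
	fmt.Fprintln(w, "hello from origin")
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

With a split like this, the CDN absorbs most traffic to the origin, while the short browser TTL keeps clients from holding stale content for long.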

Why use both?

While CDNs do a good job of delivering assets very quickly, they can’t do much about users who are out in the boonies and barely have a single bar of reception on their phone. As a matter of fact, in the US, the 95th percentile for the round trip time (RTT) to all CDNs is well in excess of 200 milliseconds, according to Cedexis reports. That means at least 5% of your users, if not more, are likely to have a slow experience with your website or application. For reference, the 50th percentile, or median, RTT is around 45 milliseconds.

So why bother using a CDN at all? Why not just rely on the browser cache?

Read more…

Four short links: 30 April 2015

Managing Complex Data Projects, Graphical Linear Algebra, Consistent Hashing, and NoTCP Manifesto

  1. More Tools for Managing and Reproducing Complex Data Projects (Ben Lorica) — As I survey the landscape, the types of tools remain the same, but interfaces continue to improve, and domain specific languages (DSLs) are starting to appear in the context of data projects. One interesting trend is that popular user interface models are being adapted to different sets of data professionals (e.g. workflow tools for business users).
  2. Graphical Linear Algebra — or “Graphical The-Subject-That-Kicked-Nat’s-Butt” as I read it.
  3. Consistent Hashing: A Guide and Go Implementation — easy-to-follow article (and source); a toy version of the core idea is sketched just after this list.
  4. NoTCP Manifesto — a nice summary of the reasons to build custom protocols over UDP, masquerading as church-nailed heresy. Today’s heresy is just the larval stage of tomorrow’s constricting orthodoxy.
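
Since the linked piece is a guide and Go implementation, here is a deliberately tiny Go sketch of the core idea, independent of that source: nodes are hashed onto a ring at several virtual points, and a key is owned by the first node clockwise from the key’s hash, so adding or removing a node only remaps nearby keys. The names Ring, Add, and Get are mine, not the linked implementation’s.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// Ring is a minimal consistent-hash ring. Each node is placed at
// several points ("virtual nodes") so keys spread more evenly.
type Ring struct {
	replicas int
	hashes   []uint32          // sorted point hashes
	nodes    map[uint32]string // point hash -> node name
}

func NewRing(replicas int) *Ring {
	return &Ring{replicas: replicas, nodes: make(map[uint32]string)}
}

func hashOf(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

// Add places a node at `replicas` points on the ring.
func (r *Ring) Add(node string) {
	for i := 0; i < r.replicas; i++ {
		h := hashOf(fmt.Sprintf("%s#%d", node, i))
		r.nodes[h] = node
		r.hashes = append(r.hashes, h)
	}
	sort.Slice(r.hashes, func(i, j int) bool { return r.hashes[i] < r.hashes[j] })
}

// Get returns the node owning key: the first point at or after the
// key's hash, wrapping around to the start of the ring if needed.
func (r *Ring) Get(key string) string {
	h := hashOf(key)
	i := sort.Search(len(r.hashes), func(i int) bool { return r.hashes[i] >= h })
	if i == len(r.hashes) {
		i = 0
	}
	return r.nodes[r.hashes[i]]
}

func main() {
	ring := NewRing(50)
	for _, n := range []string{"cache-a", "cache-b", "cache-c"} {
		ring.Add(n)
	}
	fmt.Println(ring.Get("user:42")) // same key always maps to the same node
}
```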

Managed DNS considered harmful

Outsourcing your DNS is not a magic bullet.

It’s easy to let your guard down when it comes to threats to your IT systems. Absent an immediate “hair-on-fire” situation, we may relax and assume all is well. Yet malicious activity such as hacking, phishing, malware, and DDoS attacks only keeps accelerating in frequency and intensity.

So it’s important to have a “Plan B” DNS solution in place and ready before a crisis hits. That way, even if you’re taken off guard, you still have a backup plan and can respond appropriately.
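
One concrete piece of a “Plan B” is routinely checking that your backup provider actually serves the same answers as your primary. Below is a small Go sketch of that check; the nameserver hostnames are hypothetical placeholders, and a real version would compare the returned record sets rather than just print them.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// resolverAt returns a resolver pinned to one specific nameserver,
// bypassing the system's recursive resolver entirely.
func resolverAt(server string) *net.Resolver {
	return &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, server)
		},
	}
}

func main() {
	ctx := context.Background()
	// Hypothetical nameservers for the primary and backup DNS providers.
	for _, ns := range []string{"ns1.primary-dns.example:53", "ns1.backup-dns.example:53"} {
		addrs, err := resolverAt(ns).LookupHost(ctx, "www.example.com")
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", ns, err)
			continue
		}
		fmt.Printf("%s: %v\n", ns, addrs)
	}
}
```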

DNS is one of those things nobody really thinks about, until it stops working. When easyDNS first went off the air, on April 15, 2003, it induced something of an existential crisis in me. That summer, after meditating intensely on the situation, I came away with the conclusion that the centralized managed DNS model, as we understood it then, was doomed.

My response at the time was a proposal to pivot to a DNS appliance with decentralized deployments, but centralized monitoring and management. That concept was promptly shot down by my co-founders, and we’ve kept on with the centralized, hosted DNS model to this day.

The core problem is this: there are many reasons to outsource your DNS to a managed DNS provider. Those reasons include:

Read more…


Full-stack tensions on the Web

How much do you need to know?

I expected that CSSDevConf would be primarily a show about front-end work, focused on work in clients and specifically in browsers. I kept running into conversations, though, about the challenges of moving between the front and back end, the client and the server side. Some were from developers suddenly told that they had to become “full-stack developers” covering the whole spectrum, while others were from front-end engineers suddenly finding a flood of back-end developers tinkering with the client side of their applications. “Full-stack” isn’t always a cheerful story.

In the early days of the Web, “full-stack” was normal. While there were certainly people who focused on running web servers or designing sites as beautiful as the technology would allow, there were lots of webmasters who knew how to design a site, write HTML, manage a server, and maybe write some CGI code for early applications.

Formal separation of concerns among HTML, CSS, and JavaScript made it easier to share responsibilities among specialists. As the dot-com boom proceeded, specialization accelerated, with dedicated designers, programmers, and sysadmins coming to the work. Perhaps there were too many titles.

Even as the bust set in, specialization remained the trend because Web projects — especially on the server side — had grown far more complicated. They weren’t just a server and a few scripts, but a complete stack, including templates, logic, and usually a database. Whether you preferred the LAMP stack, a Microsoft ASP stack, or perhaps Java servlets and JSP, the server side rapidly became its own complex arena. Intranet development in particular exploded as a way to build server-based applications that could (cheaply) connect data sources to users on multiple platforms. Writing web apps was faster and cheaper than writing desktop apps, with more tolerance for platform variation.
Read more…

Four short links: 29 January 2015

Security Videos, Network Simulation, UX Book, and Profit in Perspective

  1. ShmooCon 2015 Videos — videos of the security talks from ShmooCon 2015.
  2. Comcast (Github) — Comcast is a tool designed to simulate common network problems like latency, bandwidth restrictions, and dropped/reordered/corrupted packets. On BSD-derived systems such as OSX, we use tools like ipfw and pfctl to inject failure. On Linux, we use iptables and tc. Comcast is merely a thin wrapper around these controls.
  3. The UX Reader — This ebook is a collection of the most popular articles from our [MailChimp] UX Newsletter, along with some exclusive content.
  4. Bad Assumptions — Apple lost more money to currency fluctuations than Google makes in a quarter.
Four short links: 29 October 2014

Tweet Parsing, Focus and Money, Challenging Open Data Beliefs, and Exploring ISP Data

  1. TweetNLP — CMU open source natural language parsing tools for making sense of Tweets.
  2. Interview with Google X Life Sciences’ Head (Medium) — I will have been here two years this March. In nineteen months we have been able to hire more than a hundred scientists to work on this. We’ve been able to build customized labs and get the equipment to make nanoparticles and decorate them and functionalize them. We’ve been able to strike up collaborations with MIT and Stanford and Duke. We’ve been able to initiate protocols and partnerships with companies like Novartis. We’ve been able to initiate trials like the baseline trial. This would be a good decade somewhere else. The power of focus and money.
  3. Schooloscope Open Data Post-Mortem — The case of Schooloscope and the wider question of public access to school data challenges the belief that sunlight is the best disinfectant, that government transparency would always lead to better government, better results. It challenges the sentiments that see data as value-neutral and its representation as devoid of politics. In fact, access to school data exposes a sharp contrast between the private interest of the family (best education for my child) and the public interest of the government (best education for all citizens).
  4. M-Lab Observatory — explorable data on the data experience (RTT, upload speed, etc.) across different ISPs in different geographies over time.

How to identify a scalable IoT network topology

Range, power consumption, scalability, and bandwidth dominate technology decisions.

HVAC Air Group in an airlift. Source: Hvac en kabelgoot

Editor’s note: this article is part of a series exploring the role of networking in the Internet of Things.

Three types of networking topologies are utilized in the Internet of Things: point-to-point, star, and mesh. To provide a way to explore the attributes and capabilities of each of these topologies, we defined a hypothetical (but realistic) application in the building monitoring and energy management space and methodically defined its networking requirements.

Let’s pull it all together to make a network selection for our building monitoring application. As described previously, the application will monitor, analyze, and optimize energy usage throughout the user’s properties. To accomplish this, monitoring and control points need to be deployed throughout each building, including occupancy and temperature sensors. Sensor data will be aggregated back to a central building automation panel located in each building. Continuous data collection will provide higher-resolution temperature and occupancy information, yielding better insight into HVAC performance and building utilization patterns. Comparing energy utilization across the portfolio of properties allows lower-performing buildings to be flagged.
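
To make that last step concrete, here is a toy Go sketch of the portfolio comparison. The Reading type and the flagging rule (energy per occupied sample exceeding the portfolio mean by some factor) are illustrative assumptions, not details from the article.

```go
package main

import "fmt"

// Reading is one sample reported from a building automation panel.
// Field names are invented for this sketch.
type Reading struct {
	Building string
	KWh      float64 // energy used during the sample interval
	Occupied bool
}

// flagWorst returns buildings whose average energy use per occupied
// sample exceeds the portfolio mean by the given factor.
func flagWorst(readings []Reading, factor float64) []string {
	use := map[string]float64{}
	count := map[string]float64{}
	for _, r := range readings {
		if r.Occupied {
			use[r.Building] += r.KWh
			count[r.Building]++
		}
	}
	avg := map[string]float64{}
	var total, n float64
	for b := range use {
		avg[b] = use[b] / count[b]
		total += avg[b]
		n++
	}
	mean := total / n
	var flagged []string
	for b, a := range avg {
		if a > mean*factor {
			flagged = append(flagged, b)
		}
	}
	return flagged
}

func main() {
	readings := []Reading{
		{"HQ", 120, true}, {"HQ", 110, true},
		{"Depot", 300, true}, {"Depot", 280, true},
		{"Lab", 90, true},
	}
	fmt.Println(flagWorst(readings, 1.2)) // [Depot]
}
```
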
Read more…


Mesh networking extends IoT reach

A suitable network topology for building automation.

Editor’s note: this article is part of a series exploring the role of networking in the Internet of Things.

Today we are going to consider the attributes of wireless mesh networking, particularly in the context of our building monitoring and energy application.

A host of new mesh networking technologies came onto the scene in the mid-2000s through start-up ventures such as Millennial Net, Ember, Dust Networks, and others. The mesh network topology is ideally suited to providing broad area coverage for low-power, low-data-rate applications in areas like industrial automation, home and commercial building automation, medical monitoring, and agriculture.

Read more…


The role of Wi-Fi in the Internet of Things

When to use a star network.

Editor’s note: this article is part of a series exploring the role of networking in the Internet of Things.

In my previous post we evaluated a point-to-point networking technology, specifically Bluetooth, to determine its applicability to our building monitoring and energy application. In this post, we will evaluate the use of a star networking technology to meet our application needs.

A star network consists of one central hub that establishes a point-to-point network connection with all other nodes in the network (e.g. sensor nodes). This central hub acts as a common connection point for all nodes in the network. All peripheral nodes may therefore communicate with all others by transmitting to, and receiving from, the central hub only.
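
As a toy illustration of that constraint, the Go sketch below models a star network with channels: every frame goes up to the hub, and the hub forwards it to the destination node’s inbox. The message format and names are invented for the example; they stand in for whatever the radio layer actually carries.

```go
package main

import "fmt"

// msg is a frame sent between peripheral nodes via the hub.
type msg struct {
	from, to string
	payload  string
}

// runHub is the central connection point: every node transmits to the
// hub, and the hub forwards each frame to the destination node's inbox.
// Peripheral nodes never talk to each other directly.
func runHub(uplink <-chan msg, inboxes map[string]chan msg) {
	for m := range uplink {
		if inbox, ok := inboxes[m.to]; ok {
			inbox <- m
		}
	}
}

func main() {
	uplink := make(chan msg)
	inboxes := map[string]chan msg{
		"sensor-1": make(chan msg, 1),
		"sensor-2": make(chan msg, 1),
	}
	go runHub(uplink, inboxes)

	// sensor-1 reaches sensor-2 only by transmitting through the hub.
	uplink <- msg{from: "sensor-1", to: "sensor-2", payload: "temp=21.5C"}
	m := <-inboxes["sensor-2"]
	fmt.Printf("%s -> %s: %s\n", m.from, m.to, m.payload)
}
```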

Today, Wi-Fi is by far the most commonly used wireless star topology. It is deployed widely throughout many environments, providing near-ubiquitous Internet access in facilities such as schools, campuses, office buildings, lodging, and residential homes. Wi-Fi is not itself a standard; it is a term trademarked by the Wi-Fi Alliance, covering a number of IEEE 802.11 standards along with details of implementation.

As in past posts, let’s take a closer look at the technology and evaluate Wi-Fi’s capabilities against the nine key application attributes that characterized our building monitoring and energy management application.

Read more…
