- DeepDream — the software that’s been giving the Internet acid-free trips.
- In-Flight WiFi Business — numbers and context for why some airlines (JetBlue) have fast free in-flight wifi while others (Delta) have pricey slow in-flight wifi. Four years ago ViaSat-1 went into geostationary orbit, putting all other broadband satellites to shame with 140 Gbps of total capacity. This is the Ka-band satellite that JetBlue’s fleet connects to, and while the airline has to share that bandwidth with homes across North America that subscribe to ViaSat’s Exede residential broadband service, it faces no shortage of capacity. That’s why JetBlue is able to deliver 10-15 Mbps speeds to its passengers.
- British Library Digitising Newspapers (The Guardian) — as well as photogrammetry methods used in the Great Parchment Book project, Terras and colleagues are exploring the potential of a host of techniques, including multispectral imaging (MSI). Inks, pencil marks, and paper all reflect, absorb, or emit particular wavelengths of light, ranging from the infrared end of the electromagnetic spectrum, through the visible region and into the UV. By taking photographs using different light sources and filters, it is possible to generate a suite of images. “We get back this stack of about 40 images of the [document] and then we can use image-processing to try to see what is in [some of them] and not others,” Terras explains. (A rough sketch of that band-comparison idea follows this list.)
- Testing a Distributed System (ACM) — This article discusses general strategies for testing distributed systems as well as specific strategies for testing distributed data storage systems.
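Here is a minimal sketch of the band-comparison idea from the British Library item above. It uses NumPy, with random arrays standing in for the ~40 registered captures; the stack shape, the choice of bands, and the threshold are all assumptions for illustration, not the project’s actual pipeline.

```python
# Sketch only: random arrays stand in for registered multispectral captures.
import numpy as np

rng = np.random.default_rng(0)
stack = rng.random((40, 256, 256))  # ~40 captures under different lights/filters

infrared = stack[0]      # stand-in for an infrared capture
ultraviolet = stack[-1]  # stand-in for a UV capture

# Pixels that look very different between two bands are candidates for
# marks (ink, pencil) visible in one band but not the other.
contrast = np.abs(infrared - ultraviolet)
mask = contrast > contrast.mean() + 2 * contrast.std()
print(f"{mask.sum()} pixels differ strongly between the two bands")
```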
Four core questions that every security team must ask itself to develop its strategy in dealing with attacks.
Massive software vulnerabilities have been surfacing with increasingly high visibility, and the world’s computer administrators are repeatedly thrust into the cycle of confusion, anxiety, patching, and waiting for the Next Big One. The list of high-profile vulnerabilities in widely used software packages and platforms continues to grow. In a recent trend, researchers have borrowed from the National Hurricane Center’s practice of introducing each new threat with a formal name. Much as weather scientists track hurricanes, security researchers, analysts, and practitioners observe and track these vulnerabilities as more details unfold and the true extent of the risk (and subsequent damage) becomes known.
Take, for example, the Android vulnerability disclosed at the beginning of August 2015. This vulnerability, named “Stagefright” after the media library it affects, can lead to remote code execution (RCE) through several vectors, including MMS, email, HTTP, media applications, Bluetooth, and more. These factors, coupled with the fact that at its release there were no approved patches available for upwards of 95% of the world’s mobile Android footprint, mean the vulnerability is serious, especially for any organization with a significant Android population.
An introduction to multi-level caching.
Since a content delivery network (CDN) is essentially a cache, you might be tempted not to use the browser cache as well, to avoid complexity. However, each cache has advantages that the other does not provide. In this post I will explain the advantages of each, and how to combine the two for the best performance of your website.
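As a rough illustration of one way to combine the two (not necessarily the approach detailed later in the post), a response can advertise different lifetimes to the browser and to shared caches such as a CDN through the Cache-Control header. The sketch below uses Flask with a hypothetical route and file name.

```python
# Hypothetical sketch (assumes Flask): max-age governs the browser cache,
# while s-maxage governs shared caches such as a CDN edge.
from flask import Flask, make_response, send_file

app = Flask(__name__)

@app.route("/static/app.css")  # hypothetical asset
def styles():
    resp = make_response(send_file("app.css"))
    # Browser revalidates after 5 minutes; the CDN may keep it for a day.
    resp.headers["Cache-Control"] = "public, max-age=300, s-maxage=86400"
    return resp

if __name__ == "__main__":
    app.run()
```

With a split like this, a cheap revalidation against the nearby CDN edge refreshes the browser copy, while the origin is only consulted when the CDN’s own copy expires.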
Why use both?
While CDNs do a good job of delivering assets very quickly, they can’t do much about users who are out in the boonies and barely have a single bar of reception on their phone. As a matter of fact, in the US, the 95th percentile for the round trip time (RTT) to all CDNs is well in excess of 200 milliseconds, according to Cedexis reports. That means at least 5% of your users, if not more, are likely to have a slow experience with your website or application. For reference, the 50th percentile, or median, RTT is around 45 milliseconds.
So why bother using a CDN at all? Why not just rely on the browser cache?
Outsourcing your DNS is not a magic bullet.
There is a frequent tendency to let your guard down when it comes to threats to your IT systems. Absent an immediate “hair-on-fire” situation, we may relax and assume all is well. Yet malicious activity such as hacking, phishing, malware, and DDoS attacks keeps accelerating in both frequency and intensity.
So it’s important to have a “Plan B” DNS solution in place and ready before a crisis hits. That way, even if you’re taken off guard, you still have a backup plan and can respond appropriately.
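One practical way to keep a “Plan B” provider ready is to check routinely that it serves the same zone data as your primary. Below is a sketch using the dnspython package; the zone and nameserver names are hypothetical stand-ins, not a recommendation of any particular provider.

```python
# Sketch only: compare the SOA serial served by a primary and a backup
# DNS provider so a stale secondary is caught before a crisis hits.
# Requires the dnspython package; zone and server names are hypothetical.
import dns.resolver

ZONE = "example.com"
PROVIDERS = ["ns1.primary-dns.example", "ns2.backup-dns.example"]

def soa_serial(zone, nameserver):
    ip = dns.resolver.resolve(nameserver, "A")[0].address
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [ip]
    return resolver.resolve(zone, "SOA")[0].serial

serials = {ns: soa_serial(ZONE, ns) for ns in PROVIDERS}
print(serials)
if len(set(serials.values())) > 1:
    print("WARNING: providers are serving different versions of the zone")
```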
DNS is one of those things nobody really thinks about, until it stops working. The first time easyDNS went off the air on April 15, 2003, it induced a type of existential crisis in me. That summer, after meditating intensely on the situation, I came away with the conclusion that the centralized managed DNS model, as we understood it then, was doomed.
My response at the time was a proposal to pivot to a DNS appliance with decentralized deployments but centralized monitoring and management. That concept was promptly shot down by my co-founders, and we’ve kept on with the centralized, hosted DNS model to this day.
The core problem is this: there are many reasons to elect to outsource your DNS to a managed DNS provider. Those reasons include:
How much do you need to know?
I expected that CSSDevConf would be primarily a show about front-end work, focused on work in clients and specifically in browsers. I kept running into conversations, though, about the challenges of moving between the front and back end, the client and the server side. Some were from developers suddenly told that they had to become “full-stack developers” covering the whole spectrum, while others were from front-end engineers suddenly finding a flood of back-end developers tinkering with the client side of their applications. “Full-stack” isn’t always a cheerful story.
In the early days of the Web, “full-stack” was normal. While there were certainly people who focused on running web servers or designing sites as beautiful as the technology would allow, there were lots of webmasters who knew how to design a site, write HTML, manage a server, and maybe write some CGI code for early applications.
Even as the bust set in, specialization remained the trend because Web projects — especially on the server side — had grown far more complicated. They weren’t just a server and a few scripts, but a complete stack, including templates, logic, and usually a database. Whether you preferred the LAMP stack, a Microsoft ASP stack, or perhaps Java servlets and JSP, the server side rapidly became its own complex arena. Intranet development in particular exploded as a way to build server-based applications that could (cheaply) connect data sources to users on multiple platforms. Writing web apps was faster and cheaper than writing desktop apps, with more tolerance for platform variation.
Range, power consumption, scalability, and bandwidth dominate technology decisions.
Three types of network topologies are used in the Internet of Things: point-to-point, star, and mesh. To explore the attributes and capabilities of each of these topologies, we defined a hypothetical (but realistic) application in the building monitoring and energy management space and methodically defined its networking requirements.
Let’s pull it all together to make a network selection for our building monitoring application. As described previously, the application will monitor, analyze, and optimize energy usage throughout the user’s properties. To accomplish this, monitoring and control points need to be deployed throughout each building, including occupancy and temperature sensors. Sensor data will be aggregated back to a central building automation panel located in each building. Continuous data collection will provide higher-resolution temperature and occupancy information, yielding better insight into HVAC performance and building utilization patterns. Comparing energy utilization across the portfolio of properties allows lower-performing buildings to be flagged.
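As a toy illustration of that last step, flagging lower-performing buildings can be as simple as comparing each building’s aggregated usage against the portfolio average. The readings and the 20% threshold below are made up for the example.

```python
# Minimal sketch with made-up sample data: aggregate per-building energy
# readings and flag buildings that stand out against the portfolio average.
from statistics import mean

# kWh readings aggregated by each building's automation panel (hypothetical)
portfolio = {
    "building-a": [410, 395, 420],
    "building-b": [610, 640, 602],
    "building-c": [398, 405, 415],
}

averages = {name: mean(readings) for name, readings in portfolio.items()}
portfolio_avg = mean(averages.values())

# Flag buildings using noticeably more energy than the portfolio average.
flagged = [name for name, avg in averages.items() if avg > 1.2 * portfolio_avg]
print("Review HVAC performance in:", flagged)
```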
A suitable network topology for building automation.
Editor’s note: this article is part of a series exploring the role of networking in the Internet of Things.
Today we are going to consider the attributes of wireless mesh networking, particularly in the context of our building monitoring and energy application.
A host of new mesh networking technologies came onto the scene in the mid-2000s through start-up ventures such as Millennial Net, Ember, Dust Networks, and others. The mesh network topology is ideally suited to providing broad-area coverage for low-power, low-data-rate applications in areas such as industrial automation, home and commercial building automation, medical monitoring, and agriculture.