"performance" entries

Better code is cheaper

Implementing software quality standards yields measurable results.


Listen to the podcast “Better code is cheaper” to learn how the Software Improvement Group (SIG) is paving the way for software quality and maintainability.

Software quality is an often-overlooked development parameter, pushed aside in favor of the items that resonate outside of the engineering team: a faster schedule and an on-budget project. Joost Visser, Head of Research at Software Improvement Group (SIG), sat down with me to explain how a focus on quality helps achieve the fastest possible schedule and the lowest possible cost of development and maintenance.

Read more…

Comments: 3

Compliance at Speed

Achieve performance goals in the face of compliance issues.

Download a free copy of Compliance at Speed, an O’Reilly report by Mark Lustig that breaks down the IT issues facing finance, healthcare, and other heavily regulated industries.

Today’s technology should work and perform without issues. When we build systems that work and perform, nobody pays attention; when they’re slow and unstable, everyone notices. This sentiment was never more evident than during last year’s Healthcare.gov debacle. Building systems that work and meet our users’ expectations is always the number one priority.

Regardless of who we work for, the challenges of performance and DevOps are universal. There is one constraint larger companies seem to face more often — regulatory compliance. As privacy concerns have become more pervasive, compliance affects all of our companies in one way or another.

Reputation is based on trust. If I’m looking up my credit card balance and I end up seeing someone else’s information, I’ve lost trust in the credit card company, and they’ve lost a customer.

Large banks and health care organizations have been dealing with compliance for years, but every industry has constraints. Online retailers need to meet privacy and security standards. The social networking industry faces regulations specific to consumer protection and the use of customer information. No industry is immune from compliance, as emerging regulations, both domestic and international, create more challenges to achieving performance objectives each year. Any website that uses, stores, or processes personal or payment information must address these challenges.

Read more…


The language and metrics of UX evolve at Velocity 2015

As developers and designers converge, we're seeing an increased focus on the user's perspective.

Editor’s note: The O’Reilly Velocity Conference in Santa Clara was held last week. The event explored the essential trends driving web operations and performance forward. In the post that follows, Mark Zeman digs into recent changes he’s observed in one aspect of Velocity: the role, language, and metrics surrounding user experience.

I’ve attended four O’Reilly Velocity conferences over the last year, and I was struck by a notable shift in the conversations at Velocity in Santa Clara, Calif. Many speakers and attendees have started to change their language and describe the experience of their websites and apps from the user’s perspective.

The balance has shifted from just talking about how fast or reliable a particular system is to the overall experience a user has when they interact with a product. Many people are now looking at themselves from the outside in and developing more empathy for their users. The words “user” and “user experience” were mentioned again and again by speakers.

Here are recent talks from Velocity and other events that highlight this shift to UX concerns. Read more…

Comment: 1

How we got to the HTTP/2 and HPACK RFCs

A brief history of SPDY and HTTP/2.

Download a free copy of HTTP/2. This post is an excerpt by Ilya Grigorik from High Performance Browser Networking, the essential guide to networking and web performance for web developers.

SPDY was an experimental protocol, developed at Google and announced in mid-2009, whose primary goal was to try to reduce the load latency of web pages by addressing some of the well-known performance limitations of HTTP/1.1. Specifically, the outlined project goals were set as follows:

  • Target a 50% reduction in page load time (PLT).
  • Avoid the need for any changes to content by website authors.
  • Minimize deployment complexity, avoid changes in network infrastructure.
  • Develop this new protocol in partnership with the open-source community.
  • Gather real performance data to (in)validate the experimental protocol.

To achieve the 50% PLT improvement, SPDY aimed to make more efficient use of the underlying TCP connection by introducing a new binary framing layer to enable request and response multiplexing, prioritization, and header compression.
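To make “binary framing layer” concrete, here is a minimal sketch (ours, not from the book) that parses the 9-byte frame header HTTP/2 eventually standardized in RFC 7540: a 24-bit payload length, an 8-bit frame type, an 8-bit flags field, and a reserved bit followed by a 31-bit stream identifier. SPDY’s original framing differed in its details, but the idea is the same: because frames carry a stream identifier, many requests and responses can be interleaved over a single TCP connection and still be parsed unambiguously.

    import struct

    def parse_frame_header(data: bytes):
        # 9-byte header: 24-bit length, 8-bit type, 8-bit flags,
        # 1 reserved bit + 31-bit stream identifier (RFC 7540, section 4.1).
        if len(data) < 9:
            raise ValueError("need at least 9 bytes for a frame header")
        length_hi, length_lo, frame_type, flags, stream_id = struct.unpack(
            ">BHBBI", data[:9])
        length = (length_hi << 16) | length_lo
        stream_id &= 0x7FFFFFFF  # clear the reserved bit
        return length, frame_type, flags, stream_id

    # Example: a 12-byte HEADERS frame (type 0x1) with the END_HEADERS
    # flag (0x4) set, on stream 1.
    header = bytes([0x00, 0x00, 0x0C, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01])
    print(parse_frame_header(header))  # -> (12, 1, 4, 1)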

Not long after the initial announcement, Mike Belshe and Roberto Peon, both software engineers at Google, shared their first results, documentation, and source code for the experimental implementation of the new SPDY protocol:

So far we have only tested SPDY in lab conditions. The initial results are very encouraging: when we download the top 25 websites over simulated home network connections, we see a significant improvement in performance—pages loaded up to 55% faster.

— A 2x Faster Web, Chromium Blog

Fast-forward to 2012 and the new experimental protocol was supported in Chrome, Firefox, and Opera, and a rapidly growing number of sites, both large (e.g., Google, Twitter, Facebook) and small, were deploying SPDY within their infrastructure. In effect, SPDY was on track to become a de facto standard through growing industry adoption.

Read more…


Ghosts in the machines

The secret to successful infrastructure automation is people.


“The trouble with automation is that it often gives us what we don’t need at the cost of what we do.” —Nicholas Carr, The Glass Cage: Automation and Us

Virtualization and cloud hosting platforms have pervasively decoupled infrastructure from its underlying hardware over the past decade. This has led to a massive shift towards what many are calling dynamic infrastructure, wherein infrastructure and the tools and services used to manage it are treated as code, allowing operations teams to adopt software approaches that have dramatically changed how they operate. But with automation comes a great deal of fear, uncertainty, and doubt.

Common (mis)perceptions of automation tend to pop up at the extreme ends: It will either liberate your people from ever having to worry about mundane tasks and details, running intelligently in the background, or it will make SysAdmins irrelevant and eventually replace all IT jobs (and beyond). Of course, the truth is very much somewhere in between, and finding it requires a fundamental rethinking of the relationship between humans and automation.

Read more…

Comment: 1
Four short links: 7 April 2015

JavaScript Numeric Methods, Misunderstood Statistics, Web Speed, and Sentiment Analysis

  1. NumericJS — numerical methods in JavaScript.
  2. P Values are not Error Probabilities (PDF) — In particular, we illustrate how this mixing of statistical testing methodologies has resulted in widespread confusion over the interpretation of p values (evidential measures) and α levels (measures of error). We demonstrate that this confusion was a problem between the Fisherian and Neyman–Pearson camps, is not uncommon among statisticians, is prevalent in statistics textbooks, and is well nigh universal in the pages of leading (marketing) journals. This mass confusion, in turn, has rendered applications of classical statistical testing all but meaningless among applied researchers. A short simulation after this list illustrates the distinction.
  3. Breaking the 1000ms Time to Glass Mobile Barrier (YouTube) — see also slides. Stay under 250 ms to feel “fast.” Stay under 1000 ms to keep users’ attention.
  4. Modern Methods for Sentiment Analysis — Recently, Google developed a method called Word2Vec that captures the context of words, while at the same time reducing the size of the data. Gentle introduction, with code; a rough sketch of the idea also follows this list.
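Here is the small simulation promised in link 2 (ours, not the paper’s). With α fixed at 0.05, the share of rejections that turn out to be mistakes depends on how often the tested effects are truly null: α bounds P(reject | null), not P(null | reject), which is exactly the confusion the paper documents. It assumes NumPy and SciPy; the 90% null rate and 0.5 effect size are illustrative.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha, n, trials = 0.05, 30, 20000
    null_is_true = rng.random(trials) < 0.9  # 90% of tested effects are null

    rejections = false_rejections = 0
    for truly_null in null_is_true:
        effect = 0.0 if truly_null else 0.5
        sample = rng.normal(effect, 1.0, n)
        if stats.ttest_1samp(sample, 0.0).pvalue < alpha:
            rejections += 1
            false_rejections += bool(truly_null)

    # Typically well above 0.05, even though every test used alpha = 0.05.
    print(f"share of rejections that are errors: {false_rejections / rejections:.2f}")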
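And a rough sketch of the Word2Vec approach from link 4 (ours, not the post’s code): train word vectors, average them into one vector per document, and feed those to an off-the-shelf classifier. It assumes gensim 4.x and scikit-learn, and the four-document corpus is purely illustrative.

    import numpy as np
    from gensim.models import Word2Vec
    from sklearn.linear_model import LogisticRegression

    docs = [("great product loved it".split(), 1),
            ("terrible waste of money".split(), 0),
            ("really great value".split(), 1),
            ("awful terrible quality".split(), 0)]

    # Word2Vec learns a dense vector per word from the word's contexts.
    model = Word2Vec([tokens for tokens, _ in docs],
                     vector_size=50, window=3, min_count=1, seed=42)

    def doc_vector(tokens):
        # Represent a document as the mean of its word vectors.
        vecs = [model.wv[t] for t in tokens if t in model.wv]
        return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

    X = np.array([doc_vector(tokens) for tokens, _ in docs])
    y = np.array([label for _, label in docs])
    clf = LogisticRegression().fit(X, y)
    # Predicted label for an unseen phrase (toy data, illustrative only).
    print(clf.predict([doc_vector("great quality".split())]))
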
Comment: 1
Four short links: 24 March 2015

Tricorder Prototype, Web Performance, 3D Licensing, and Network Simulation

  1. Tricorder Prototype — collar + earpiece, base station, diagnostic stick (lab tests for diabetes, pneumonia, TB, etc.), and scanning wand (examine lesions, otoscope for ears, even a spirometer). (via Slashdot)
  2. Souders Joins SpeedCurve — During these engagements, I’ve seen that many of these companies don’t have the necessary tools to help them identify how performance is impacting (hurting) the user experience on their websites. There is even less information about ways to improve performance. The standard performance metric is page load time, but there’s often no correlation between page load time and the user’s experience. We need to shift from network-based metrics to user experience metrics that focus on rendering and when content becomes available. That’s exactly what Mark is doing at SpeedCurve, and why I’m excited to join him. A toy comparison after this list makes the metric shift concrete.
  3. 3 Steps for Licensing Your 3D-Printed Stuff (PDF) — this paper is not actually about choosing the right license for your 3D printable stuff (sorry about that). Instead, this paper aims to flesh out a copyright analysis for both physical objects and for the digital files that represent them, allowing you to really understand what parts of your 3D object you are—and are not—licensing. Understanding what you are licensing is key to choosing the right license. Simply put, this is because you cannot license what you do not legally control in the first place. There is no point in considering licenses that ultimately do not have the power to address whatever behavior you’re aiming to control. However, once you understand what it is you want to license, choosing the license itself is fairly straightforward. (via BoingBoing)
  4. Augmented Traffic Control — Facebook’s tool for simulating degraded network conditions.
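The toy comparison promised in link 2 (ours; the field names are simplified from the W3C Navigation and Paint Timing APIs, and all numbers are hypothetical): a page can put pixels in front of the user quickly while its load event, the anchor for traditional page-load time, fires seconds later.

    # Millisecond offsets from the start of navigation, as a RUM beacon
    # might report them. All values here are made up for illustration.
    timing = {
        "response_start": 180,     # first byte from the server
        "first_paint": 620,        # first pixels on screen (Paint Timing)
        "dom_content_loaded": 900,
        "load_event_end": 3100,    # traditional "page load time"
    }

    print(f"page load time: {timing['load_event_end']} ms")  # network-centric
    print(f"first paint:    {timing['first_paint']} ms")     # what users see

By the network-centric metric this page looks slow; by a render-oriented metric it feels fast well before the load event fires.
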
Four short links: 16 December 2014

Memory Management, Stream Processing, Robot's Google, and Emotive Words

  1. Effectively Managing Memory at Gmail Scale — how they gathered data, how JavaScript memory management works, and what they did to nail down leaks.
  2. tigon — an open-source, real-time, low-latency, high-throughput stream processing framework.
  3. Robo Brain — machine knowledge of the real world for robots. (via MIT Technology Review)
  4. The Structure and Interpretation of the Computer Science Curriculum — convincing argument for teaching intro to programming with Scheme, but not using the classic text SICP.

Update: the original fourth link to Depeche Mood led only to a README on GitHub; we’ve replaced it with a new link.

Comments: 5
Four short links: 4 December 2014

Click to Captcha, Managing Hackers, Easy Ordering, and Inside Ad Auctions

  1. One Click Captcha (Wired) — Google’s new Captcha tech is just a checkbox: “I am not a robot”. Instead of depending upon the traditional distorted word test, Google’s “reCaptcha” examines cues every user unwittingly provides: IP addresses and cookies provide evidence that the user is the same friendly human Google remembers from elsewhere on the Web. And Shet says even the tiny movements a user’s mouse makes as it hovers and approaches a checkbox can help reveal an automated bot.
  2. The Responsive Enterprise: Embracing the Hacker Way (ACM) — Letting developers wander around without clear goals in the vastness of the software universe of all computable functions is one of the major reasons why projects fail, not because of lack of process or planning. I like all of this, although at times it reads a bit as if Cory Doctorow had written a management textbook. (via Greg Linden)
  3. Pizza Hut Tests Ordering via Eye-Tracking — The digital menu shows diners a canvas of 20 toppings and builds their pizza, from one of 4,896 combinations, based on which toppings they looked at longest.
  4. How Browsers Get to Know You in Milliseconds (Andy Oram) — breaks down info exchange, data exchange, timing, even business relationships for ad auctions. Augment understanding of the user from third-party data (10 milliseconds). These third parties are the companies that accumulate information about our purchasing habits. The time allowed for them to return data is so short that they often can’t spare time for network transmission, and instead co-locate at the AppNexus server site. In fact, according to Magnusson, the founders of AppNexus created a cloud server before opening their exchange.

Signals from Velocity Europe 2014

From design processes to postmortem complexity, here are key insights from Velocity Europe 2014.

Practitioners and experts from the web operations and performance worlds came together in Barcelona, Spain, for Velocity Europe 2014. We’ve gathered highlights and insights from the event below.

Managing performance is like herding cats

Aaron Rudger, senior product marketing manager at Keynote Systems, says bridging the communication gap between IT and the marketing and business sectors is a bit like herding cats. Successful communication requires a narrative that discusses performance in the context of key business metrics, such as user engagement, abandonment, impression count, and revenue.

Read more…

Comment: 1