The Joys of Static Memory JavaScript

Velocity 2013 Speaker Series

You wake up one morning to discover your team has gotten a dreaded alert: your web application is performing badly. You dig through your code, but don’t see anything that stands out, until you open up Chrome’s memory performance tools, and see this:


One of your co-workers chuckles, because they realize that you’ve got a memory-related performance problem.

In the memory graph above, the saw-tooth pattern points to a potentially critical performance problem. As your memory usage grows, the chart area in the timeline capture grows along with it. Each sudden dip marks a point where the garbage collector has run and reclaimed your unreferenced memory objects.


In a graph like this, you can see that there are lots of garbage collection events occurring, which can be harmful to your web app’s performance.

It’s useful to know that a garbage collection event occurs once a set of heuristics determines that there are enough inactive objects that a collection pass would be beneficial. As such, the key to reducing the time that garbage collection takes away from your application lies in eliminating as many cases of excessive object creation and release as you can.
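To make the churn concrete, here is a hypothetical pair of update functions (the names updateChurny and updateInPlace are illustrative, not from the post): the first allocates a fresh object per element on every call, producing exactly the short-lived garbage that feeds the saw-tooth graph, while the second mutates existing objects in place so nothing is handed to the collector.

```javascript
// Churn-heavy pattern: every call creates one new object per element,
// all of which become garbage as soon as the caller is done with them.
function updateChurny(positions) {
  return positions.map(function (p) {
    return { x: p.x + 1, y: p.y + 1 }; // fresh allocation per element, per call
  });
}

// Reuse pattern: mutate the existing objects in place, so no new
// allocations occur and nothing is left behind for the garbage collector.
function updateInPlace(positions) {
  for (var i = 0; i < positions.length; i++) {
    positions[i].x += 1;
    positions[i].y += 1;
  }
  return positions;
}
```

Run inside a per-frame loop, the first version allocates (and discards) an object per element per frame; the second allocates nothing after startup.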


Static memory JavaScript is a technique that involves pre-allocating, at the start of your app, all the memory that will be needed for its lifetime, and managing that memory during execution as objects are no longer needed. We can approach this goal in a few simple steps:

    Instrument your application to determine the maximum number of required live memory objects (per type) across a range of usage scenarios.
    Re-implement your code to pre-allocate that maximum amount, and then manually fetch and release objects from that allocation rather than going to main memory.
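The first step above can be sketched with a small instrumentation helper. Everything here is hypothetical scaffolding (gAllocStats, trackAllocate, and trackRelease are not from the post): you call the trackers wherever you create and release objects, then read off the peak live count per type after exercising your usage scenarios.

```javascript
// Hypothetical instrumentation: track live and peak object counts per type,
// so profiling runs reveal the maximum you'll need to pre-allocate.
var gAllocStats = {};

function trackAllocate(type) {
  var s = gAllocStats[type] || (gAllocStats[type] = { live: 0, peak: 0 });
  s.live++;
  if (s.live > s.peak) s.peak = s.live; // record the high-water mark
}

function trackRelease(type) {
  gAllocStats[type].live--;
}

// Example run: simulate a scenario that allocates and releases "entity" objects.
trackAllocate("entity");
trackAllocate("entity");
trackRelease("entity");
trackAllocate("entity");
// gAllocStats.entity.peak now holds the ceiling to pre-allocate for this scenario.
```

Repeating this over several representative sessions, and keeping the largest peak seen per type, gives you the numbers needed for step two.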

The most effective way to manage the lifetime of your objects is by using a data structure known as an object pool. In basic terms, object pooling is the process of retaining a set of unused objects that share a type. When you need a new object for your code, rather than allocating a new one from the system memory heap, you instead recycle one of the unused objects from the pool. Once the external code is done with the object, rather than releasing it to main memory, it is returned to the pool. Because the object is never dereferenced (aka deleted) from code, it won’t be garbage collected. Utilizing object pools puts control of memory back in the hands of the programmer, reducing the influence of the garbage collector on performance.
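A minimal object pool along these lines might look like the following. This is an illustrative sketch, not code from the post: the ObjectPool constructor, its free-list representation, and the entity factory are all assumptions, chosen to match the allocate/free usage shown below.

```javascript
// Illustrative object pool: recycles unused objects of one type instead of
// allocating from and releasing to the system heap.
function ObjectPool(createFn) {
  this.createFn = createFn; // factory, used only when the pool is empty
  this.freeList = [];       // currently-unused objects awaiting reuse
}

ObjectPool.prototype.allocate = function () {
  // Prefer a recycled object; fall back to creating one if the pool is dry.
  return this.freeList.length > 0 ? this.freeList.pop() : this.createFn();
};

ObjectPool.prototype.free = function (obj) {
  // Return the object to the pool. Since the pool keeps a reference,
  // the object is never eligible for garbage collection.
  this.freeList.push(obj);
};

// Usage: a pool of entity objects.
var gEntityObjectPool = new ObjectPool(function () {
  return { pos: { x: 0, y: 0 } };
});

var e = gEntityObjectPool.allocate();
e.pos.x = 215;
e.pos.y = 88;
gEntityObjectPool.free(e); // recycled, not collected
```

Because freed objects keep whatever state they last held, production pools typically also reset an object's fields on free (or on allocate) so stale data can't leak between uses.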

Since an application maintains a heterogeneous set of object types, proper usage of object pools requires one pool per type that experiences high churn during your application’s runtime:

var newEntity = gEntityObjectPool.allocate();
newEntity.pos = {x: 215, y: 88};

// ..... do some stuff with the object that we need to do

// when we’re done, return the object to the pool rather than
// dereferencing it and leaving it for the garbage collector
gEntityObjectPool.free(newEntity);
newEntity = null; // drop our reference; the pool still holds the object

For the large majority of applications, you’ll eventually hit some leveling off in terms of needing to allocate new objects. Over multiple runs of your application, you should be able to get a better feel for what this upper limit is, and you’ll be able to pre-allocate that number of objects at the start of your application.
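Combining that measured ceiling with a pool, startup pre-allocation might be sketched like this. The names (MAX_ENTITIES, gEntityFreeList, allocateEntity, freeEntity) and the ceiling of 256 are hypothetical; in practice the number comes from your own instrumentation runs.

```javascript
// Sketch: pre-fill a free list with the observed maximum at startup,
// so no heap allocations happen during steady-state execution.
var MAX_ENTITIES = 256; // hypothetical ceiling measured during profiling

var gEntityFreeList = [];
for (var i = 0; i < MAX_ENTITIES; i++) {
  gEntityFreeList.push({ pos: { x: 0, y: 0 }, active: false });
}

function allocateEntity() {
  // Running dry means the measured ceiling was too low; returning null
  // here surfaces that loudly instead of silently churning new objects.
  return gEntityFreeList.length > 0 ? gEntityFreeList.pop() : null;
}

function freeEntity(entity) {
  entity.active = false;       // reset state so stale data can't leak
  gEntityFreeList.push(entity); // back into the free list for reuse
}
```

After the startup loop runs, the application's entity memory footprint is fixed: allocate and free just move objects between "in use" and the free list.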

By combining pre-allocation and object pooling, you can regain control of memory management in your web app, regaining critical milliseconds that would otherwise be lost to garbage collection events.

In my talk at Velocity New York this October, I’ll dive deeper into these static memory JavaScript techniques, and show how they can be applied to the rest of your web application as well. If you’re looking for a quick fix, make sure you stop by for my Lightning Demo on Memory Profiling with Chrome Dev Tools.

This is one of a series of posts related to the upcoming Velocity conference in New York City (Oct 14-16). We hope to see you there.
