One practice that shapes overall single-page application performance is instrumentation of the application code. The most obvious use case is analyzing code coverage, particularly when running unit tests and functional tests. Code that never gets executed during the testing process is an accident waiting to happen. While it may be unreasonable to demand 100% coverage, having no coverage data at all inspires little confidence. These days, easy-to-use coverage tools such as Istanbul and Blanket.js are becoming widespread, and they work seamlessly with popular test frameworks such as Jasmine, Mocha, Karma, and many others.
Instrumented code can be leveraged to perform another type of analysis: run-time scalability. Performance is often measured by elapsed time, e.g. how long it takes to perform a certain operation. This stopwatch approach tells only half of the story. For example, measuring that an address book application sorts 10 contacts in 10 ms says nothing about the complexity of that sort. How will it cope with 100 contacts? 1,000 contacts? Since it is not always practical to carry out a formal analysis of the application code to determine its complexity, the workaround is to measure the empirical run-time complexity. In this example, that can be done by instrumenting and monitoring a particular part of the sorting implementation—probably the “swap two entries” function—and watching its behavior with different input sizes.
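As a minimal sketch of this idea, the snippet below instruments the swap step of a simple bubble sort (a stand-in for whatever sort the application really uses) and counts swaps for different input sizes. The `swap` and `countSwaps` names are illustrative, not from any particular library:

```javascript
// Empirical complexity probe: instrument the "swap two entries" step
// and watch how the count grows as the input size grows.
var swaps = 0;

function swap(arr, i, j) {
  swaps++; // the instrumentation hook
  var tmp = arr[i];
  arr[i] = arr[j];
  arr[j] = tmp;
}

function bubbleSort(arr) {
  for (var i = 0; i < arr.length - 1; i++) {
    for (var j = 0; j < arr.length - 1 - i; j++) {
      if (arr[j] > arr[j + 1]) swap(arr, j, j + 1);
    }
  }
  return arr;
}

// Sort a reversed array of n entries (the worst case, which needs
// exactly n*(n-1)/2 swaps) and report the swap count.
function countSwaps(n) {
  swaps = 0;
  var reversed = [];
  for (var k = n; k > 0; k--) reversed.push(k);
  bubbleSort(reversed);
  return swaps;
}

// countSwaps(10)  -> 45
// countSwaps(100) -> 4950: input grew 10x, swaps grew ~100x -- quadratic
```

The stopwatch alone might report that both runs finish in a few milliseconds; the swap counter is what exposes the quadratic growth before it bites at 10,000 contacts.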
An optimal application development workflow does not treat performance as an afterthought. Performance is a feature, and therefore its metrics must be an integral part of the workflow. This is where a multi-layer defense approach can help a lot. Even if the QA team later performs a series of thorough and intensive tests, a simple smoke test (possibly via command-line headless web automation such as PhantomJS) can reveal a mistake as early as possible. After all, what is the purpose of hammering the address book application if its sorting feature has suddenly become unbearably slow?
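A smoke test like this can be as small as a time-budget check that any headless runner can execute. The sketch below uses plain Node-style JavaScript rather than the PhantomJS API, and `sortContacts` plus the 200 ms budget are made-up stand-ins for the real operation and its agreed threshold:

```javascript
// Performance smoke test: fail fast if a critical operation blows
// its time budget, before QA ever sees the build.
function sortContacts(contacts) {
  return contacts.slice().sort(function (a, b) {
    return a.name < b.name ? -1 : a.name > b.name ? 1 : 0;
  });
}

// Run fn once and report whether it finished within budgetMs.
function withinBudget(fn, budgetMs) {
  var start = Date.now();
  fn();
  return (Date.now() - start) <= budgetMs;
}

// Build 1,000 dummy contacts and apply the (hypothetical) 200 ms budget.
var contacts = [];
for (var i = 0; i < 1000; i++) {
  contacts.push({ name: 'Contact ' + (i % 997) });
}
var ok = withinBudget(function () { sortContacts(contacts); }, 200);
```

A single wall-clock measurement is noisy, so a real smoke test would run the operation several times and budget against the median; but even this crude check catches the "suddenly unbearably slow" regression described above.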
A defensive workflow can start with the developer’s primary tools: the code editor/IDE and the revision control system. Ideally, any programmer’s editing tool should flag possible problems in the code, from simple syntax errors to more serious issues such as a leaked global variable. A modern version control system such as Git supports the concept of a pre-commit hook: a specified script that runs before the change is checked in. If that script invokes some tests, any failure can block the check-in. This proactive measure prevents bad code from sneaking into the repository.
This is one of a series of posts related to the upcoming Velocity conference in Santa Clara, CA (June 18-20). We’ll be highlighting speakers in a variety of ways, from video and email interviews to posts by the speakers themselves.