
Russell Michell describes himself, in the context of work, as an absolute web all-rounder, as comfortable on the command line as he is with the front-end. Originally from Cambridge, UK, he has performed his craft for startups, academia and agencies in the UK, Australia and New Zealand in the years since his first OE back in '99. Russell is a web developer at SilverStripe in Wellington, having joined the company in 2011.

When web developers read the words "on", "fly" and "the" (usually in a different order), various tech solutions to as-yet-undefined problems present themselves, perhaps involving near-real-time data processing, AJAX UI interactions, or some kind of HTML5 WebSocket or Node application.

But not quite in this case.

My position is that the best place to apply the thinking and techniques oriented around application and site performance optimisation is "on the fly", during development. The opposite of this might be understood as the more traditional optimisation project: a poorly performing app or site undergoes a major overhaul; performance is evaluated and tools are deployed; code is refactored, cut down or removed; tests are run and performance is re-evaluated. Rinse and repeat as time and budget allow.

For a company or an individual, possessing a legacy project that requires such a process is not necessarily a crime. Given the rapid pace of browser development in recent years and the ever-increasing range of technologies that can be deployed to public-facing projects, it is wholly understandable, if not entirely forgivable, that feature creep and some bloat occur without due attention being paid to response times and other performance metrics. Unfortunately, it is precisely this lack of attention that incurs the technical debt which makes the single big optimisation project necessary in the first place.

I have worked on several such optimisation projects in the past, and it is that experience I draw upon as I develop features today. I can't help but see potential bottlenecks in logic, or in the way some feature or other is assembled.

Arguably the first port of call in any such optimisation project is to reduce HTTP requests back to the server. The ease with which AJAX interactions can be coded with modern JavaScript libraries means it's all too easy to overlook the server load they generate: just break out your browser debugger in network mode and watch all those XHR requests fly past. The same goes for server-side redirects, which do little to impress non-sighted web users, and, under load, the web servers themselves.

So as I code I'm already thinking about whether the functionality afforded by a particular AJAX controller request, for example, might be combined with others to kill several birds with one projectile.
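To make the idea concrete, here is a minimal sketch of combining several would-be AJAX calls into a single round trip. The batch endpoint name and the transport function are assumptions for illustration, standing in for whatever your framework provides:

```javascript
// Coalesce every request made in the same tick into one call to a
// single (hypothetical) batch endpoint, instead of one XHR each.
function createBatcher(transport) {
  let queue = [];
  let timer = null;

  return function request(resource) {
    return new Promise((resolve) => {
      queue.push({ resource, resolve });
      if (!timer) {
        timer = setTimeout(() => {
          const batch = queue;
          queue = [];
          timer = null;
          // One request carrying every queued resource name.
          transport('/api/batch', batch.map((item) => item.resource))
            .then((results) => {
              batch.forEach((item, i) => item.resolve(results[i]));
            });
        }, 0);
      }
    });
  };
}

// Fake transport standing in for fetch()/XMLHttpRequest, so the
// sketch runs without a server.
const calls = [];
const fakeTransport = (url, resources) => {
  calls.push(resources);
  return Promise.resolve(resources.map((r) => `data for ${r}`));
};

const request = createBatcher(fakeTransport);
Promise.all([request('news'), request('weather'), request('user')])
  .then(() => {
    console.log(calls.length); // one round trip, not three
  });
```

Three logical requests, one actual hit on the server; the trade-off is a tick of added latency and a slightly chunkier endpoint.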

Depending on a team's development methodology, developers may argue that the full feature set needs to be built first and refined afterwards as time and budget allow. While I agree that dealing with everything in one hit sounds logical, it is unfortunately also this thinking that can lead to the refining step never actually being taken, and to the public having 1MB+ homepages foisted upon them.

With HTTP I'm able to use standard browser-based debugging tools to monitor activity. However, if a feature or application relies upon disk I/O and needs to function well under load, I'll likely want to reduce or preclude as much of that activity as possible, given that writing to a disk platter is much slower than writing to memory. I recall an old Facebook engineering blog entry in which the author described reducing what sounded like an already minuscule disk I/O time to an even tinier number. Given the sheer number of requests each node in the Facebook CDN likely receives, these minute optimisations make sense when scaled up.

Note: I'm not really advocating the use of I/O analysis tools during development (though they would do no harm); rather, I'm suggesting you remain aware of an application or feature's reads and writes, and of how you might code them differently.

Many development frameworks provide developers with APIs to combine and automatically minify the assets requested on a page-by-page basis, and it behooves us as developers to be aware of these and make use of them. This is especially relevant given modern requirements for ever more complex applications, with increasingly numerous CSS and JavaScript files hanging off each page request. In terms of the "on-the-fly" approach, I try not to assume that my chosen framework's automatic use of this kind of functionality means I no longer have to think about what I'm writing, how I'm writing it, and the context(s) under which these assets might be requested.
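For the sake of illustration, here is a toy version of what such a combine-and-minify step does: several files become one request's worth of output. The "minifier" here only strips comments and blank space; real tools (and the frameworks wrapping them) do far more, so treat this purely as a sketch of the idea:

```javascript
// Concatenate N asset files and crudely minify the result,
// so the browser makes one request instead of N.
function combineAndMinify(files) {
  const combined = files.join('\n');
  return combined
    .replace(/\/\*[\s\S]*?\*\//g, '') // drop block comments
    .replace(/^\s*\/\/.*$/gm, '')     // drop line comments
    .split('\n')
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
    .join('\n');
}

// Two hypothetical assets that would otherwise be separate requests.
const assets = [
  '/* carousel widget */\nfunction slide() { next(); }',
  '// form validation\nfunction validate(f) { return !!f.value; }',
];

const bundle = combineAndMinify(assets);
console.log(bundle);
```

In practice you would lean on your framework's own API for this rather than rolling your own; the value of knowing roughly how it works is in noticing when it is not being applied.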

Less is becoming more in this world, and that is as true for web development as it is for anything else, from energy and carbon footprints to localised food production and transportation.

Takeaways: 

  1. Think about performance while you work, not just afterwards. Not everyone's ISP offers the same performance as your office's T1 connection.
  2. Start by monitoring how your application or feature performs, using some of the free tools listed below.
  3. Reduce HTTP requests back to the server (AJAX, redirects etc.).
  4. Study all AJAX requests, especially those run just for an initial page load (i.e. content that could be pre-rendered as HTML by the server before the page is served).
  5. Fish and chips for tea.
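Takeaway 4 can be sketched without a full server: instead of shipping an empty shell and fetching the content with a follow-up XHR, fold the data into the HTML before the page leaves the server. The template syntax and data shapes here are hypothetical, standing in for whatever your framework's view layer provides:

```javascript
// Fill {{key}} placeholders with server-side data, so the initial
// view needs no XHR at all.
function renderPage(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in data ? String(data[key]) : ''
  );
}

const shell = '<h1>{{title}}</h1><ul id="news">{{items}}</ul>';
const page = renderPage(shell, {
  title: 'Latest news',
  items: '<li>Item one</li><li>Item two</li>',
});
console.log(page);
```

The AJAX machinery can still be kept for subsequent updates; the saving is the round trip on first load, which is exactly where users judge a page's speed.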

Useful links:

  1. Blog post and WDCNZ optimization talk
  2. Steve Souders, author and speaker on all optimization-related topics: @souders
  3. Google PageSpeed browser addon for Firefox and Chrome
  4. Yahoo! YSlow browser addon
  5. Facebook on Disk I/O