Russell Michell describes himself, in the context of work, as an absolute web all-rounder, as comfortable on the command line as he is with the front end. Originally from Cambridge, UK, he has performed his craft for startups, academia and agencies in the UK, Australia and New Zealand in the years since his first OE back in '99. Russell is a web developer at SilverStripe in Wellington, having joined the company in 2011.
Usually when web developers read the words "on", "the" and "fly" (typically in a different order), various tech solutions to as-yet-undefined problems present themselves, perhaps involving near real-time data processing, AJAX UI interactions, or some kind of HTML5 WebSocket or Node application.
But not quite in this case.
My position is that the best place to apply the thinking and techniques orientated around application and site performance optimisation is "on the fly", during development. The opposite of this might be understood as the more traditional optimisation project: a poorly performing app or site undergoes a major overhaul; performance is evaluated and tools are deployed; code is refactored, cut down or removed; tests are run and performance is then re-evaluated - rinse and repeat as time and budget allow.
As a company or an individual, possessing a legacy project that requires such a process is not necessarily a crime. Given the rapid pace of browser development in recent years and the ever-increasing range of technologies that can be deployed to public-facing projects, it is wholly understandable - if not forgivable - that feature creep and some bloat occur without due attention being paid to response times and other performance metrics. Unfortunately, it is precisely this lack of attention that incurs the technical debt which ensures a dedicated optimisation project becomes necessary in the first place.
I have worked on several such optimisation projects in the past, and it's that experience I draw upon as I develop features in the projects I work on today: I can't help but see potential bottlenecks in logic, or in the way some feature or other is assembled.
So as I code I'm already thinking about whether the functionality afforded by a particular AJAX controller request, for example, might be combined with others to kill several birds with one projectile.
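As a loose illustration of that idea, several fine-grained AJAX calls fired within a few milliseconds of each other can be coalesced into a single round trip. Everything here is hypothetical - the `/api/batch` endpoint, the payload shape and the `RequestBatcher` class are a sketch, not any particular framework's API:

```typescript
// A fetch-like function is injected so the batching logic stays testable.
type Fetcher = (url: string, body: string) => Promise<string>;

class RequestBatcher {
  private pending: { key: string; resolve: (v: unknown) => void }[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(private fetcher: Fetcher, private delayMs = 10) {}

  // Queue a request; everything queued within `delayMs` shares one HTTP call.
  get(key: string): Promise<unknown> {
    return new Promise((resolve) => {
      this.pending.push({ key, resolve });
      if (!this.timer) {
        this.timer = setTimeout(() => this.flush(), this.delayMs);
      }
    });
  }

  private async flush(): Promise<void> {
    const batch = this.pending;
    this.pending = [];
    this.timer = null;
    // One round trip carries every queued key; the server answers all at once.
    const raw = await this.fetcher(
      '/api/batch',
      JSON.stringify(batch.map((p) => p.key))
    );
    const results = JSON.parse(raw) as Record<string, unknown>;
    for (const p of batch) p.resolve(results[p.key]);
  }
}
```

The design choice is simply that callers keep their natural one-value-per-call interface while the wire traffic collapses to a single request - the "several birds, one projectile" of the paragraph above.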
Depending on a team's development methodology, developers may argue that the full feature set needs to be built first and refined afterwards as time and budget allow. While I agree that this sounds logical - deal to everything in one hit - it does seem unfortunate that it is also this thinking that can lead to the refining step never actually being taken, and to the public having 1MB+ homepages foisted upon them.
With HTTP I'm able to use standard browser-based debugging tools to monitor activity. However, if a feature or application relies upon disk I/O and needs to function well under load, I'll likely want to reduce or preclude as much of this activity as possible, given that writing to a disk platter is much slower than writing to memory. I recall an old Facebook engineering blog entry in which the author described reducing what sounded like an already minuscule disk I/O time to an even tinier number. Given the sheer number of requests each node in the Facebook CDN likely receives, these minute optimisations make sense when scaled up.
Note: I'm not really advocating the use of I/O analysis tools at the application development stage (though they would do no harm); rather, I'm suggesting you remain aware of an application or feature's reads and writes, and of how you might code them differently.
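One common way to code those writes differently is to buffer them: instead of hitting the disk once per log line, accumulate lines in memory and flush them as a single write. The sketch below is illustrative only - the `BufferedLog` class, its threshold and the injected `writer` are assumptions, standing in for whatever real write path (a log file, say) an application has:

```typescript
// The actual disk write is injected, so the buffering logic is testable
// without touching the filesystem.
type Writer = (chunk: string) => void;

class BufferedLog {
  private buffer: string[] = [];
  writes = 0; // counts actual (simulated) disk writes, for illustration

  constructor(private writer: Writer, private threshold = 100) {}

  // Cheap: appends to memory; only every `threshold` lines costs a write.
  log(line: string): void {
    this.buffer.push(line);
    if (this.buffer.length >= this.threshold) this.flush();
  }

  // One write carries many lines; call this on shutdown to drain the rest.
  flush(): void {
    if (this.buffer.length === 0) return;
    this.writer(this.buffer.join('\n') + '\n');
    this.writes += 1;
    this.buffer = [];
  }
}
```

With a threshold of 100, logging 250 lines costs three writes rather than 250 - the kind of reduction that, as in the Facebook anecdote above, only looks trivial until it is multiplied by request volume. The trade-off, of course, is that buffered lines are lost if the process dies before a flush.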
Less is increasingly more in this world, and that is as true for web development as it is for anything else, from energy use and carbon footprints to localised food production and transportation.