Improving SilverStripe performance

Posted by Simon Welsh on 4 February 2014

It's well known that out-of-the-box content management systems (CMSs) are often not the most performant of systems, and a vanilla installation of SilverStripe is no exception. Luckily, there are some useful tactics you can use to improve the performance of your SilverStripe websites. These involve some built-in features of SilverStripe itself, along with some infrastructural considerations. What you choose depends on your customers' needs and the level of performance you require.

Move off shared hosting

Very few shared hosting providers are able to cope with the requirements of a high-performing SilverStripe site (or any standard CMS, for that matter). This is because they can have hundreds, if not thousands, of other websites all running on the same server. That leaves very few resources for your site, and performance will suffer once traffic builds up. Simply moving to a Virtual Private Server (VPS) will greatly benefit your site. Personally, I tend to use RimuHosting (for New Zealand sites), though pretty much any decent host will do.

Opcode cache

Once you're off your shared host, the first thing to do after getting set up is to install an opcode cache. For those who don't know: when your PHP code is run, each file used is parsed and compiled into a set of opcodes before actually being executed. An opcode cache keeps these opcodes around so that files only need to be re-parsed when they change.

Personally, I'm using XCache. PHP 5.5 has a built-in cache (OPcache) that I've heard mixed reports about, and APC hasn't had a stable release since PHP 5.3, though the betas are fairly stable on PHP 5.4. They each have their own strengths and weaknesses, so pick one and use it.
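
If you go with XCache, getting started is a matter of a couple of php.ini lines. This is only a sketch; the sizes here are assumptions you should tune so that your whole codebase fits comfortably in the cache:

```ini
; php.ini sketch for XCache - sizes are illustrative, not recommendations.
extension = xcache.so
xcache.size  = 64M      ; total opcode cache size; make it bigger than your codebase
xcache.count = 4        ; number of cache splits, roughly one per CPU core
```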

For most sites, SilverStripe will now be running a lot faster and you can stop there. However, there are also ways to get things going even faster depending on your circumstances.

Profiling

Profiling is the most important tool for continued improvement. Start by profiling a small portion of your website's requests. One option is to install XHProf and set up your _ss_environment.php to randomly enable it on a percentage of requests. Analyse the resulting information to see what's slowing down requests, so you know where to concentrate your efforts.
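
One way to wire that up is a snippet like the following (a sketch only: the sampling rate, output path and file naming are my own choices, and it assumes the xhprof extension is installed):

```php
<?php
// In _ss_environment.php (loaded before SilverStripe boots).
// Profile roughly 1 in 50 requests with XHProf; does nothing
// if the extension isn't loaded.
if (extension_loaded('xhprof') && mt_rand(1, 50) === 1) {
    xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);

    register_shutdown_function(function () {
        $data = xhprof_disable();
        // Write the run somewhere the xhprof_html viewer can find it later.
        file_put_contents(
            sys_get_temp_dir() . '/' . uniqid() . '.mysite.xhprof',
            serialize($data)
        );
    });
}
```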

Non-blocking sessions

A surprising result from profiling requests on one of my sites was that PHP's session_start() function was taking up 60-90% of the execution time. This came down to the way PHP implements sessions. By default, sessions are blocking: when a request comes in using a particular session, it acquires a lock, and any other request using that session has to wait for the first one to finish. This is a problem for websites that use AJAX to load the initial page quickly and then pull in slower-to-prepare data in the background: those requests all share one session, so they end up blocking each other and are processed one at a time. The workaround is to implement a custom session handler that doesn't acquire a lock on opening, which lets the server process multiple AJAX requests concurrently. This does have a downside, however: you can no longer reliably store things in the session, because the last request to write it will overwrite anything written by requests running at the same time. Make sure you are aware of this before you decide to use this approach; it may not be appropriate in your specific situation.
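
A minimal sketch of such a handler, using PHP 5.4+'s SessionHandlerInterface (the class name and file-based storage are my own choices for illustration, not a SilverStripe API, and it deliberately has the last-write-wins behaviour described above):

```php
<?php
// A file-backed session handler that reads without taking a lock.
// Unlike PHP's default files handler, read() never flock()s the
// session file, so concurrent AJAX requests aren't serialised.
// Trade-off: the last request to write the session wins.
class NonLockingSessionHandler implements SessionHandlerInterface
{
    private $path;

    public function open($savePath, $name)
    {
        $this->path = rtrim($savePath, '/') ?: sys_get_temp_dir();
        return true;
    }

    public function read($id)
    {
        $file = $this->path . '/sess_' . $id;
        // No flock() here - this is what makes the handler non-blocking.
        return is_file($file) ? (string) file_get_contents($file) : '';
    }

    public function write($id, $data)
    {
        return file_put_contents($this->path . '/sess_' . $id, $data) !== false;
    }

    public function close()      { return true; }

    public function destroy($id)
    {
        @unlink($this->path . '/sess_' . $id);
        return true;
    }

    public function gc($maxlifetime)
    {
        foreach (glob($this->path . '/sess_*') as $file) {
            if (filemtime($file) + $maxlifetime < time()) @unlink($file);
        }
        return true;
    }
}

// Register it before anything calls session_start().
session_set_save_handler(new NonLockingSessionHandler(), true);
```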

Moving the database

The most consistent thing I found using up a lot of request time was mysqli::query(). This points to the database being the bottleneck, which is the case for the vast majority of sites.

While you can spend a lot of time tuning the queries, indexes and database, the easiest way to get a very noticeable decrease in time spent in the database is to move it to its own server and then load the entire thing into RAM. This was the single most noticeable change to performance I found. Before the move, the load average on my VPS was generally between 0.1 and 0.3. After the move, it takes some effort to get the average on either server above 0.03 with it most commonly being at 0.01.
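
For MySQL with InnoDB, "loading the database into RAM" usually amounts to sizing the buffer pool larger than the whole dataset, so that once warm, reads are served from memory. A my.cnf sketch (the sizes are assumptions; this is my reading of the general approach, not the exact configuration I used):

```ini
# my.cnf sketch - tune innodb_buffer_pool_size to exceed your data size.
[mysqld]
innodb_buffer_pool_size = 2G
# Optional: skip the OS page cache so data isn't buffered twice.
innodb_flush_method     = O_DIRECT
```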

Caching

SilverStripe supports a few different types of caching. Which ones you can use depends on your site, but any you can apply will benefit performance.

Partial caching

For marketing-style sites, the content seldom updates, yet loading those pages often takes a massive number of database queries. This is a prime use case for partial caching.

Wrapping a <% cached %> block around the entire Layout template and using the max LastEdited value of any SiteTree object as its key is a useful approach. Just this simple two line addition dropped the number of queries needed to render the home page of one of my websites from 63 down to 8.
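
In template form, that addition looks something like this (SilverStripe 3 template syntax; the cache block name 'layout' is arbitrary):

```
<% cached 'layout', List(SiteTree).max(LastEdited) %>
    ... the existing contents of the Layout template ...
<% end_cached %>
```

The key changes whenever any page is edited, so the cache invalidates itself on content updates without any extra code.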

SS_Cache

SilverStripe also provides a mechanism for caching things inside PHP. This is done through the SS_Cache class. The API docs for this class are reasonably detailed and easy to follow. The main case for using SS_Cache is when you're generating some content, or requesting it from a third party, and it doesn't matter if it isn't regenerated on every single request. You store the content in the cache and only generate or request it again when it isn't there (because it expired, you invalidated it, or it hasn't been cached yet).
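
The usual pattern looks roughly like this (a sketch against the SilverStripe 3 SS_Cache API; the cache name, key and fetchExpensiveData() are placeholders for your own code):

```php
<?php
// Get a named cache (SS_Cache is backed by Zend_Cache under the hood).
$cache = SS_Cache::factory('ExternalData');

// load() returns false when the entry is missing or expired.
$result = $cache->load('widget_feed');

if ($result === false) {
    // fetchExpensiveData() stands in for your slow generation
    // step or third-party request.
    $result = fetchExpensiveData();
    $cache->save($result, 'widget_feed');
}
```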

Static publishing

If your site has very little dynamic content, you're likely to benefit from static publishing. Static publishing works by generating all your pages as pure HTML and serving that, instead of going through SilverStripe's Controller and Model layers on every request. If your website is mostly informational, you can get a massive increase in the number of requests you can handle, as well as dramatically reduced response times.
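
Enabling it looks roughly like this in SilverStripe 3.0/3.1 (a sketch: the cache directory is an assumption, and publishing every page is the simplest policy; filter the list if some pages must stay dynamic):

```php
<?php
// mysite/_config.php - publish pages as flat HTML under assets/cache/.
Object::add_extension('SiteTree', "FilesystemPublisher('assets/cache/', 'html')");

// mysite/code/Page.php - tell the publisher which URLs to generate.
class Page_Controller extends ContentController
{
    public function allPagesToCache()
    {
        // Publish every live page's URL.
        $urls = array();
        foreach (SiteTree::get() as $page) {
            $urls[] = $page->Link();
        }
        return $urls;
    }
}
```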

Assets

While not SilverStripe specific, you can help speed up render times for your return visitors by adding cache headers to your assets (images, JavaScript and CSS). These headers tell the browser to use the version it has already downloaded and cached, rather than making a request to the server to see whether the file has changed.

If you're using Apache (with mod_expires and mod_headers enabled), adding something like the following to your .htaccess file will usually be enough.

ExpiresActive On
ExpiresByType image/gif A31556926
ExpiresByType image/jpeg A31556926
ExpiresByType image/png A31556926
ExpiresByType image/x-icon A31556926
ExpiresByType text/css A31556926
ExpiresByType application/x-javascript A31556926
ExpiresByType application/javascript A31556926
<FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
    Header append Cache-Control public
</FilesMatch>

This adds the Expires and Cache-Control headers to your static assets (CSS, JavaScript, images). The Expires header tells the browser to keep the cached version around for 31556926 seconds (365.24 days, or one year), and the Cache-Control header means that any cache (including proxies) may cache the resource.

HipHop Virtual Machine (HHVM)

This is a much more involved option and rather on the bleeding edge.

You can switch out PHP for Facebook's HipHop Virtual Machine (HHVM). HHVM is a lot faster than generic PHP, though does have tradeoffs. To use it with SilverStripe you must use the newly released 2.4.0 package or newer. 2.3.x is missing several key features that SilverStripe requires.

On the database side, PostgreSQL is supported through a third-party extension. For MySQL, there is only ext_mysql and PDO support; MySQLi support is in the works. This means that SilverStripe 3.0 and 3.1 (which both use MySQLi) currently can't use HHVM with MySQL. SilverStripe 2.4 (which uses ext_mysql) works fine, and SilverStripe 3.2 is likely to include PDO support.

The XML parsing is also lacking, which means you're unlikely to be able to use shortcodes.

If those don't bother you, and you've got the time to go through and test your site as well as set up HHVM, then go for it. The Facebook devs in the #hhvm IRC channel on Freenode are fairly responsive and can help you if you get stuck.

Summing up

These are just a handful of ways to speed up a SilverStripe-based website. Of course there are others, especially as you need to scale beyond one server, which will be covered in future blog posts. You now have several tactics you can use, either stand-alone or in combination, to help you run SilverStripe in a more demanding performance environment. Some are tried and tested, such as moving to a VPS or caching, while others look to the future of web infrastructure, with HHVM being a technology to keep a close eye on over the coming year. Remember, performance is never purely down to the CMS; it requires a wider look at your infrastructure as a whole.

Simon works for PocketRent, a SaaS product built on top of SilverStripe. As part of his duties, he is responsible for the responsiveness and speed of the site and its many background processes. Simon can be followed on App.net and Twitter, as well as generally being found in the SilverStripe IRC channel.



Comments

  • This is still a general website performance tutorial (except the Cache section).

    Performance of SilverStripe v3.x is far below that of the v2 versions; it's too slow and very sluggish.

    The conclusion: v3.x needs core changes in order to improve performance without improving environment.

    Posted by Lone Shooter, 2 months ago @LoneShooterCom

  • Some other places to get VPS's could be Amazon Web Services, Rackspace Cloud, Media Temple.

    Any other recommendations?

    Posted by Cam Findlay, 2 months ago @cameronfindlay

  • I've had to do some tuning of sites in the last year, one of which I was asked to help on was just (ahem) awful, one page for example incurring 70,000 SQL queries, a number that increased as the site got larger in terms of users. It also didn't help with that site that the database was in a different Amazon zone due to a misconfiguration :)

    Reducing the number of database queries makes a huge difference to the performance of a site. In the (unnamed) project above I ended up writing a method to generate cache keys for all necessary parts of the page in a single SQL query, some of it rather complex. I realised when doing this that a separate module might be an idea.

    As such this is an attempt at this scenario, https://github.com/gordonbanderson/weboftalent-cachekey-helper , and I have a proof of concept working locally. It does not currently deal with case specific scenarios as of yet, but centralises some common most recent LastEdited values such as sibling or child, and provides the option of configuring any arbitrary model into the caching query.

    One other thing to note is that the LastEdited field does not appear to be indexed. Given that the partial caching relies on this, a large site such as the one I was dealing with needs the LastEdited field indexed. Simple module to get around this, https://github.com/gordonbanderson/weboftalent-index-lastedited, but I should probably raise it as a bug.

    Posted by Gordon Anderson, 2 months ago @nontgor

  • What is the technique behind "... load the database into RAM ..."?

    Posted by Robert, 2 months ago

  • first post

    Posted by ss23, 2 months ago @ss2342

  • You improve my performance, you sex panther you.

    Posted by Speed me up!, 2 months ago
