

Finding the cause of a sluggish-loading SS website


6 Posts   2276 Views


Community Member, 50 Posts

14 September 2010 at 9:20pm

Edited: 14/09/2010 9:26pm

We finished a large SilverStripe-fueled website for a client just before the summer set in and although the client is happy with the website itself, as time has progressed they have become more vocal about the one major complaint they have: the website is often extremely slow with page loads.

Before getting into it in more detail, the address of the website is:

This way, those reading this can give it a try themselves. Not every page request is slow to load, but [especially on the first load with an empty browser cache / CTRL+F5] every few pages the problem rears its ugly head once again ;-)

After enabling the Net tab in Firebug and clicking / refreshing until encountering a slow-loading page, it became apparent that the bottleneck seems to be the first GET request of the page, which is the page itself [GET]. This request sometimes takes about 4 seconds [whereas normally it should clock in at under a second], but I've also seen it last for over 10 seconds! After that request finishes, all remaining GET requests complete almost instantly.

Prior to starting this thread I've been digging through the archives of the boards using the Search function and have found numerous threads devoted to more or less the same problem [slow-loading websites] but I'm not sure if my case is identical as I've tried a number of the suggested solutions, to no avail.

Here is a list of some things I have tried:

  • Installing "SilverStripe Dawn" on the host server to monitor the website; unfortunately, we haven't succeeded in this due to not being able to run the install script as root in an SSH shell and our host won't do this for us either as it's a shared server [we're still awaiting a response from the SS Helpdesk on any alternative methods for installation]
  • Enabling the "silverstripe-cache" directory which is now functioning properly
  • Making sure the "assets" directory is CHMODed 777
  • Adding some lines to the .htaccess file, such as the ExpiresActive/ExpiresDefault lines, "Header unset ETag" and "Header unset Last-Modified"
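For reference, the directives we added look roughly like this; the one-week lifetime is an illustrative value, not the exact figure from our file:

```apache
# Sketch of the .htaccess additions described above (values illustrative).
<IfModule mod_expires.c>
    ExpiresActive On
    # Let browsers cache responses instead of re-requesting them
    ExpiresDefault "access plus 1 week"
</IfModule>

<IfModule mod_headers.c>
    # Stop ETag / Last-Modified revalidation round-trips
    Header unset ETag
    Header unset Last-Modified
</IfModule>
FileETag None
```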

We have also tried some of the debugging URL variables listed on this page, but haven't been able to draw any conclusions from them. Unless a peak memory usage of 27.5 million bytes is too extreme? [I have no clue ;-)]

None of these efforts have helped improve the speed of the website. We have other [simpler] SilverStripe websites running on the same host which don't share this problem, so the host can't be the cause of it. This LPB website is notably larger though, as it has the following modules installed: DataObjectManager, EventCalendar, GoogleSitemaps, SWFUpload, UserForms and ReCaptcha. We have also developed a few modules of our own, one extending the Members class and a few modules which use DataObjects to store records like user-generated news articles and discussions, and comments made to those.

One thing we have been wondering: is it possible that the initial GET request is sometimes slowed down so much because our Page.php is filled with "too many" functions? The file itself is 20 KB, which isn't insanely large. But because this was our first big SilverStripe project and we had some hard deadlines, whenever we couldn't get functions from child / sibling classes to work in a different class, we moved them into Page.php so we could call them via "parent::functionName()". Is it worth trying to move all of these functions into their rightful classes to see if that speeds up the load time of the website? Or won't this make any difference?

Any help would be greatly appreciated! We plan to start using SilverStripe as our "CMS for large websites" of choice from now on, so it would be nice to know how we can avoid this problem in the future.


Community Member, 473 Posts

14 September 2010 at 10:35pm

You've discovered the main problem with shared hosting: it can't handle large sites. I was in a similar situation with my personal site about a year ago (it had around 200 pages, and the home page would take far too long to load). This is because the server won't give you enough resources to generate the page in a reasonable amount of time.

The best thing here is to move to your own server, such as a VPS, though potentially a dedicated if the site grows and is popular.

Something you can do on a shared host is use StaticPublisher to generate static HTML files, which place a much smaller load on the server, or get your host to install an opcode cache, though these will only help in the interim. If you are planning on hosting large, dynamically generated sites, you will need to move off shared hosting and onto something that can provide the extra grunt you'll need.
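A minimal StaticPublisher setup for 2.4 looks something like the following sketch; the class and directory names follow the static publishing docs, so double-check them against your version:

```php
<?php
// mysite/_config.php — sketch, assuming SilverStripe 2.4's FilesystemPublisher.
// Publishes cached HTML copies of SiteTree pages into assets/cache/.
Object::add_extension("SiteTree", "FilesystemPublisher('assets/cache/', 'html')");

// In your Page class (mysite/code/Page.php), tell the publisher which
// pages to regenerate — here simply every page in the site tree.
function allPagesToCache() {
    $urls = array();
    foreach(DataObject::get("SiteTree") as $page) {
        $urls[] = $page->Link();
    }
    return $urls;
}
```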


Community Member, 271 Posts

14 September 2010 at 11:23pm

Some things that helped me to speed up the website:

Add ob_start("ob_gzhandler"); to mysite/_config.php.

Use a caching system like APC.
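For the first tip, the one-liner looks like this in mysite/_config.php; the guards are my own additions in case your host lacks zlib or already compresses output:

```php
<?php
// mysite/_config.php — compress HTML output with PHP's ob_gzhandler.
// Skip this if Apache's mod_deflate already gzips responses, or you may
// double-compress. Guarded for hosts without the zlib extension.
if (function_exists('ob_gzhandler') && !ini_get('zlib.output_compression')) {
    ob_start('ob_gzhandler');
}
```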

Use ?debug_profile to see which parts of the page use a lot of memory, and try to put those parts in partial caching blocks.

I found that using a lot of widgets and large, complex DataObjectSets can consume a lot of memory, and using partial caching can give great performance gains.
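As a sketch of what such a block looks like in a 2.4 template (the cache key and the $SidebarWidgets placeholder are illustrative, not from a real site):

```html
<%-- Cache the expensive sidebar for the default lifetime. The key name
     is arbitrary; the Aggregate argument invalidates the block whenever
     any page in the site is edited. --%>
<% cached 'sidebarwidgets', Aggregate(SiteTree).Max(LastEdited) %>
    $SidebarWidgets
<% end_cached %>
```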

Hope this helps.


Community Member, 50 Posts

15 September 2010 at 12:54am

Edited: 15/09/2010 12:57am

@simon_w: You could have a point with the host, except the other SilverStripe websites that we run are on that same host and they are performing fine. I've read through the page you provided on "StaticPublisher" - thank you for that! - and that seems like it could help a bit. However, over half of the pages on the website are actually DataObjects which are served as pages through URL rewriting, so that would complicate matters seeing as those "pages" aren't part of the SiteTree, but are DataObjects belonging to pages on the SiteTree. Also, setting up a cronjob isn't the most ideal solution. But perhaps I'll implement the StaticPublisher for the regular SiteTree pages.

@Martijn: Thanks for the tips! I'll give "ob_gzhandler" a shot as that one is simple enough, and I'll have a look at partial caching in a minute as that sounds interesting. The debug_profile URL variable is useful! What I learned from it was that a Twitter function [which loads a Twitter profile's RSS feed through DOMDocument()] consumed 93% of the page's load time [8.3 seconds]. So I emptied it and made it return false. The largest listed time is now 0.33 seconds; you'd think that would have solved the problem, but despite this profiler output, the GET request still clocks in at an average of 5 to 10 seconds...

BTW: I gather that the time listed in the profiler is the total time across all of the calls to that object, and not a figure to be multiplied by the number of calls? ;-)


Community Member, 271 Posts

15 September 2010 at 1:07am

Why not load the twitter feed with jQuery?

I assume the debug_profile figure is the total across all calls within that single page request.


Community Member, 50 Posts

15 September 2010 at 2:33am

Edited: 15/09/2010 2:35am

Loading Twitter through JavaScript is indeed the more logical choice; if I recall correctly, we chose server-side loading of Twitter back during development because it was quick and easy to do and we could easily combine it with the classes we have in SilverStripe.

I've wrapped the Twitter part of the template in a <% cached %><% end_cached %> block and that seems to drastically improve the load time of the page! Occasionally a page will still take a long time to load [but I guess that has to do with the cache expiring], but almost every load now clocks in at around 1 second for the first GET request :)

Seeing as I like the results so much, I've continued to wrap the lists of "recent content" generated by users in the sidebar(s) in "cached" blocks. I assume this is okay and isn't overdoing it?

For the data which is stored in our own database, I tried to implement the examples like so: <% cached 'database', LastEdited %>, but have yet to succeed in it. Just putting e.g. "cached 'FunctionName', Created" before the control block of 'FunctionName' doesn't do the trick; but then again, that is probably not the right way to go about it. But the documentation page on partial caching doesn't really show a full-fledged example, only bits and pieces.
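From re-reading the docs, I think the shape should be the cached block wrapping the entire control loop, roughly like this; the 'NewsArticle' class and 'NewsArticles' function names are just illustrations, and I haven't fully verified this on our site yet:

```html
<%-- Wrap the whole <% control %> loop, not just the line above it.
     'newsitems' is an arbitrary key; the second argument regenerates
     the block whenever the newest article changes. Names illustrative. --%>
<% cached 'newsitems', Aggregate(NewsArticle).Max(LastEdited) %>
    <% control NewsArticles %>
        <h3>$Title</h3>
        <p>$Summary</p>
    <% end_control %>
<% end_cached %>
```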

I also read on the documentation page that the default expiration time of the cache is 10 minutes. Does anyone know which parameter can be used to alter this (to say, an hour)? Or is it a setting which is located elsewhere in the CMS / a file?
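Partly answering my own question from the docs: in 2.4 the lifetime appears to be set through SS_Cache in _config.php rather than via a template parameter; I haven't confirmed this on our install yet:

```php
<?php
// mysite/_config.php — sketch: raise the partial-cache lifetime to an hour.
// 'cacheblock' is the backend name that <% cached %> blocks use in SS 2.4.
SS_Cache::set_cache_lifetime('cacheblock', 60 * 60);
```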