Performance is often an afterthought for websites developed on Adobe Experience Manager (AEM). We inherit AEM code bases all the time, and it pains me to see that the basic features to improve a website’s load speed are either incorrectly implemented or not implemented at all. These are some of the quick wins you can work on without having to overhaul your entire site.
Please note that the recommended changes here apply to plain old AEM, and not AEM as a Cloud Service (AEMaaCS). Where applicable, I will call out how certain things are handled with AEMaaCS because some performance improvements are actually baked into the new offering.
1. Use ACS Commons Versioned ClientLibs
The ACS Commons Versioned Clientlibs feature is a must-have for anyone who cares about the performance of their website. You can read more about it in the docs, but the general idea is to automatically add a version string to the typical clientlib inclusion. For example:
This:
/etc.clientlibs/brandname/clientlibs/clientlib-dependencies.min.css
Becomes:
/etc.clientlibs/brandname/clientlibs/clientlib-dependencies.min.9617f52d00575753ffd45532da6c58f7.css
This allows you to cache the resulting CSS and JS files for a really long time on your Dispatcher, the CDN (if applicable), and the visitor's browser. The version string is automatically updated when something changes, so as soon as the HTML page which references the clientlib is requested again from Publish, it will include the new version hash.
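Under the hood, the feature is a Sling rewriter transformer, so after installing ACS Commons you register it in a rewriter pipeline under your app. Below is a minimal sketch of such a pipeline definition; the /apps/brandname path is a placeholder, and you should cross-check the node properties against the ACS Commons documentation for the version you're running:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative location: /apps/brandname/config/rewriter/versioned-clientlibs/.content.xml -->
<jcr:root xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="nt:unstructured"
    enabled="{Boolean}true"
    contentTypes="[text/html]"
    order="{Long}1"
    generatorType="htmlparser"
    serializerType="htmlwriter"
    transformerTypes="[versioned-clientlibs]"/>

Depending on your setup, you may need to keep other transformers (the default pipeline includes a link checker, for example) in the transformerTypes list.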
Note that for AEMaaCS, versioned clientlibs are built into the product and enabled by default for dev/stage/prod environments. The ACS Commons plugin is no longer needed.
2. Leverage Sling Dynamic Includes for Uncacheable Content
In a perfect world, your entire site would be 100 percent cacheable on the Dispatcher. That way, your Publish servers only have to do work when the page actually gets published again with some changes.
However, sometimes you may have use cases where a component has to dynamically render and cannot be cached on the Dispatcher. That’s where Sling Dynamic Includes (SDI) come in because they allow you to poke holes in your cache for specific components.
For example, we recently had a scenario where we needed to dynamically create a time-sensitive signature for an S3 upload tool. Rather than not caching the entire page, we opted to just not cache the upload component.
The way this works technically is that SDI adds a Servlet Filter which replaces all the configured components with Server Side Include tags. For example, for Apache this would look something like this:
<!--#include virtual="/content/brand-name/en/jcr:content/carousel.nocache.html" -->
This HTML comment is then part of the cached page on the Dispatcher and for every request that comes in, Apache will make a request to the Publish server to dynamically generate the markup.
The advantage here is that instead of requiring AEM/Sling to regenerate an entire page, it’s just individual components. Especially for large pages with lots of components or heavy processing, that can make a real difference.
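Which components get swapped out for include tags is driven by a factory OSGi configuration for SDI. The sketch below shows roughly what that could look like as a .cfg.json file (e.g. org.apache.sling.dynamicinclude.Configuration~s3-upload.cfg.json); the content path and resource type are made-up placeholders, and the property names should be verified against the SDI documentation for the version you deploy:

{
  "include-filter.config.enabled": true,
  "include-filter.config.path": "/content/brand-name",
  "include-filter.config.resource-types": [
    "brand-name/components/content/s3-upload"
  ],
  "include-filter.config.include-type": "SSI",
  "include-filter.config.selector": "nocache"
}

Also remember that for SSI the web server has to actually process the include tags (mod_include in Apache), otherwise the comment is served to the browser as-is.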
One caveat I’ve recently run into with SDI is that, by default, it doesn’t run if there are query parameters in the request URL. However, the recently released version 3.2.0 finally addresses this problem by allowing us to disable the query parameter check altogether. Previously, you had to list out each individual query parameter you wanted to ignore, which isn’t maintainable.
3. Ignore Most URL Parameters in the Dispatcher
By default, the Dispatcher is configured so that adding a query string (e.g. ?param=value) bypasses the cache. This means that all that great traffic you’re getting from your email and social campaigns, with parameters like utm_campaign, utm_source, etc., is hitting your Publish server directly and not leveraging your cache.
This behavior is controlled by the ignoreUrlParams setting in your dispatcher.any. For most websites I’ve encountered, the setting should be configured exactly the opposite to Adobe’s default:
/ignoreUrlParams {
/0001 { /glob "*" /type "allow" }
}
The "allow" instead of the default "deny" here ensures that all requests, no matter what query parameters they come in with, are fully cached and served up by the Dispatcher. If you need to, you can still single out individual parameters that should not be ignored (so requests containing them bypass the cache) like so:
/0002 { /glob "q" /type "deny" }
Please note: Before you add this change, make sure you don’t have any server-side component functionality relying on query parameters because it will no longer work properly.
4. Stop Using Query Parameters for Component Java Logic
This recommendation is an extension of the previous one in the sense that the Dispatcher does not allow caching files with query strings. In other words, you won’t see files in your cache like news.html?page=1. Instead, you’d just see news.html.
If you have components like the news feed example above which rely on query parameters for paging, then you’re missing out on some cache performance. To mitigate the problem, look into using selectors (e.g. news.1.html) or suffixes (e.g. news.html/1.html), which are both perfectly cacheable in the Dispatcher. Note that depending on how your components are developed, this tip may not necessarily be a "quick" win.
While we’re talking about components with paging or "load more" functionality, it’s also worth pointing out that these should not have to make full-page requests. There’s no reason to re-render an entire page with header and footer just to load the next five press releases. You can use selectors on the component (e.g. /path/to/page/jcr:content/path/to/component.2.html) combined with a JavaScript fetch call to grab the next page of content for just that component.
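As a rough sketch of the client-side half, the "load more" handler could look something like this; the component path, selector scheme, and CSS class are invented for illustration:

// Fetch the next "page" of a news component using a numeric selector
// instead of a query string, so the Dispatcher can cache each page.
async function loadMore(pageNumber) {
  // Hypothetical component path for illustration purposes
  const componentPath = '/content/brand-name/us/en/news/jcr:content/root/newsfeed';
  const response = await fetch(`${componentPath}.${pageNumber}.html`);
  if (!response.ok) {
    throw new Error(`Failed to load page ${pageNumber}: ${response.status}`);
  }
  const markup = await response.text();
  // Append the returned component markup to the existing list
  document.querySelector('.newsfeed__items').insertAdjacentHTML('beforeend', markup);
}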
5. Understand and Set the Dispatcher statfileslevel/invalidate Correctly
There are two other Dispatcher settings that are often misunderstood or ignored: "statfileslevel" and "invalidate." I’ve linked to the docs here so I won’t go into all the details, but at a high level, these two properties together define which content should get invalidated when content gets published.
Adobe’s default configurations have an invalidate rule for *.html and a statfileslevel of 2. That means whenever something gets published, ALL files ending with .html in the hierarchy of the page you’ve published, up to folder level 2, will be invalidated. From the docs:
"Automatic invalidation is typically used for HTML pages. HTML pages often contain links to other pages, making it difficult to determine whether a content update affects a page. To make sure that all relevant pages are invalidated when content is updated, automatically invalidate all HTML pages."
Note that while HTML pages are the main use case, this also applies to XML files and JSON files, which are often configured to be invalidated alongside the HTML. For example, it may make sense to invalidate a sitemap.xml when pages are published.
Let’s look at an example:
You’re publishing a page /content/brand-name/us/en/products/product-name.
Folder level 2 here means /content/brand-name.
Therefore, if you’re publishing the product-name example page, you’re invalidating the entire cache under /content/brand-name!
This includes all your country and regional content which has nothing to do with the page you’ve just published:
/content/brand-name/ca
/content/brand-name/fr
/content/brand-name/...
This is clearly not ideal for caching performance, so look at your content structure and then set the statfileslevel accordingly.
So How Should You Go About Setting it Correctly?
Start by determining how much linking you are doing between pages of the site, because based on Adobe’s definition above, that’s the main reason to automatically invalidate HTML pages. If the individual sections of your site are pretty self-contained and don’t link to each other, a good statfileslevel may be the level that those sections are at. For example:
/content/brand-name/us/en/products/product-line-1/.../...
/content/brand-name/us/en/products/product-line-2/.../...
If these pages and subpages for the product lines don’t link to each other, you could set the statfileslevel all the way to 6. That way, if you’re making a lot of updates to one of the product lines, it doesn’t affect the cache for the other products.
However, keep in mind that this level is defined globally, so if you have other sections of your site with more inter-linking higher up in the tree, you may have to play it safer and set the level to 4 instead. This would still give you a significant benefit over the default level 2 in that the cache for one language site is not cleared just because a page in another one was changed.
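Put together, the relevant part of a dispatcher.any might look roughly like the sketch below; the level and the globs are only an example to adjust to your own content tree:

/statfileslevel "4"

/invalidate {
  /0000 { /glob "*" /type "deny" }
  /0001 { /glob "*.html" /type "allow" }
  /0002 { /glob "*.xml" /type "allow" }
}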
For more complex use cases that cannot be handled by these properties, take a look at ACS Commons Dispatcher Flush Rules.
6. Leverage a CDN
Content Delivery Networks (CDN) aim to reduce latency for downloading resources from your website by putting them physically closer to the end-user. For example, someone visiting a US-hosted website from Germany would see slower load times than someone in the US visiting the same site.
The same is true even within countries. If you’re on the West Coast and the website you’re visiting is hosted on a server on the West Coast you’d likely have a slightly better experience than someone on the East Coast. That’s where CDNs come in because they cache your website on a server that’s physically closer to the user and therefore has less latency.
I’ve found it easiest to configure the CDN so that it respects the Cache-Control headers from the AEM origin servers (i.e. AEM Publish and Dispatcher/Apache). That way, the application development team can make configuration changes in their normal fashion without having to do config changes elsewhere, especially since the CDN setup is usually owned by another team.
One easy and quick way to configure cache control headers in Apache is by using the "ExpiresByType" directive. It allows you to set specific expiration times per MIME type so that you can have your versioned CSS and JS files expire far in the future while keeping your HTML relatively fresh. Here’s an example of what the config could look like:
<IfModule mod_expires.c>
ExpiresActive on
# CSS
ExpiresByType text/css "access plus 1 year"
# JavaScript
ExpiresByType application/javascript "access plus 1 year"
ExpiresByType application/x-javascript "access plus 1 year"
ExpiresByType text/javascript "access plus 1 year"
# HTML
ExpiresByType text/html "access plus 5 minutes"
</IfModule>
This will make it so that Apache sets caching headers like the following along with the response:
cache-control: max-age=30019850, public
expires: Tue, 08 Jun 2021 10:40:14 GMT
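If you want to go a step further for the versioned clientlibs specifically, you can layer mod_headers on top of this. The sketch below assumes clientlibs are served from /etc.clientlibs and that the ExpiresByType rules above are already in place; treat the pattern and max-age as starting points, not a definitive setup:

<IfModule mod_headers.c>
  # Versioned clientlib URLs change whenever their content changes,
  # so the cached response can safely be treated as immutable.
  <LocationMatch "^/etc\.clientlibs/.*\.(css|js)$">
    Header set Cache-Control "max-age=31536000, public, immutable"
  </LocationMatch>
</IfModule>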
7. Leverage Browser Cache
Even if you don’t have a CDN configured for your website, you can still leverage the Cache Control directives above. That way, your visitor’s browser will at least be able to cache the responses, and the next time they visit your site they’ll have a faster response time.
If you check the Chrome network tab for CSS or JS requests, you should see entries like "Status Code: 200 (from memory cache)" or "(from disk cache)," which indicate that the browser didn’t even try to request the resource from the AEM infrastructure because it realized it still had it cached.
In some other instances—for resources without a version string—you may see a "304 Not Modified" response which means that the browser had to make a request to check if there’s a new version but then discovered that it still had the latest version. In other words, the 200 (cache) is what you’d want to strive for, with the 304 being slightly less ideal, and the plain old 200 being the slowest.
Bonus Tip: Check Your Cache Hit Ratio on the Dispatcher
If you’re using one of the latest Dispatcher versions, you can check the dispatcher.log files to see how you’re doing in terms of cache hit ratio. It will look something like this:
[Fri Jun 26 20:43:07 2020] [I] [pid 3285 (tid 140514619750144)] Current cache hit ratio: 95.69 %
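If you want to keep an eye on the trend without scrolling through the whole file, a quick grep works; the log path below is a typical default and may differ in your setup:

# Show the five most recent cache hit ratio entries
grep "cache hit ratio" /var/log/apache2/dispatcher.log | tail -n 5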
If you’re embarking on a performance improvement journey, this is especially helpful in tracking your progress along the way.
Summary
Performance is a huge topic for websites and is becoming increasingly important. For sites built on AEM, there are quite a few facets to take into consideration, and every implementation has its own set of constraints. However, I hope the tips I’ve outlined above can give you some quick successes and inspire you to think more about performance optimizations down the road.
If you’re looking for a deeper dive, review the Dispatcher documentation as well as Adobe’s doc on optimizing the Dispatcher cache. Adobe also published a set of Dispatcher experiments on GitHub that help illustrate some of the features I mentioned and let you test them out for yourself.