
May 5, 2020

AEM & Akamai Integration



To better understand the integration between both technologies, let's first have a quick overview of what Akamai is.

WHAT IS AKAMAI?
Akamai is a CDN (Content Delivery Network) that has servers all over the world, delivering the content of a website and caching the parts of the content that don't need to be constantly updated.

Beyond AEM, as a CDN, Akamai brings some remarkable advantages:
  1. Faster content delivery: accessing Akamai is not the same as accessing the website's own servers; with Akamai, the content is cached and available on servers closer to the user.
  2. Better load balancing: Akamai applies an Internet-centric approach to global load balancing and real-time failover, designed to ensure high availability and responsiveness to user requests.
  3. Security: it puts another wall between the user and your website.
  4. Improved user experience: because of the previous points, the user has a better experience when requesting content.
INTEGRATION WITH AEM
Currently, there are no out-of-the-box tools that help with the integration of Akamai and AEM. Akamai can be customized, so the integration also depends on how you implement your website.

There are a few options that can be used to integrate Akamai with AEM (which doesn't mean there aren't others):

A first option is to let Akamai decide what is and isn't cached based on URL rules (that you can configure). In this model you point the DNS for your website to Akamai, and it decides whether the request is subject to caching or not. Requests not subject to caching then pass through to your systems (the AEM dispatchers decide whether to serve from their own cache or from a publish instance).

However, most clients use a TTL approach to flushing the Akamai cache rather than trying to invalidate it, because the benefits aren't worth the costs, especially if you're using the dispatcher.

The TTL (Time To Live) tells the CDN server, in this case Akamai, how long to wait before checking for a changed/updated version of the file, and tells a web browser how long it should keep a file cached locally before requesting it again. Generally, all pages of a site have the same TTL, say 10 minutes, but each site can also configure paths that are excluded from the cache or that use a different TTL.

For example, pages that are constantly updated will have a lower TTL (less than 10 minutes), while pages that are only updated once a day or once a week can have a higher TTL (3 hours, for example).
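If those TTLs are driven by response headers from your own servers rather than set directly in the Akamai configuration (an assumption for this illustration), the two cases above simply translate into different max-age values:

Cache-Control: max-age=600 (pages updated constantly: 10 minutes)
Cache-Control: max-age=10800 (pages updated once a day or week: 3 hours)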

So the way it works is:
  1. A page is activated on AEM.
  2. The AEM dispatcher invalidates the cache for this page.
  3. Once the TTL expires, the page is invalidated on the Akamai servers.
It is hard to find the best approach to integrate AEM & Akamai, because Akamai works with "time" and AEM with manual invalidation of the content. This means that, for example, when a page is activated, it is invalidated on the dispatcher as well, but Akamai keeps serving the older version of the page until the TTL expires.

Dispatcher configuration is also involved, because a page invalidation can invalidate more content due to the statfile (we will talk about this in other posts). This "extra" content invalidated on the dispatcher won't be invalidated on Akamai, so its cache would need to be cleared manually or through an API.

We keep working on finding a way to handle it…

This is an image that represents the integration of AEM & Akamai:



PROS, CONS AND POINTS TO TAKE INTO ACCOUNT
Some of the benefits of having Akamai work with AEM: beyond all the advantages that a CDN can give us, AEM will be less stressed, and the publish instances/dispatchers will handle less load and be more stable.

On the other hand, users may see new content with some delay, and there can be misalignments between Akamai servers due to the TTL (the higher the TTL, the higher the chance of misalignment between servers).

Some additional points to take into account:
  1. Akamai also caches URLs with query string parameters → useful for search pages with query strings (use a low TTL).
  2. Statfile level → when a page is invalidated, other pages are invalidated on the dispatcher as well, but not on Akamai. To see the invalidation reflected on Akamai, either wait for the TTL or invalidate manually or through an API.
DO YOU REALLY NEED AKAMAI?
Unless you have a lot of money and want to spend it anyway, it really depends on the traffic of your website.

Using a CDN for a website focused only on a local audience, say a single country, is of limited value. On the other hand, if the website expects visitors from all over the world, a CDN can be a good idea, improving speed, traffic handling, security and user experience.

AUTHORS
Francesco Crispiatico, Jonas Magdaleno, Marco Pasini



By aem4beginner

Optimizing AEM Site Caches

Overview
Optimizing caching within your AEM architecture is one of the quickest ways to get a big performance boost. This article focuses on explaining how to optimize the various caches that are available within an AEM architecture.

AEM Architecture and Caching
In all AEM architectures, the user encounters multiple cache layers when visiting your site. There are 4 cache layers to consider in a standard AEM architecture. This includes the Web Browser, CDN, Dispatcher, and AEM instances.


Browser Caching
The first level of cache a user encounters on a repeated visit of your site is their own browser. Caching at the browser level is commonly done via the Cache-Control: max-age=... response header. The max-age setting tells the browser how many seconds it should cache the file for before attempting to "revalidate" or request it from the site again. This concept of cache max-age is commonly referred to as "Cache Expiration" or TTL ("Time to Live").

There are various options (or "directives") within the Cache-Control header that affect how caching occurs. Here are some common directives:
1. private - the private directive in the Cache-Control header ensures that the file is only cached in the browser, not in intermediate caches such as CDNs. A practical use for this directive would be a page that includes personalized / user-specific content.

Example usage:
Cache-Control: max-age=300, private
2. s-maxage - the s-maxage directive in the Cache-Control header allows you to set a different TTL for shared caches such as CDNs. When this value is set, the browser uses the max-age value while shared caches respect the s-maxage setting instead.

Example usage:
Cache-Control: max-age=600, s-maxage=300

All modern browsers support the Cache-Control header; however, some old, deprecated headers from HTTP/1.0, namely Expires and Pragma, may still have an effect on caching. If you don't need to support very old browsers, do not send those response headers.
In addition to caching, revalidation is an important concept as well. Revalidation relies on the Last-Modified (response) / If-Modified-Since (request) headers, and the ETag (response) / If-None-Match (request) headers.
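For example, a typical revalidation exchange looks like this (the values are illustrative):

First response: Last-Modified: Tue, 05 May 2020 10:00:00 GMT
Conditional request after expiry: If-Modified-Since: Tue, 05 May 2020 10:00:00 GMT
Response when the file is unchanged: HTTP/1.1 304 Not Modified (no body is sent, and the cached copy is reused)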

Caution:
Browser testing:
When testing caching in Google Chrome, if you are testing over https and you have a self-signed certificate, nothing will get cached. Chrome won't cache responses or perform revalidation when there is an untrusted or invalid certificate.

Note on dispatcher:
There is an issue with AEM Dispatcher v4.2.3 and earlier versions where /enableTTL only honors the max-age directive. This means that even when the private or s-maxage directives are set, the dispatcher would still cache the file if max-age is set. This issue is resolved in Dispatcher 4.2.4 and later versions.
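For reference, TTL-based expiration is enabled in the /cache section of the dispatcher farm configuration (a minimal dispatcher.any sketch):

/enableTTL "1"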

CDN Caching
A CDN or "Content Delivery Network", is a distributed network of web servers designed to cache and serve content from the location nearest to your users. This reduces network hops and distance from the user's computer to your content, thereby reducing "Round Trip Time" (RTT). RTT is the time it takes for the browser to send a request to your site and receive a response. Competition in the CDN provider space has made CDNs very cost effective. This makes the decision of using a CDN for your site an easy one. If you are not using a CDN yet, then you should definitely incorporate a CDN in your site.

There are many CDN providers, each offering different features and configurations.

How CDN Caching Works
CDNs cache content following rules similar to browsers. They rely on the Cache-Control HTTP response header and generally fall back to the Expires header if no Cache-Control header is found.

Most CDNs provide some way to trigger a manual flush of the cache. In many cases, cache flushes have some delay (e.g. 15 minutes) before propagating to all edge servers that hold your files.

Optimizing CDN Usage
There are a few things to do to ensure you are caching files optimally in the CDN:
1. Use a CDN that supports the stale-while-revalidate and stale-if-error directives in the Cache-Control header (see the example header after this list).
  • stale-while-revalidate - this directive tells the CDN to serve the old (already cached) version of the file while it retrieves a new one after the cache file has expired.
  • stale-if-error - similarly, this directive tells the CDN to serve the old (already cached) version of the file when the origin responds with an error during revalidation.
2. GZip compress responses for all file types that are not pre-compressed.
  1. You should do this at the dispatcher / web server level. This ensures that you reduce the number of bytes sent to the CDN. CDNs commonly charge by bytes transferred, so compressing responses reduces cost.
  2. Enable GZip compression on the Dispatcher level:
  • Apache - use mod_deflate. Be careful with mod_deflate's handling of the Vary header. In certain cases, the Vary header can cause the CDN and browser to skip caching entirely.
  • Microsoft IIS - use Dynamic Compression.
  • Do not allow gzip compression of large files or files that are already compressed. Note that most image and video formats are already precompressed. Compressing them on the fly at the web server level comes at a very high cost to performance.
  • On Apache, this can be done via AddOutputFilterByType directive:
  • AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/javascript
  • On IIS, this can be controlled via the <dynamicTypes> configuration.
3. If your CDN provider supports Edge-side Includes (ESI) then leverage this feature.
  • AEM components can be broken up using ESI. To do this, use Apache Sling Dynamic Includes or implement a custom solution.
  • It is useful where you have fairly static pages but you are serving more dynamic content in a few parts of the page. In these cases you are essentially breaking the page up into multiple CDN files. This way you can cache different parts of the page for different periods of time.
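As a reference for point 1 above, a response header that lets the CDN serve stale content while it refreshes, or when the origin errors, could look like this (the values are only illustrative):

Cache-Control: max-age=300, stale-while-revalidate=60, stale-if-error=86400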
Popular CDN Providers
Here's a list of some popular CDN providers:
  1. Microsoft Azure CDN
  2. Amazon Cloudfront
  3. Akamai
  4. Google Cloud CDN
  5. Rackspace CDN
  6. Max CDN
  7. Cloudflare
  8. Fastly
  9. F5 Networks CDN
  10. ... there are many more, each having different features.
Caution:
Be careful with the Vary response header. In certain cases, Vary can cause both the CDN and browser to skip caching entirely. As a general rule of thumb, avoid adding Vary except for Vary: Accept-Encoding (applied only when the response is gzip compressed). In other words, if you need to "vary" the output of a response, use a different URL instead.

For example, if you have different versions of the HTML for mobile versus desktop, then use different URLs. This will allow CDNs and browsers to cache more effectively.

AEM Dispatcher Caching
If the CDN cache has expired, then the request would reach the AEM dispatcher cache. At this level, there are many things which can be done to optimize caching.

Since this is a larger topic, see this article for details on how to optimize the dispatcher cache.

AEM Publish Instances
At the AEM level, there are a few things that should be done to optimize the various cache layers:
1. Set the following HTTP response headers, which are not set by AEM by default (a minimal sketch of the custom-code option is shown after this list).
  • Cache-Control: max-age=... - To set this header, ACS Commons - Dispatcher TTL could be used, or you could implement custom code to set it.
  • Last-Modified - If the page content is relatively static, such as an article, then you could set its Last-Modified header to the cq:lastModified date/time (the last time the article was modified). However, if the page is dynamic, with JCR query results contained in component content, then it would be best to set it to the current date/time.
  • ETag - If you decide to use this instead of Last-Modified, you could write a ReplicationEventListener that listens for page activations and generates an md5 hash of the page content. This could be set as a property on the jcr:content node of the page on the author instance; when the page is replicated, it would be sent to the publish instances. For pages with relatively static content this could work well, but if the page is somewhat dynamic or references a lot of content then the ETag would have to be omitted (or calculated on the fly).
2. If the site has personalized / dynamic content:
  • Use Apache Sling Dynamic Includes to break up the content so that different parts of the page can be cached for different periods of time.
  • The caching can be done with the following technologies: Edge-side Includes (ESI) on the CDN, Server-side Includes (SSI) on the web server, or Asynchronous JavaScript and XML (AJAX) in the browser.
  • Components on the page can be broken up into separate requests which can be cached for different periods of time. Parts of the page that are relatively static could be cached for much longer periods of time.
  • Consider using the HTTP Cache feature of ACS Commons.

3. Optimize client libraries.
  • Enable minification on client libraries.
  • If files in a client library are already pre-minified, then disable minification on that client library. Edit the cq:ClientLibraryFolder node: set property jsProcessor of type String[] with value min:none, and set property cssProcessor of type String[] with value min:none.
  • Embed client libraries to reduce the number of js and css files requested.
  • Implement versioned-clientlibs from ACS Commons to allow the CDN and Dispatcher to cache js and css files for longer periods of time.
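For illustration, here is a minimal sketch of the custom-code option from point 1: a Sling servlet filter that adds the Cache-Control and Last-Modified headers to page responses. The TTL value, the content-path check and the class/package names are assumptions made for this example; ACS Commons Dispatcher TTL is the ready-made alternative mentioned above.

package com.myproject.bundle.core.filters;

import java.io.IOException;
import java.util.Calendar;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.SlingHttpServletResponse;
import org.osgi.service.component.annotations.Component;

import com.day.cq.wcm.api.Page;

/**
 * Hypothetical sketch: adds Cache-Control and Last-Modified headers to page
 * responses so that the browser, the CDN and a TTL-enabled dispatcher can
 * expire content on their own.
 */
@Component(
        service = Filter.class,
        property = {
                "sling.filter.scope=REQUEST",
                "service.ranking:Integer=-700"
        })
public class CacheHeadersFilter implements Filter {

    /** Assumed TTL of 10 minutes, expressed in seconds. */
    private static final long MAX_AGE_SECONDS = 600;

    /** Assumed content root of the project. */
    private static final String CONTENT_ROOT = "/content/myproject";

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        final SlingHttpServletRequest slingRequest = (SlingHttpServletRequest) request;
        final SlingHttpServletResponse slingResponse = (SlingHttpServletResponse) response;

        // Only decorate page requests below the assumed content root.
        Page page = slingRequest.getResource().adaptTo(Page.class);
        if (page != null && slingRequest.getResource().getPath().startsWith(CONTENT_ROOT)) {
            // TTL honored by browsers, the CDN and the dispatcher's /enableTTL.
            slingResponse.setHeader("Cache-Control", "max-age=" + MAX_AGE_SECONDS);

            // Use cq:lastModified for relatively static pages; dynamic pages
            // would use the current time instead (see the notes above).
            Calendar lastModified = page.getLastModified();
            if (lastModified != null) {
                slingResponse.setDateHeader("Last-Modified", lastModified.getTimeInMillis());
            }
        }
        chain.doFilter(request, response);
    }

    @Override
    public void init(FilterConfig filterConfig) {
        // nothing to initialize
    }

    @Override
    public void destroy() {
        // nothing to clean up
    }
}

With headers like these in place, the browser, the CDN and an /enableTTL-based dispatcher can expire pages without manual flushes.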



By aem4beginner

AEM Solution: How to Clear dispatcher cache by myself?

In any web application, caching has significant value for the overall performance of the application. But at the same time, developers like to test their changes and be able to clear the cache frequently. Caching of AEM content happens in two places: the web server (e.g. Apache) and the CDN (e.g. Akamai) server. AEM comes with the dispatcher module within the web server to handle caching requests coming from the AEM author environment.

Basically, whenever a content author activates a content page path from the AEM author environment, an HTTP request goes to the AEM publish server, which triggers another HTTP push request to the dispatcher module (via the web server). This dispatcher push request purges the cache of the requested path and changes the timestamp of the statfile. A detailed explanation of how the dispatcher works isn't needed here, so we can skip that part.

AEM cache clearing depends entirely on the content paths and the type of the pages. For example, if you want to clear the cache of an AEM page or image, you can just publish that same page or image from the AEM author and the cache gets refreshed, provided the dispatcher module is configured on the publishing server.

Problems in cache clearing
It may seem easy to clear the cache of AEM pages and assets, but in the following cases it is problematic. Some of them are listed here.

Clearing the cache of a minified JavaScript file, where the path of the cached file and the client library do not match at all.
Clearing the cache of a content request served by a servlet path that does not exist in the real content hierarchy, e.g. /bin/myapp/servlet/abc.html.
Clearing the cache of a vanity URL.
Clearing the cache of a URL whose path differs from the content hierarchy and is resolved through AEM mappings. For instance, the live URL is /myapp/abc/xyz.html but the content path is /content/myapp/en/1/abc/xyz.html.
 

What are the traditional solutions?
Ask someone who has access to log in to the web server and clear the cache manually. But here is the catch: how many times can you ask for it while you are testing your JavaScript code?
Run a curl command to clear the cache (see the example below), but for this you need to know the web server's dispatcher IP/domain, etc. And if there are multiple web servers, you have to clear the cache of one server at a time.
Run a Jenkins job which clears all the cache. This could be problematic if you do it in stage or prod.
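For reference, the curl-based flush mentioned above typically looks like the following (the host and port are placeholders, and the dispatcher only accepts the request if your client IP is allowed in its invalidation configuration):

curl -X POST -H "CQ-Action: Activate" -H "CQ-Handle: /etc/designs/myapp/core.mini.js" -H "Content-Length: 0" http://dispatcher-host:80/dispatcher/invalidate.cache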
Easiest Solution
All the above problems are not that bad, and there are solutions to them. However, as a developer, I would like a quick and easy way to clear the cache by myself. The AEM dispatcher module purges the cache based on the path, and using this feature you can clear the cache of any file, path or asset. Follow the steps below to clear the cache without anyone's help.

Let's take the example of purging the cache of your minified JavaScript file. The path of the file is /etc/designs/myapp/core.mini.js.

Create a file with the same name and path in the author instance.
Activate this file.
The dispatcher will update the cached file and start serving your dummy file as the new version.
De-activate the same dummy file right away. This is required because your dummy file will not have the correct content or code, so make sure you de-activate it again.
Once the file is de-activated, client library and AEM path resolution will happen as normal.
You can keep the dummy file in the author instance for future use, or delete it. The above solution works with any other path or file, be it JSON, XML, HTML, etc. The only condition is that the path you want to clear from the cache has to be created in the author first.
Final Thoughts

Source: https://followcybersecurity.com/2019/03/09/aem-solution-how-to-clear-dispatcher-cache-by-myself/



By aem4beginner

Akamai Cache Purge in AEM through Java code


REQUIREMENT: When a content author publishes/activates/replicates an item on the AEM author, that item should be immediately available on the live website, i.e. without having to explicitly request a cache clear on the dispatcher and Akamai layers.

How to invalidate the dispatcher cache as soon as the item gets published?

We can achieve this by creating a Dispatcher Flush Agent. You can refer to the Adobe documentation: https://docs.adobe.com/content/help/en/experience-manager-dispatcher/using/configuring/page-invalidate.html or you can follow the blog below: https://www.cqtutorial.com/courses/cq-admin/cq-admin-lessons/cq-dispatcher/cq-dispatcher-flush-agent-set-up

How to invalidate/clear the Akamai cache through Java code?


For this, we need to use the Fast Purge API v3. This API helps purge content from your edge servers by URL, ARL, content provider (CP) code, or cache tag. In our case, we focus on purging content by URL only, since we need to purge the Akamai cache specifically for the pages that are published by the author.

Before beginning this process, you'll need a set of credentials that are generated through the Akamai Luna Control Center. This is done by browsing to the Configure -> Manage APIs -> Identity Management section of the Luna portal. Luna will provide you with the four pieces of credential and authorization information that you need to copy down: Client Token, Client Secret, Base URL, and Access Token. You can keep these values in an OSGi config file to retrieve them in our Akamai Purge Service. A sample OSGi config is shown below:
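A minimal sketch of such a config (the property names are the ones read by the transport handler code below; the values are placeholders for the credentials generated in Luna):

akamaiHost="akab-xxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx.luna.akamaiapis.net"
akamaiClientToken="akab-xxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx"
akamaiClientSecret="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx="
akamaiAccessToken="akab-xxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx"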




The following sample code demonstrates a custom replication agent that purges Akamai CDN (Content Delivery Network) cached content.

There are three aspects to this implementation: the transport handler, the content builder, and the replication agent.

1. Transport Handler: TransportHandler implementations control the communication with the destination server and determine when to report back a positive ReplicationResult, completing the activation, and when to report back a negative ReplicationResult, returning the activation to the queue. The transport handler service determines which transport handler to use based on the overridden canHandle method. A replication agent configured with a "Transport URI" that begins with http:// or https:// will be handled by AEM's HTTP transport handler. To customize this, we need to create a unique custom URL protocol/scheme and have our transport handler's canHandle method watch for Transport URIs that start with that scheme.
In this example, the transport handler is activated on the akamai:// scheme and uses Akamai's Fast Purge API. (A few OOTB/default AEM replication agents use the http://, static://, tnt://, s7delivery:// and repo:// schemes.)

package com.myproject.bundle.core.services.impl;

import java.io.IOException;
import java.net.URI;
import java.nio.charset.Charset;

import org.apache.commons.io.IOUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.http.HttpStatus;
import org.apache.http.entity.ContentType;
import org.apache.jackrabbit.util.Base64;
import org.apache.sling.api.resource.ValueMap;
import org.apache.sling.commons.json.JSONArray;
import org.apache.sling.commons.json.JSONException;
import org.apache.sling.commons.json.JSONObject;
import org.apache.sling.commons.osgi.PropertiesUtil;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.akamai.edgegrid.signer.ClientCredential;
import com.akamai.edgegrid.signer.exceptions.RequestSigningException;
import com.akamai.edgegrid.signer.googlehttpclient.GoogleHttpClientEdgeGridRequestSigner;
import com.day.cq.replication.AgentConfig;
import com.day.cq.replication.ReplicationActionType;
import com.day.cq.replication.ReplicationException;
import com.day.cq.replication.ReplicationResult;
import com.day.cq.replication.ReplicationTransaction;
import com.day.cq.replication.TransportContext;
import com.day.cq.replication.TransportHandler;
import com.myproject.bundle.core.configuration.BaseConfigurationService;
import com.myproject.bundle.core.constants.MyConstants;
import com.myproject.bundle.core.search.services.MyProjectConfigurationService;
import com.google.api.client.http.ByteArrayContent;
import com.google.api.client.http.GenericUrl;
import com.google.api.client.http.HttpHeaders;
import com.google.api.client.http.HttpRequest;
import com.google.api.client.http.HttpRequestFactory;
import com.google.api.client.http.HttpResponse;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.http.apache.ApacheHttpTransport;

/**
 * Transport handler to send test and purge requests to Akamai and handle
 * responses. The handler sets up basic authentication with the user/pass from
 * the replication agent's transport config and sends a GET request as a test
 * and POST as purge request. A valid test response is 200 while a valid purge
 * response is 201.
 * 
 * The transport handler is triggered by setting your replication agent's
 * transport URL's protocol to "akamai://".
 *
 * The transport handler builds the POST request body in accordance with
 * Akamai's Fast Purge REST API {@link https://developer.akamai.com/api/core_features/fast_purge/v3.html}
 * using the replication agent properties. 
 */
@Component(service = TransportHandler.class, immediate = true)
public class AkamaiTransportHandler implements TransportHandler {

/** The project configuration service (provides the content and DAM root paths). */
@Reference
MyProjectConfigurationService myProjectConfigurationService;
@Reference
BaseConfigurationService baseConfigurationService;
/**Logger Instantiation for Akamai Transport Handler*/
private static final Logger LOGGER = LoggerFactory.getLogger(AkamaiTransportHandler.class);
    
    /** Protocol for replication agent transport URI that triggers this transport handler. */
    private static final String AKAMAI_PROTOCOL = "akamai://";

    /**Config Pid for Akamai Flush*/
    private static final String AKAMAI_FLUSH_CONFIG_PID = "com.myproject.bundle.core.configuration.AkamaiFlushConfiguration";
    
    /** Replication agent type property name. Valid values are "url" and "cpcode". */
    private static final String PROPERTY_AKAMAI_TYPE = "type";

    /** Replication agent multifield CP Code property name.*/
    private static final String PROPERTY_AKAMAI_CP_CODES = "4321xxx";

    /** Replication agent domain property name. Valid values are "staging" and "production". */
    private static final String PROPERTY_AKAMAI_DOMAIN = "domain";

    /** Replication agent action property name. Valid values are "remove" and "invalidate". */
    private static final String PROPERTY_AKAMAI_ACTION = "action";

    /** Replication agent default type value */
    private static final String PROPERTY_AKAMAI_TYPE_DEFAULT = "url";

    /** Replication agent default domain value */
    private static final String PROPERTY_AKAMAI_DOMAIN_DEFAULT = "production";

    /** Replication agent default action value */
    private static final String PROPERTY_AKAMAI_ACTION_DEFAULT = "invalidate";
    
    /**Transport URI*/
    private static final String TRANSPORT_URI = "transportUri";

    /**
     * {@inheritDoc}
     */
    @Override
    public boolean canHandle(AgentConfig config) {
        final String transportURI = config.getTransportURI();

        return (transportURI != null) && (transportURI.toLowerCase().startsWith(AKAMAI_PROTOCOL));
    }

    /**
     * {@inheritDoc}
     */
    @Override
    public ReplicationResult deliver(TransportContext ctx, ReplicationTransaction tx)
            throws ReplicationException {
        final ReplicationActionType replicationType = tx.getAction().getType();

        if (replicationType == ReplicationActionType.TEST) {
            return ReplicationResult.OK;
        } else if (replicationType == ReplicationActionType.ACTIVATE
                || replicationType == ReplicationActionType.DEACTIVATE
                || replicationType == ReplicationActionType.DELETE) {
            LOGGER.info("Replication type in Akamai handler: {}", replicationType);
            String resourcePath = tx.getAction().getPath();
            // Only purge Akamai for the project-specific page root and DAM root paths.
            if (StringUtils.startsWith(resourcePath, myProjectConfigurationService.getContentpath())
                    || StringUtils.startsWith(resourcePath, myProjectConfigurationService.getAssetpath())) {
                LOGGER.info("Calling activate in Akamai for path: {}", resourcePath);
                try {
                    return doActivate(ctx, tx);
                } catch (RequestSigningException e) {
                    LOGGER.error("Signing ceremony unsuccessful", e);
                    throw new ReplicationException("Signing ceremony unsuccessful", e);
                } catch (IOException e) {
                    LOGGER.error("IO Exception in deliver", e);
                    throw new ReplicationException("IO Exception in deliver", e);
                }
            }
            return ReplicationResult.OK;
        } else {
            throw new ReplicationException("Replication action type " + replicationType + " not supported.");
        }
    }

    private String getTransportURI(TransportContext ctx) throws IOException {
        LOGGER.info("Entering getTransportURI method.");
        final ValueMap properties = ctx.getConfig().getProperties();
        final String AKAMAI_HOST = baseConfigurationService.getPropValueFromConfiguration(AKAMAI_FLUSH_CONFIG_PID, "akamaiHost");
        final String domain = PropertiesUtil.toString(properties.get(PROPERTY_AKAMAI_DOMAIN), PROPERTY_AKAMAI_DOMAIN_DEFAULT);
        final String action = PropertiesUtil.toString(properties.get(PROPERTY_AKAMAI_ACTION), PROPERTY_AKAMAI_ACTION_DEFAULT);
        final String type = PropertiesUtil.toString(properties.get(PROPERTY_AKAMAI_TYPE), PROPERTY_AKAMAI_TYPE_DEFAULT);
        // Default Fast Purge endpoint built from the OSGi-configured host:
        // https://<akamaiHost>/ccu/v3/<action>/<type>/<domain>
        String defaultTransportUri = MyConstants.HTTPS + AKAMAI_HOST + "/ccu/v3/"
                + action + MyConstants.BACK_SLASH + type + MyConstants.BACK_SLASH + domain;
        // Use the default endpoint when the agent has no Transport URI configured
        // (defaulting to an empty string avoids appending the /ccu/v3 path twice).
        String transporturi = PropertiesUtil.toString(properties.get(TRANSPORT_URI), StringUtils.EMPTY);
        if (StringUtils.isEmpty(transporturi)) {
            return defaultTransportUri;
        }
        if (transporturi.startsWith(AKAMAI_PROTOCOL)) {
            transporturi = transporturi.replace(AKAMAI_PROTOCOL, MyConstants.HTTPS);
        }
        transporturi = transporturi + "/ccu/v3/"
                + action + MyConstants.BACK_SLASH + type + MyConstants.BACK_SLASH + domain;
        LOGGER.info("Exiting getTransportURI method of Akamai Transport Handler: {}", transporturi);
        return transporturi;
    }

    /**
     * Send purge request to Akamai via a POST request
     *
     * Akamai will respond with a 201 HTTP status code if the purge request was
     * successfully submitted.
     *
     * @param ctx Transport Context
     * @param tx Replication Transaction
     * @return ReplicationResult OK if 201 response from Akamai
     * @throws ReplicationException
     * @throws RequestSigningException 
     * @throws IOException 
     * @throws JSONException 
     */
    private ReplicationResult doActivate(TransportContext ctx, ReplicationTransaction tx)
            throws ReplicationException, RequestSigningException, IOException {
    LOGGER.info("Inside doActivate of Akamai");
        final String AKAMAI_ACCESS_TOKEN = baseConfigurationService.getPropValueFromConfiguration(AKAMAI_FLUSH_CONFIG_PID, "akamaiAccessToken");
        final String AKAMAI_CLIENT_TOKEN = baseConfigurationService.getPropValueFromConfiguration(AKAMAI_FLUSH_CONFIG_PID, "akamaiClientToken");
        final String AKAMAI_CLIENT_SECRET = baseConfigurationService.getPropValueFromConfiguration(AKAMAI_FLUSH_CONFIG_PID, "akamaiClientSecret");
        final String AKAMAI_HOST = baseConfigurationService.getPropValueFromConfiguration(AKAMAI_FLUSH_CONFIG_PID, "akamaiHost");
        
        ClientCredential clientCredential = ClientCredential.builder().accessToken(AKAMAI_ACCESS_TOKEN).
        clientToken(AKAMAI_CLIENT_TOKEN).clientSecret(AKAMAI_CLIENT_SECRET).host(AKAMAI_HOST).build();
   
        HttpTransport httpTransport = new ApacheHttpTransport();
        HttpRequestFactory httpRequestFactory = httpTransport.createRequestFactory();

        JSONObject jsonObject = createPostBody(ctx, tx);        

        URI uri = URI.create(getTransportURI(ctx));

        HttpRequest request = httpRequestFactory.buildPostRequest(new GenericUrl(uri), ByteArrayContent.fromString("application/json", jsonObject.toString()));
        final HttpResponse response = sendRequest(request, ctx, clientCredential);
        
        if (response != null) {
            final int statusCode = response.getStatusCode();
            LOGGER.info("Response code received: {}", statusCode);
            if (statusCode == HttpStatus.SC_CREATED) {
                return ReplicationResult.OK;
            }
        }
        return new ReplicationResult(false, 0, "Replication failed");
    }

    /**
     * Build preemptive basic authentication headers and send the request.
     *
     * @param request The request to send to Akamai
     * @param ctx The TransportContext containing the username and password
     * @return JSONObject The HTTP response from Akamai
     * @throws ReplicationException if a request could not be sent
     * @throws RequestSigningException 
     */
    private HttpResponse sendRequest(final HttpRequest request, final TransportContext ctx,
            ClientCredential clientCredential)
            throws ReplicationException, RequestSigningException {

    LOGGER.info("Inside Send Request method of Akamai");
    final String auth = ctx.getConfig().getTransportUser() + ":" + ctx.getConfig().getTransportPassword();
        final String encodedAuth = Base64.encode(auth);
        
    HttpHeaders httpHeaders = new HttpHeaders();
    httpHeaders.setAuthorization("Basic " + encodedAuth);
         httpHeaders.setContentType(ContentType.APPLICATION_JSON.getMimeType());
         request.setHeaders(httpHeaders);

         GoogleHttpClientEdgeGridRequestSigner requestSigner = new GoogleHttpClientEdgeGridRequestSigner(clientCredential);
         requestSigner.sign(request);
         
         HttpResponse response;

         try {
             response = request.execute();
         } catch (IOException e) {
        LOGGER.error("IO Exception in sendRequest");
             throw new ReplicationException("Could not send replication request.", e);
         }
         LOGGER.info("Successfully executed Send Request for Akamai");
         return response;
    }

    /**
     * Build the Akamai purge request body based on the replication agent
     * settings and append it to the POST request.
     *
     * @param request The HTTP POST request to append the request body
     * @param ctx TransportContext
     * @param tx ReplicationTransaction
     * @throws ReplicationException if errors building the request body 
     */
    private JSONObject createPostBody(final TransportContext ctx,
            final ReplicationTransaction tx) throws ReplicationException {
    final ValueMap properties = ctx.getConfig().getProperties();
        final String type = PropertiesUtil.toString(properties.get(PROPERTY_AKAMAI_TYPE), PROPERTY_AKAMAI_TYPE_DEFAULT);
    JSONObject json = new JSONObject();
        JSONArray purgeObjects = null;
        
        if (type.equals(PROPERTY_AKAMAI_TYPE_DEFAULT)) {
        try {
                String content = IOUtils.toString(tx.getContent().getInputStream(), Charset.defaultCharset());

                if (StringUtils.isNotBlank(content)) {
                LOGGER.info("Content of Akamai is:\n {}", content);
                    purgeObjects = new JSONArray(content);
                }
            } catch (JSONException | IOException e) {
            throw new ReplicationException("Could not retrieve content from content builder", e);
            }
        }
        if (null != purgeObjects && purgeObjects.length() > 0) {
            try {
                json.put("objects", purgeObjects);
            } catch (JSONException e) {
            throw new ReplicationException("Could not build purge request content", e);
            }
        } else {
            throw new ReplicationException("No CP codes or pages to purge");
        }
        return json;
    }
}

I needed to add the following Import-Package entry and Maven dependencies:
<Import-Package>

        javax.annotation;version=0.0.0,
</Import-Package>
     
        <dependency>
    <groupId>com.akamai.edgegrid</groupId>
    <artifactId>edgegrid-signer-google-http-client</artifactId>
    <version>4.0.0</version>
</dependency>
<dependency>
    <groupId>com.akamai.edgegrid</groupId>
    <artifactId>edgegrid-signer-core</artifactId>
    <version>4.0.0</version>
</dependency>
<dependency>
    <groupId>com.google.http-client</groupId>
    <artifactId>google-http-client</artifactId>
    <version>1.22.0</version>
</dependency>
<dependency>
    <groupId>io.opencensus</groupId>
    <artifactId>opencensus-api</artifactId>
    <version>0.24.0</version>
</dependency>
<dependency>
    <groupId>io.opencensus</groupId>
    <artifactId>opencensus-contrib-http-util</artifactId>
    <version>0.24.0</version>
</dependency>
<dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-context</artifactId>
    <version>1.24.0</version>
        </dependency>

2. Content Builder: ContentBuilder implementations build the body of the replication request. Implementations of the ContentBuilder interface end up as serialization options in the replication agent configuration dialog. While creating our custom replication agent, we will need to select the Serialization Type option as "My Project Akamai Purge Content Builder".

package com.myproject.bundle.core.services.impl;


import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

import javax.jcr.Session;

import org.apache.commons.lang3.StringUtils;
import org.apache.sling.api.resource.LoginException;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceResolverFactory;
import org.apache.sling.commons.json.JSONArray;
import org.apache.sling.jcr.resource.JcrResourceConstants;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.day.cq.commons.Externalizer;
import com.day.cq.replication.ContentBuilder;
import com.day.cq.replication.ReplicationAction;
import com.day.cq.replication.ReplicationContent;
import com.day.cq.replication.ReplicationContentFactory;
import com.day.cq.replication.ReplicationException;
import com.day.cq.wcm.api.Page;
import com.day.cq.wcm.api.PageManager;
import com.myproject.bundle.core.constants.FsbpConstants;
import com.myproject.bundle.core.search.services.MyProjectConfigurationService;

/**
 * Akamai content builder to create replication content containing a JSON array
 * of URLs for Akamai to purge through the Akamai Transport Handler. This class
 * takes the internal resource path and converts it to external URLs as well as
 * adding vanity URLs and pages that may Sling include the activated resource.
 */
@Component(service = ContentBuilder.class,
   property = {"name=myProjectAkamai", "value=akamai"},
   immediate = true)
public class AkamaiContentBuilder implements ContentBuilder {

    @Reference
    private ResourceResolverFactory resolverFactory;

    /** The name of the replication agent */
    public static final String NAME = "myProjectAkamai";

    /**
     * The serialization type as it will display in the replication
     * agent edit dialog selection field.
     */
    public static final String TITLE = "My Project Akamai Purge Content Builder";
    
    @Reference
MyProjectConfigurationService myProjectConfigurationService;
    
    private static final Logger LOG = LoggerFactory.getLogger(AkamaiContentBuilder.class);

    /**
     * {@inheritDoc}
     */
    @Override
    public ReplicationContent create(Session session, ReplicationAction action,
            ReplicationContentFactory factory) throws ReplicationException {
        return create(session, action, factory, null);
    }

    /**
     * Create the replication content containing the public facing URLs for
     * Akamai to purge.
     */
    @Override
    public ReplicationContent create(Session session, ReplicationAction action,
            ReplicationContentFactory factory, Map<String, Object> parameters)
            throws ReplicationException {

        final String path = action.getPath();

        ResourceResolver resolver = null;
        JSONArray jsonArray = new JSONArray();

        if (StringUtils.isNotBlank(path)) {
        HashMap<String, Object> sessionMap = new HashMap<>();
            sessionMap.put(JcrResourceConstants.AUTHENTICATION_INFO_SESSION, session);
            try {
            resolver = resolverFactory.getResourceResolver(sessionMap);
            
        if (StringUtils.contains(path, myProjectConfigurationService.getContentpath())) { // my project's specific page root path - /content/myproject
        jsonArray = createPageContent(resolver, jsonArray, path);
        } else if (StringUtils.contains(path, myProjectConfigurationService.getAssetpath())) { // my project's specific dam root path - /content/dam/myproject
        jsonArray = createAssetContent(resolver, jsonArray, path);
        }
            } catch (LoginException e) {
            LOG.error("Could not retrieve Page Manager", e);
            }
            return createContent(factory, jsonArray);
        }

        return ReplicationContent.VOID;
    }

    private JSONArray createPageContent(ResourceResolver resolver, JSONArray jsonArray, String path) {
        PageManager pageManager = resolver.adaptTo(PageManager.class);
        if (null != pageManager) {
            Page purgedPage = pageManager.getContainingPage(path);
            Externalizer externalizer = resolver.adaptTo(Externalizer.class);
            if (null != purgedPage && null != externalizer) {
                final String link = externalizer.externalLink(resolver, "firestonebpco", path);
                jsonArray.put(link);
                LOG.info("Page link added: {}", link);
                final String vanityUrl = purgedPage.getVanityUrl();
                if (StringUtils.isNotBlank(vanityUrl)) {
                    jsonArray.put(vanityUrl);
                    LOG.info("Vanity URL added: {}", vanityUrl);
                }
            } else {
                jsonArray.put(path);
                LOG.info("Page Resource path added: {}", path);
            }
        }
        return jsonArray;
    }

    private JSONArray createAssetContent(ResourceResolver resolver, JSONArray jsonArray, String path) {
        Resource purgedAssetResource = resolver.getResource(path);
        Externalizer externalizer = resolver.adaptTo(Externalizer.class);
        if (null != purgedAssetResource && null != externalizer) {
            final String assetLink = externalizer.externalLink(resolver, "firestonebpco", path);
            jsonArray.put(assetLink);
            LOG.info("Asset link added: {}", assetLink);
        } else {
            jsonArray.put(path);
            LOG.info("Asset Resource path added: {}", path);
        }
        return jsonArray;
    }
    
    /**
     * Create the replication content containing the JSON array of URLs to purge.
     *
     * @param factory Factory to create replication content
     * @param jsonArray JSON array of URLS to include in replication content
     * @return replication content
     *
     * @throws ReplicationException if an error occurs
     */
    private ReplicationContent createContent(final ReplicationContentFactory factory,
            final JSONArray jsonArray) throws ReplicationException {

        Path tempFile;

        try {
            tempFile = Files.createTempFile("akamai_purge_agent", ".tmp");
        } catch (IOException e) {
            throw new ReplicationException("Could not create temporary file", e);
        }

        try (BufferedWriter writer = Files.newBufferedWriter(tempFile, Charset.forName("UTF-8"))) {
            writer.write(jsonArray.toString());
            writer.flush();

            return factory.create("text/plain", tempFile.toFile(), true);
        } catch (IOException e) {
            throw new ReplicationException("Could not write to temporary file", e);
        }
    }

    /**
     * {@inheritDoc}
     *
     * @return {@value #NAME}
     */
    @Override
    public String getName() {
        return NAME;
    }

    /**
     * {@inheritDoc}
     *
     * @return {@value #TITLE}
     */
    @Override
    public String getTitle() {
        return TITLE;
    }
}

This content builder implementation could be done away with, and the logic for building the purge objects could have been implemented in the transport handler itself, since the transport handler creates its own request anyway. However, if you need the JCR session you have to implement a ContentBuilder, since TransportHandler implementations do not provide access to it.

3. Replication Agent: Create a custom cq:Template as well as a corresponding cq:Component including the view and dialog. The easiest way to do this is to copy the default replication agent from /libs/cq/replication/templates/agent and /libs/cq/replication/components/agent to /apps/your-project/replication and update the agent like any other AEM component.

Our Akamai replication agent component inherits from the default replication agent by setting sling:resourceSuperType to cq/replication/components/agent. The only updates needed to the copied component are the dialog options and the agent.jsp file, since it contains the JS code that opens the dialog, for which you need to update the dialog path. Following are the agent.jsp and dialog.xml code:

<%@page session="false"%><%--

  Copyright 1997-2009 Day Management AG
  Barfuesserplatz 6, 4001 Basel, Switzerland
  All Rights Reserved.

  This software is the confidential and proprietary information of
  Day Management AG, ("Confidential Information"). You shall not
  disclose such Confidential Information and shall use it only in
  accordance with the terms of the license agreement you entered into
  with Day.

=================================================

  Agent component
  Displays information about a replication agent.

--%><%@page contentType="text/html"
            pageEncoding="utf-8"
            import="com.day.cq.replication.Agent,
                    com.day.cq.replication.AgentConfig,
                    com.day.cq.replication.AgentManager,
                    com.day.cq.replication.ReplicationQueue,
                    com.adobe.granite.ui.clientlibs.HtmlLibraryManager,
                    com.day.cq.i18n.I18n" %><%
%><%@include file="/libs/foundation/global.jsp"%><%
    I18n i18n = new I18n(slingRequest);
    String id = currentPage.getName();
    String title = properties.get("jcr:title", id);  // user generated content, no i18n

    AgentManager agentMgr = sling.getService(AgentManager.class);
    Agent agent = agentMgr.getAgents().get(id);
    AgentConfig cfg = agent == null ? null : agent.getConfiguration();

    if (cfg == null || !cfg.getConfigPath().equals(currentNode.getPath())) {
        // agent not active
        agent = null;
    }

    // get icons
    String globalIcnCls = "cq-agent-header";
    String statusIcnCls = "cq-agent-status";
    if (agent == null) {
        statusIcnCls += "-inactive";
        globalIcnCls += "-off";
    } else {
        try {
            agent.checkValid();
            if (agent.isEnabled()) {
                globalIcnCls += "-on";
                statusIcnCls += "-ok";
            } else {
                globalIcnCls += "-off";
                statusIcnCls += "-disabled";
            }
        } catch (IllegalArgumentException e) {
            globalIcnCls += "-off";
            statusIcnCls += "-invalid";
        }
    }

%><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN">
<html>
<head>
    <title><%= i18n.get("AEM Replication") %> | <%= xssAPI.encodeForHTML(title) %></title>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <%
    HtmlLibraryManager htmlMgr = sling.getService(HtmlLibraryManager.class);
    if (htmlMgr != null) {
        htmlMgr.writeIncludes(slingRequest, out, "cq.wcm.edit", "cq.replication");
    }

    %>
    <script src="/libs/cq/ui/resources/cq-ui.js" type="text/javascript"></script>
</head>
<body>
    <h2 class="<%= globalIcnCls %>"><%= xssAPI.encodeForHTML(title) %> (<%= xssAPI.encodeForHTML(id) %>)</h2>
    <%
        String description = properties.get("jcr:description", "");  // user generated content, no i18n
            %><p><%= xssAPI.encodeForHTML(description) %></p><%
    %><div id="agent-details" class="cq-replication-agent-details"><cq:include path="<%= xssAPI.encodeForHTMLAttr(resource.getPath()) + ".details.html" %>" resourceType="<%= xssAPI.encodeForHTMLAttr(resource.getResourceType()) %>"/></div>
    <div>
    <br>
    <%
        // draw the 'edit' bar explicitly. since we want to be able to edit the
        // settings on publish too. we are too late here for setting the WCMMode.
        /*
        out.flush();
        if (editContext != null) {
            editContext.getEditConfig().getToolbar().add(0, new Toolbar.Label("Settings"));
            editContext.includeEpilog(slingRequest, slingResponse, WCMMode.EDIT);
        }
        */

    %>
        <script type="text/javascript">
        CQ.WCM.edit({
            "path":"<%= xssAPI.encodeForHTML(resource.getPath()) %>",
            "dialog":"/apps/fsbp/components/content/agent/dialog",
            "type":"fsbp/components/content/agent",
            "editConfig":{
                "xtype":"editbar",
                "listeners":{
                    "afteredit":"REFRESH_PAGE"
                },
                "inlineEditing":CQ.wcm.EditBase.INLINE_MODE_NEVER,
                "disableTargeting": true,
                "actions":[
                    {
                        "xtype":"tbtext",
                        "text":"Settings"
                    },
                    CQ.wcm.EditBase.EDIT
                ]
            }
        });
        </script>
    </div>

    <%
        if (agent != null) {
    %>
    <div id="CQ">
        <div id="cq-queue">
        </div>
    </div>

    <script type="text/javascript">
        function reloadDetails() {
            var url = CQ.HTTP.externalize("<%= xssAPI.encodeForHTML(currentPage.getPath()) %>.details.html");
            var response = CQ.HTTP.get(url);
            if (CQ.HTTP.isOk(response)) {
                document.getElementById("agent-details").innerHTML = response.responseText;
            }
        }

        CQ.Ext.onReady(function(){
            var queue = new CQ.wcm.ReplicationQueue({
                url: "<%= xssAPI.encodeForHTML(currentPage.getPath()) %>/jcr:content.queue.json",
                applyTo: "cq-queue",
                height: 400
            });
            queue.on("afterrefresh", function(queue) {
                reloadDetails();
            });
            queue.on("aftercleared", function(queue) {
                reloadDetails();
            });
            queue.on("afterretry", function(queue) {
                reloadDetails();
            });
            queue.loadAgent("<%= xssAPI.encodeForHTML(id) %>");
        });

        function test() {
            CQ.shared.Util.open(CQ.HTTP.externalize('<%= xssAPI.encodeForHTML(currentPage.getPath()) %>.test.html'));
        }
    </script>
    <%
        } // if (agent != null)
    %>
</body>
</html>

dialog.xml:

<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:cq="http://www.day.com/jcr/cq/1.0" xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="cq:Dialog"
    height="512"
    title="Agent Settings">
    <items jcr:primaryType="cq:WidgetCollection">
        <tabs jcr:primaryType="cq:TabPanel">
            <items jcr:primaryType="cq:WidgetCollection">
                <tab1
                    jcr:primaryType="cq:Widget"
                    path="/libs/cq/replication/components/agent/tab_agent.infinity.json"
                    xtype="cqinclude"/>
                <tabAkamai
                    jcr:primaryType="cq:Widget"
                    path="/apps/fsbp/components/content/agent/tab_akamai.infinity.json"
                    xtype="cqinclude"/>
                <tab2
                    jcr:primaryType="cq:Widget"
                    path="/libs/cq/replication/components/agent/tab_transport.infinity.json"
                    xtype="cqinclude"/>
                <tab3
                    jcr:primaryType="cq:Widget"
                    path="/libs/cq/replication/components/agent/tab_proxy.infinity.json"
                    xtype="cqinclude"/>
                <tab4
                    jcr:primaryType="cq:Widget"
                    path="/libs/cq/replication/components/agent/tab_extended.infinity.json"
                    xtype="cqinclude"/>
                <tab5
                    jcr:primaryType="cq:Widget"
                    path="/libs/cq/replication/components/agent/tab_triggers.infinity.json"
                    xtype="cqinclude"/>
                <tab6
                    jcr:primaryType="cq:Widget"
                    path="/libs/cq/replication/components/agent/tab_batch.infinity.json"
                    xtype="cqinclude"/>
            </items>
        </tabs>
    </items>
</jcr:root>

tab_akamai.xml:
<?xml version="1.0" encoding="UTF-8"?>

<jcr:root xmlns:cq="http://www.day.com/jcr/cq/1.0" xmlns:jcr="http://www.jcp.org/jcr/1.0" xmlns:nt="http://www.jcp.org/jcr/nt/1.0"
    jcr:primaryType="cq:Panel"
    title="Proxy">
    <items jcr:primaryType="cq:WidgetCollection">
        <type
            jcr:primaryType="cq:Widget"
            defaultValue="arl"
            fieldDescription="Selecting &amp;quot;URLs/ARLs&amp;quot; will instruct Akamai to take action on the resources in the activation request. When purging by &amp;quot;CP codes&amp;quot; resources in the activation request are not considered."
            fieldLabel="Type"
            name="./type"
            type="select"
            xtype="selection">
            <options jcr:primaryType="cq:WidgetCollection">
                <url
                    jcr:primaryType="nt:unstructured"
                    text="URLs/ARLs"
                    value="url"/>
                <cpcode
                    jcr:primaryType="nt:unstructured"
                    text="CP codes"
                    value="cpcode"/>
            </options>
        </type>
        <cpCodes
            jcr:primaryType="cq:Widget"
            fieldDescription="CAUTION: Purging by CP code can significantly slow your origin server as Edge servers may need to refetch large amounts of data. Purging multiple CP codes may magnify this effect."
            fieldLabel="CP Codes"
            name="./akamaiCPCodes"
            xtype="multifield"/>
        <domain
            jcr:primaryType="cq:Widget"
            defaultValue="production"
            fieldLabel="Domain"
            name="./domain"
            type="select"
            xtype="selection">
            <options jcr:primaryType="cq:WidgetCollection">
                <production
                    jcr:primaryType="nt:unstructured"
                    text="production"
                    value="production"/>
                <staging
                    jcr:primaryType="nt:unstructured"
                    text="staging"
                    value="staging"/>
            </options>
        </domain>
        <action
            jcr:primaryType="cq:Widget"
            defaultValue="remove"
            fieldDescription="&amp;quot;Remove&amp;quot; deletes the content from Edge server caches. The next time an Edge server receives a request for the content, it will retrieve the current version from the origin server. &amp;quot;Invalidate&amp;quot; marks the cached content as invalid. The next time a server receives a request for the content, it sends an HTTP conditional GET (If-Modified-Since) request to the origin. If the content has changed, the origin server returns a full fresh copy. Otherwise, the origin normally responds that the content has not changed, and the Edge server can serve the already-cached content."
            fieldLabel="Action"
            name="./action"
            type="select"
            xtype="selection">
            <options jcr:primaryType="cq:WidgetCollection">
                <remove
                    jcr:primaryType="nt:unstructured"
                    text="remove"
                    value="delete"/>
                <invalidate
                    jcr:primaryType="nt:unstructured"
                    text="invalidate"
                    value="invalidate"/>
            </options>
        </action>
    </items>
</jcr:root>

You can create your custom replication agent under agents.author by selecting your newly created replication agent template, say "Akamai Purge Agent":


Now configure your Replication Agent. Select "My Project Akamai Purge Content Builder" from the Serialization dropdown.


Configure other tabs as shown below:


Configure the URI in the Transport tab so that it starts with "akamai://" (the scheme we handle in the AkamaiTransportHandler.java code), followed by the base URL that we got at the beginning (also present in the config file as the akamaiHost field).
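For example (the host is a placeholder; use the Base URL obtained from Luna, i.e. the akamaiHost value of the OSGi config):

akamai://akab-xxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx.luna.akamaiapis.net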



By aem4beginner