Build an App that Handles Caching with Fastify and AWS CloudFront

We all want to build blazing-fast applications, and one of the most effective ways to speed them up is caching.

In large web applications it’s common to adopt a Content Delivery Network (CDN). A CDN brings many advantages: it reduces latency, decreases server load and, combined with HTTP caching, drastically improves performance.

We’ll use fastify as the web server and Amazon CloudFront as the CDN to optimize HTTP response times via HTTP caching headers.

The architecture of this case is quite simple: the client -> the CDN -> the server(s).

diagram depicting http caching flow: from client to aws cloudfront to fastify

You can access the examples referred to in this article in the example repository.

HTTP cache

HTTP caching has been defined in the protocol since the beginning. It can be used to reduce server calls and data transfer, as well as to avoid repeating operations that produce the same results.

Two core concepts of caching are saving and updating operation results. Saving the results of operations lets us serve them quickly and avoid repeating computations that lead to the same outcome; updating those saved results when they become stale is what keeps a cache useful.

HTTP cache can be time-based or content-based – and of course, some content can’t be cached or is prohibitively difficult to cache.

Time-based caching
Time-based caching is quite simple to implement: each cache entry is given an expiration time, and when that time passes, the content is reloaded from the source. This strategy is the most efficient from the client’s perspective: once the content is received, no more requests are made for a given period. On the other hand, consistently orchestrating client-side requests is not that easy.
Content-based caching
Content-based caching is different: the client gets Etag and/or Last-Modified headers that identify the response; the client sends them back on subsequent requests for the same resource (via If-None-Match and If-Modified-Since), and the server either responds with new content and updated headers if the resource has changed in the meantime, or with a “304 Not Modified”, without sending the content again.

This is just an overview of HTTP caching. For more information see HTTP caching on MDN.

Choosing the right strategy to adopt depends on many factors, the very first being business requirements.

HTTP headers

Cache-Control is the main header for caching directives, for both requests and responses. In this use case, we focus on max-age and s-maxage, which tell the client and the CDN, respectively, the time-to-live (TTL) of a resource, and on no-cache to avoid caching. For more information see Cache-Control on MDN

The Vary response header describes which request headers take part in the cache key. For more information see Vary on MDN
Etag (entity tag)
The Etag response header is an identifier for the response. If the server response includes the Etag header, the client should provide its value in the If-None-Match header on subsequent requests to the same resource. For more information see Etag on MDN
Last-Modified follows the same concept as Etag but is based on a date. If it’s present in the response, the client should later send it as If-Modified-Since. Etag and Last-Modified can be used together. For more information see Last-Modified on MDN


We can configure different behaviours by combining the caching headers, but generally speaking, we can categorize content as public or private, and static or dynamic.

When we combine them, we get the following content types:

dynamic public
Dynamic content usually refers to server-rendered pages or API responses. For example, a home page optimized for the client device (mobile or desktop) or localized by the client’s origin; or an API that returns the content of an article from the company CMS.
dynamic private
The same as above, but private: the content is accessible under authorisation, or the user’s data affects the content; for example, a rendered page that includes a personalised welcome message. Because of the many parameters involved, this is the trickiest type of content to cache. Failing to properly define the parameters here can be disastrous, such as serving private content to the wrong users.
static public
This type of content is usually application assets, often served efficiently by storage services like Amazon S3.
static private
This is for content that is only accessible with authorization, for example, an image sent in a chat app. The approach is similar to dynamic private content.

To force a cache refresh, the simplest way is to remove the cache entries when new content is available (for example, on a new release of the frontend); otherwise, entries are reloaded when they expire or no longer match the content identifiers (Etag and/or Last-Modified).

Amazon CloudFront

Amazon CloudFront is our CDN of choice. The capabilities of CloudFront are wide, including compressing responses, applying Lambda@Edge functions or CloudFront functions to request/response, using streaming capabilities, adding encryption and much more. The feature set is so rich that caching is not even mentioned on the first documentation page.

For our purposes, we’ll focus only on the caching features, using CloudFront as a reverse proxy in front of our fastify application.

CloudFront allows you to define very fine-grained policies for caching. The main concepts are:

  1. Define the “cache key” per path to identify requests and therefore cache entries. Cache keys always include the URL and method, plus part or all of the query string; headers and cookies can be added to identify the request
  2. Use custom “CloudFront” HTTP headers to get client information such as the user’s device and location (see the full list in the CloudFront documentation), for example CloudFront-Is-Mobile-Viewer. Having that information ready to use on the server is very powerful!
  3. On the client response, CloudFront adds an x-cache header containing information about how the cache was used for the resource. It has four possible values: “Miss”, “Hit”, “RefreshHit” or “Error”
  4. It automatically adopts Etag and/or Last-Modified if present in the server response and manages them via If-None-Match and If-Modified-Since on subsequent requests.


Fastify works perfectly with CloudFront because it’s very easy to set HTTP headers and fastify also has the most efficient Etag computation in a tiny, yet amazing, plugin: fastify-etag.

When implementing time-based strategies, Etags are not needed, and for generated resources it can be hard to determine a Last-Modified date. However, since CloudFront manages Etags so efficiently out of the box, it’s very convenient to use them anyway.

The brilliant part of the Etag plugin is that it uses the fast fnv1a algorithm to hash the response, and it also automatically manages the If-None-Match request header.

In our case, once the fastify server provides the Etag, CloudFront is able to handle it, serving the matching content itself or forwarding the request to the fastify server.


Looking at benchmarks, Etag generation with the fnv1a algorithm is only about 10% slower than no Etag generation at all! Considering the benefits of adopting a validation-based strategy, this is an impressively low cost.


Let’s build a cache for a dynamic private API.

The fastify app has just a simple route with pseudo-authentication that responds with the user info and the CloudFront information about the request’s origin.


We’ll use CDK to set up the CloudFront distribution named “example”, which uses the CachePolicy and the OriginRequestPolicy.

Related Read: Cloud Governance with CDK using Aspects

The CachePolicy named “private-dynamic-content” has min and default TTLs of zero and a max TTL of 1 day; the request is identified by the value of the Authorization header and the query string, if any.

The OriginRequestPolicy named “forward-all” forwards all cookies, query string values and headers, and also includes the CloudFront headers that identify the client, such as “CloudFront-Is-Mobile-Viewer”, “CloudFront-Viewer-Country” and so on.

You can see the full code in the example repo.

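A condensed sketch of those two policies with aws-cdk-lib (v2): the policy names match the article, everything else is an assumption, so refer to the example repo for the real definitions.

```javascript
// Sketch of the CachePolicy and OriginRequestPolicy described above.
const cdk = require('aws-cdk-lib')
const cloudfront = require('aws-cdk-lib/aws-cloudfront')

class ExampleStack extends cdk.Stack {
  constructor (scope, id, props) {
    super(scope, id, props)

    // Cache key: Authorization header + full query string; TTLs 0 / 0 / 1 day.
    new cloudfront.CachePolicy(this, 'private-dynamic-content', {
      minTtl: cdk.Duration.seconds(0),
      defaultTtl: cdk.Duration.seconds(0),
      maxTtl: cdk.Duration.days(1),
      headerBehavior: cloudfront.CacheHeaderBehavior.allowList('Authorization'),
      queryStringBehavior: cloudfront.CacheQueryStringBehavior.all()
    })

    // Forward everything to the origin, plus the CloudFront client headers.
    new cloudfront.OriginRequestPolicy(this, 'forward-all', {
      cookieBehavior: cloudfront.OriginRequestCookieBehavior.all(),
      queryStringBehavior: cloudfront.OriginRequestQueryStringBehavior.all(),
      headerBehavior: cloudfront.OriginRequestHeaderBehavior.all(
        'CloudFront-Is-Mobile-Viewer',
        'CloudFront-Viewer-Country'
      )
    })
  }
}

module.exports = { ExampleStack }
```

Both policies are then attached to the distribution’s default behavior, alongside the HTTP origin pointing at the fastify app.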

Let’s see them in action.

The first call is for the user identified by the token “user:one”. The x-cache response header says it’s a miss for CloudFront, so the response is served by the fastify app.


Making the same request again, CloudFront serves it from its cache using the previous response, without reaching the fastify app.


Now let’s call the API as user “two”. Everything works fine: the server responds with the right data.


Making the same request again, CloudFront handles it properly, serving the right content for user “two”.


Pitfalls & caveats

Since CloudFront manages the request and response between server and client, we must be aware of its behaviour according to HTTP headers.

CloudFront policies control the cache on the CDN over and above the HTTP directives set by the server: the min/max/default CloudFront TTL values cap Cache-Control’s max-age and s-maxage, and even override no-cache and must-revalidate. That means, for example, that setting a minimum TTL of 1 on CloudFront will cache the response for 1 second, even with Cache-Control: no-cache,must-revalidate.

Cache-Control directives carry information for CloudFront and also for the final client, so they have to be set anyway, and they have to be consistent with the CloudFront settings.


Caching is a powerful yet tricky technique for improving application performance. We should approach it wisely and analyze what to cache and how. Powerful tools like Amazon CloudFront and fastify make it a little easier to implement and to stay in control of the outcome.
