Headless Ecommerce

Jamstack for eCommerce at Scale

Jamstack emerged as a novel design philosophy and architecture which allows the web to be faster, more secure and easier to scale. It maximizes productivity by building upon many of the tools and workflows which developers love. The core principles of Jamstack include pre-rendering and decoupling, which enables sites and applications to be delivered faster and with greater confidence.

Despite all its advantages, applying Jamstack to eCommerce websites with large catalogs and frequent updates involves a wide range of challenges. If you’re running an eCommerce site on a backend platform such as Salesforce Commerce Cloud, Magento, or SAP Hybris, you’re probably already facing some of them.

In this article, we cover the key challenges in building large-scale eCommerce Jamstack sites and how Layer0 can help you tackle these problems.

For the full version of Layer0 CTO Ishan Anand’s presentation at Jamstack Conference 2020, go to the official Layer0 YouTube channel.

What is Layer0?

Layer0 brings the advantages of Jamstack to eCommerce, accelerating site speeds and simplifying development workflows. By streaming cached data from the edge into the browser before it is requested, Layer0 is able to keep websites 5 seconds ahead of shoppers’ taps. Sharper Image, REVOLVE, and Shoe Carnival are just a few examples of sites leveraging the Layer0 Jamstack platform to increase developer productivity and deliver sub-second websites.

What are the challenges of using Jamstack for eCommerce at scale?

Using Jamstack and headless for eCommerce, especially on sites with large catalogs, frequent updates, or those on monolithic eCommerce platforms, is typically associated with dealing with the following challenges:

  • Long build times
  • Frequent updates
  • Tricky site migrations
  • Dynamic data
  • Personalization
  • A/B testing
  • Incomplete APIs
  • Data Pipeline Architecture
  • Customizations lost by APIs
  • Database connection limits
  • Team capability
  • CMS integration
  • Styles embedded in CMS content
  • Backoffice workflow integration

Build time friction and other challenges at scale

Jamstack has high traffic scalability built in. But the build step introduces a new scaling dimension, as typical static rendering happens during the build. As you expand your website, or perform more frequent changes, you exit the sweet spot where Jamstack is really fast and agile. The result is build time friction. It is easy to sweep the problem under the rug if you’re working on a small site, but that is not the case for the typical eCommerce site. 

Another important thing to remember is that sites are built as much by non-developers as by developers. Content, marketing, and merchandising teams change things constantly, so build time friction can quickly become a problem for the entire organization.

All of this is to say that “at scale” happens more often than you would think, and it’s not limited to eCommerce. Take a look at this comparison between retailer and news websites. For eCommerce sites, the number of SKUs is a proxy for the number of pages.

  • eCommerce sites with many products (SKUs)
  • Publishers with many articles

While you might think that only sites like Amazon have to deal with millions of SKUs, this is not true. Car parts websites are a great example: they host millions of products based on year/make/model/vehicle (YMMV) search criteria. For example, TruPar.com sells forklift parts exclusively and has 8M SKUs.

Thankfully, there are a few static and dynamic rendering techniques that help deal with Jamstack-at-scale problems.

“Static” techniques

  • Optimizing build times
  • Client-side rendering
  • Incremental static (re)generation

“Dynamic” techniques

  • Serverless server-side rendering + CDN
  • Parallel static rendering

“Mixed” rendering techniques

  • Choosing the best rendering technique for each class of pages
  • Choosing a framework and platform that let you mix techniques as needed

In the following paragraphs we will discuss what these techniques really mean.

Static techniques

Optimizing build times

There are a couple of methods for optimizing build times for dynamic JavaScript pages.

Incremental builds

With incremental builds you can save build artifacts and only regenerate what’s changed. If only a single page changed, you will regenerate that single page.

Parallel builds

The framework splits the build across multiple processes or threads. This is really helpful for image processing.

Alternate static site generators

Static site generators written in native, compiled languages are an emerging option and tend to report much better build times. Examples include Hugo (Go) and Nift (C++). However, many natively written static site generators don’t work well with JavaScript-heavy websites; Toast, which is relatively new, is trying to tackle that.

The caveat here is that framework and cloud provider support for parallel and incremental builds varies. Not all of them support these techniques, and those that do often offer only limited support.

Potential excess cost for pages with infrequent visits

There is also the issue of potential excess cost. If you have a large site with tens of thousands of SKUs or more, traffic typically follows a power-law distribution, and you end up spending extra compute time rebuilding pages that will never be visited. The more often you update the site, the larger that cost grows. Keep that in mind when evaluating these techniques.

According to willit.build (a Gatsby build benchmark page which provides historical build times of sites built on Gatsby Cloud), build times for Contentful and WordPress sites are about 200ms per page, which means that for a site with 10k pages a full build could take over half an hour. Incremental builds can get that down to a few minutes, which shows their power. This technique can be really helpful as long as you can avoid full builds.
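The arithmetic is easy to check (the 200ms-per-page figure is willit.build’s ballpark, not a guarantee):

```javascript
// Back-of-the-envelope build-time estimate: pages × per-page cost.
function fullBuildMinutes(pageCount, msPerPage) {
  return (pageCount * msPerPage) / 1000 / 60;
}

// 10,000 pages at 200 ms each is roughly 33 minutes of build time.
console.log(fullBuildMinutes(10000, 200).toFixed(1)); // "33.3"
```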

Client-side rendering

Also known as the app shell or SPA fallback model, client-side rendering is essentially CDN routing. If your site hosts a million products, the CDN layer routes all of them to a single static index.html that contains only an app shell. When that page is loaded by the browser, the client-side router fetches and renders the page content.

With client-side rendering you can effectively host an infinite number of pages, but there are some important considerations:

CSR may negatively impact SEO

The caveat with client-side rendering is that it can hurt performance, because the page can’t render until the JavaScript loads. Starting May 2021, Google will rank websites based on three speed metrics (CLS, LCP and FID), collectively called Core Web Vitals. Client-side rendering can negatively impact all of these, especially Cumulative Layout Shift. It’s not impossible, but it is quite hard to get good CLS with the app shell model; to do so you basically need to create custom versions of the app shell for each type of page.

Client-side rendered pages can’t be read by (some) bots

Some bots cannot read client-side rendered content. Google claims its bots can render and interpret JavaScript, but most other bots cannot, including those of most social platforms, which are a significant traffic source for many sites.

CSR requires support for rewrite and redirect rules

The third caveat is that CSR requires your CDN provider to support rewrite and redirect rules, and some handle this more elegantly than others. On AWS CloudFront, for example, you basically have to shoehorn this in through 404 page support or Lambda@Edge handlers.

Thankfully, the leading Jamstack platforms (Netlify, Vercel and Layer0) offer a fairly easy way to enable CSR.

In Netlify you use a redirects file. With the 200 status modifier, the redirect becomes a rewrite: a hidden redirect the user never sees.
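For reference, the whole SPA fallback in Netlify’s `_redirects` file is a single line; the 200 status is what turns the redirect into a rewrite:

```
/*    /index.html    200
```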


Vercel offers rewrites support in vercel.json, and it integrates tightly with Next.js.
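A minimal vercel.json rewrite for the SPA fallback might look like this (one common path-to-regexp form; check Vercel’s documentation for the current syntax):

```json
{
  "rewrites": [
    { "source": "/(.*)", "destination": "/index.html" }
  ]
}
```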


Layer0’s CDN-as-JavaScript supports Next.js rewrites and offers the same capability for other frameworks.


Incremental static generation

This technique was pioneered by Next.js and involves generating new static pages on demand in response to incoming traffic. When the browser requests a page that has not yet been built, the CDN quickly returns a universal fallback page that contains only placeholder data and no content.

While the fallback page is displayed, the page’s static build process runs in the background. When that build completes, the fallback page loads the static JSON data and displays the final page. From then on, future visits get the statically built HTML.
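The request flow can be modeled in a few lines of plain JavaScript; this is an illustrative sketch, not Next.js internals:

```javascript
// Illustrative model of incremental static generation: the first request
// for a page gets a fallback shell while the page builds in the
// background; later requests get the cached static HTML.
function createIncrementalSite(buildPage) {
  const cache = new Map();    // url -> built HTML
  const building = new Set(); // urls with a build in flight
  return function handleRequest(url) {
    if (cache.has(url)) return cache.get(url); // already statically built
    if (!building.has(url)) {
      building.add(url);
      Promise.resolve().then(() => {  // "background" build
        cache.set(url, buildPage(url));
        building.delete(url);
      });
    }
    return '<fallback shell>'; // universal placeholder page
  };
}
```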

You can see an example in Next.js’s static tweet demo.

When you visit https://static-tweet.now.sh/1346427855052353545, you’ll get a skeleton page if the tweet has never been rendered before. This happens only on the first visit: if you refresh, you’ll get the static HTML, no matter which edge of the global network you hit, and every future visit will get the statically generated page.

And because it’s static HTML backed by redundant storage, the page retains strong availability guarantees even if Twitter disappears from the internet.

So, you can imagine a site that has no pages built out, and as traffic comes in, it’s gradually building static pages. 

Incremental static regeneration

There is a variant of incremental static generation called incremental static regeneration, which is essentially the same process applied to updating an existing static page in response to traffic. If the underlying data has changed, the build process re-runs, inspired by stale-while-revalidate, a popular caching strategy: a stale version of the page is served (instead of the fallback) while the page rebuilds, and the new version is swapped in once the build finishes.

Incremental static regeneration:

  • Updates existing static pages in response to traffic,
  • Serves a stale version of the page instead of a fallback.
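The stale-while-revalidate flow can be sketched the same way; again, an illustrative model rather than framework internals:

```javascript
// Illustrative stale-while-revalidate regeneration: serve the cached,
// possibly stale page immediately, and rebuild in the background when the
// underlying data version has changed.
function createRegeneratingPage(build, currentVersion) {
  let cached = { html: build(), version: currentVersion() };
  return function serve() {
    const version = currentVersion();
    if (version !== cached.version) {
      const stale = cached.html;
      Promise.resolve().then(() => { // background rebuild
        cached = { html: build(), version };
      });
      return stale; // the stale copy is served, never a fallback
    }
    return cached.html;
  };
}
```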

Incremental static regeneration can have an impact on SEO and compatibility, especially on the first view of a page: the fallback page is entirely client-side rendered and contains no data, so it’s not clear how bots will respond to it.

Dynamic techniques 

In addition to static techniques, eCommerce websites can also benefit from dynamic techniques like:

  • Serverless server-side rendering + CDN
  • Parallel static rendering

Serverless server-side rendering + CDN

Using SSR in conjunction with the CDN allows you to generate pages on demand in response to traffic, which gives you a number of advantages. This technique is also more compatible with how traditional eCommerce platforms are made. It lets you support a large number of pages—you can dynamically generate these pages when needed—and ensures high compatibility with legacy platforms.

However, this technique is also a little controversial. The Jamstack community tends to be very dogmatic about what Jamstack is and asserts that Jamstack requires static generation. 

Serverless server-side rendering is effectively Jamstack-ish when 2 conditions are met:

1. Zero DevOps and no servers to manage. It’s serverless: developers don’t have to manage scaling. In fact, it’s the same serverless infrastructure that a lot of Jamstack platforms use to power their APIs; you can use it to serve HTML via SSR just as well as JSON data.

2. HTML is served from the CDN. This is a really critical condition. After the first cache miss, a CDN-served site is effectively as fast as a statically generated Jamstack site. Note that this requires proper cache management and is harder to do for multi-page sites.
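In practice, condition 2 mostly comes down to the SSR handler emitting the right cache headers. A minimal Node-style sketch, with illustrative header values:

```javascript
// Minimal SSR handler sketch. s-maxage applies to shared caches (the CDN),
// not the browser; stale-while-revalidate lets the edge keep serving a
// stale copy while it refreshes in the background.
function handleProductPage(req, res, renderHtml) {
  const html = renderHtml(req.url);
  res.setHeader('Content-Type', 'text/html; charset=utf-8');
  res.setHeader(
    'Cache-Control',
    'public, s-maxage=3600, stale-while-revalidate=86400'
  );
  res.end(html);
}
```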

Parallel static rendering / SSR preloading

Layer0 allows you to specify the set of URLs that should be pre-rendered and cached at the edge during deployment to ensure that users get a sub-second experience when accessing your site. 

Static pre-rendering involves sending requests to your application code and caching the result right after your site is deployed. In this way, you simply build your app to implement server-side rendering and get the speed benefits of a static site for some or all of your pages. This feature is especially useful for large, complex sites that have too many URLs to prerender without incurring exceptionally long build times.

SSR preloading is another technique Layer0 uses to accelerate page speeds. It is very similar to the regular SSR pipeline, but is based on an analysis of traffic logs after deployment: high-traffic pages are pre-loaded in parallel with the deploy. The deploy happens instantaneously while the high-traffic pages are built asynchronously, which decouples deploy from build. You get immediate deploys while also maximizing cache hits.

Essentially, a request for a high-traffic page will most likely be a cache hit. It’s the best way to maximize cache hits in this environment.

Parallel static rendering allows you to:

  • Analyze logs for high traffic pages
  • Fetch and store HTML for high traffic pages asynchronously after deploy
  • Immediately deploy while maximizing cache hits
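The log-analysis step can be sketched as a simple hit counter. This is illustrative, not Layer0’s actual implementation, and it assumes log lines shaped like "METHOD /path STATUS":

```javascript
// Illustrative post-deploy prewarm step: count hits per URL in the access
// logs and return the top-N pages worth pre-rendering into the edge cache.
function topPagesToPrewarm(logLines, n) {
  const hits = new Map();
  for (const line of logLines) {
    const url = line.split(' ')[1]; // assumes "METHOD /path STATUS" lines
    hits.set(url, (hits.get(url) || 0) + 1);
  }
  return [...hits.entries()]
    .sort((a, b) => b[1] - a[1]) // most-trafficked first
    .slice(0, n)
    .map(([url]) => url);
}
```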

Mixed rendering techniques

You don’t have to choose between static and dynamic rendering techniques. You can choose what’s right for each class of pages on your site. You might want to declare the “About us,” “Return Policy” or blog as static, and other pages like cart, product and categories as dynamic. We recommend that you choose a platform provider that lets you flexibly mix the techniques as needed, especially if you’re doing this at scale. 

Choose the best rendering technique for each class of pages, e.g.: declare some pages static (e.g. blog, about us, etc.), and other pages dynamic (e.g. cart, products, categories, etc.)

Choose a framework and platform provider that lets you flexibly mix techniques as needed


Jamstack at scale with Layer0

Today’s CDNs cache images, JavaScript and CSS, but not JSON or HTML, and that’s what’s holding up your page load times. Layer0 CDN-as-JavaScript makes it maintainable to cache that data at the edge, even in a dynamic, serverless SSR environment.

Jamstack takes the server out of the equation and effectively lets the CDN manage the traffic, which it can do with ease regardless of traffic fluctuations. Layer0 does the same but in a different manner: instead of rendering at build time, we render on request and cache each rendered page at the edge, so no further rendering is required after the first request.

Rendering each page at build time is fine for smaller sites, but once you grow, build times become almost unbearable, and the lack of customization and personalization (or the workarounds needed to deliver them) makes build-time-focused Jamstack less relevant for large-scale, database-driven websites such as eCommerce and travel.

CDN-as-JavaScript

Layer0 CDN-as-JavaScript gives you powerful edge control over cache keys, headers, cookies, and more. It understands your code and your framework’s routing, and it can be emulated locally or in pre-production environments.

Edge rules live in your code, just like in classic Jamstack, giving you complete control over the edge with live logs, versioning and 1-click rollbacks.

See the Layer0 Cookbook for some detailed examples of routing patterns on CDN-as-JavaScript.

Performance Monitor

To maximize cache hit rates it’s important to know what these rates really are in the first place, but this information is usually buried deep in your CDN’s access logs.  

Layer0 comes with performance monitoring built-in, making it easier to understand when page cache hits and misses happen, and exposing this information to the developer in a very friendly way. The Performance Monitor in Layer0 allows you to:

  • Understand traffic based on routes, not URL, because that’s how developers think about their app. It also tracks each deploy, so developers can pinpoint any regression.
  • Measure performance issues across the stack and loading scenarios (API, SSR, Edge, etc.)

Layer0 has also created a tool to diagnose whether a response is coming from the edge or the origin: DevTools. The example below shows how it works on top of an app shell built with React Storefront, showing when a request hits. The response in this example is coming through the Layer0 edge network.

Layer0 DevTools allow you to diagnose whether responses come from the edge or origin

Understanding whether a response comes from the edge or the origin is critical for prefetching at scale, which is another thing Layer0 does for you.

Prefetching at scale

Prefetching is important for performance because it unlocks instant page speeds. Traditional page speed tests, like what you measure with Lighthouse, are focused on what happens after the customer clicks. But a lot can be done before the customer taps, achieving effectively zero latency and almost infinite bandwidth.

Websites on Layer0 are blazingly fast because they use advanced predictive prefetching along with Layer0 CDN-as-JavaScript, which allows them to stay 5 seconds ahead of shoppers’ taps. This is done by streaming cached dynamic data from the edge to the users’ browsers before they click anything—based on what they are expected to click on next. In other words, your store can serve JSON data for the different products you are offering, their prices and information, in a fraction of the time.
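The idea can be sketched as a small prefetch cache. `fetchJson` and the API URLs below are hypothetical, and real predictive prefetching also decides which links are worth warming:

```javascript
// Illustrative prefetch cache: warm the JSON for links the shopper is
// likely to tap next, so the eventual navigation is a local cache hit.
function createPrefetcher(fetchJson) {
  const cache = new Map();
  return {
    prefetch(urls) {
      return Promise.all(
        urls.map(async (url) => {
          if (!cache.has(url)) cache.set(url, await fetchJson(url));
        })
      );
    },
    get(url) {
      return cache.get(url); // instant if it was prefetched
    },
  };
}
```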

Incremental migration

Layer0 offers incremental (gradual, progressive) migration, which lets you migrate one section of the app at a time, following Martin Fowler’s strangler pattern: you incrementally “strangle” specific functionalities of the legacy site and replace them with new applications and services. It’s like moving a mountain stone by stone.

Incremental migration requires routing control at the CDN edge or origin. Here’s an example of how you do this on Layer0 using CDN-as-JavaScript.
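As a generic sketch of strangler-pattern routing (not Layer0’s actual router API, and the path prefixes are hypothetical), already-migrated sections are served by the new app while everything else falls through to the legacy origin:

```javascript
// Generic strangler-pattern routing sketch: migrated sections go to the
// new app, everything else falls through to the legacy origin.
const migratedPrefixes = ['/checkout', '/cart'];

function pickOrigin(path) {
  return migratedPrefixes.some((prefix) => path.startsWith(prefix))
    ? 'new-app'
    : 'legacy';
}
```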

Personalization and segmentation

Segmentation is important for large sites, and it’s not limited to personalization: it also covers language, geography, and more. That makes sense, because large sites usually operate across geographies, and it’s crucial for them to be able to customize content for users as they visit the site.

The general guideline is: if the personalized content is below the fold, we recommend late-loading it with client-side rendering. If it’s above the fold, you really want it in the server-rendered output.

Above the fold personalized = add personalization to cache key

On Layer0 you can declare a custom cache key and personalize, for example, based on currency or behavior. You can customize the promotions and sorting order on the category pages—based on whether somebody is a frequent visitor or a new visitor—with just a few lines in CDN-as-JavaScript. 
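Splitting the cache by currency can be sketched as a custom cache-key function. This is illustrative plain JavaScript, not Layer0’s API, and the cookie name is an assumption:

```javascript
// Illustrative custom cache key: split the edge cache by a hypothetical
// "currency" cookie, so each currency gets its own cached copy of a page.
function cacheKey(req) {
  const match = /(?:^|;\s*)currency=([^;]+)/.exec(req.headers.cookie || '');
  const currency = match ? match[1] : 'USD'; // default bucket
  return `${req.url}|currency=${currency}`;
}
```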

A/B testing and Layer0

A/B testing and personalization add a whole new layer of complexity in building Jamstack sites. Testing is very important for large sites and big organizations, where decisions are ROI driven and must be proven to improve conversion rates. 

In traditional Jamstack, however, the only option is client-side A/B testing that runs in the browser. The issue is that this can impact performance and nullify your testing in two ways. First, it can hurt the performance of the variants, which erases any measured improvement. Second, A/B tests sometimes take effect after the eye has already moved past the tested element: you may have an A/B test in the header, and the user has already scanned past that header by the time the JavaScript runs and changes it.

The problems of client-side A/B testing

  • Usually the only option for static sites
  • Doesn’t run until JavaScript runs
  • Poor performance that possibly nullifies the test

Layer0 Edge Experiments remedy these problems by enabling A/B testing at the edge. On the XDN, new experiences are always native, cached and sub-second. This extends beyond A/B tests to any variant of your website.

Edge Experiments

Layer0 also comes with a powerful Edge Experiments engine built in. The module is part of CDN-as-JavaScript and is aware of all of your variants, ensuring each is cached separately at the edge. This gives you control over exactly which visitors see which variant.

Edge Experiments allow you to:

  • Route live traffic to any deployed branch at the edge of the network
  • Run A/B tests, canary deploys, or feature flags
  • Write routing rules based on probabilities, header values, or even IP addresses

With Edge Experiments, you can easily split tests without affecting the performance of your site. Splits are executed at the edge through an easy-to-use yet powerful interface. Edge Experiments can be used for A/B and multivariate tests, canary deploys, blue-green tests, iterative migration off of a legacy website, personalization, and more. 

How our clients benefit from Layer0

Layer0 provides a frictionless transition to Jamstack and headless and offers a huge advantage for sites with large catalogs, frequent updates, or those running legacy eCommerce platforms. Shoe Carnival and Turnkey Vacation Rentals are two examples of developer teams at large sites that are using Jamstack and headless for eCommerce on Layer0.

Turnkey

TurnKey Vacation Rentals is a full-service vacation rental property management company for premium and luxury-level rental homes in top travel destinations across the country. Unlike sites like Airbnb, TurnKey offers only pre-vetted listings. It also handles management details centrally, using a standardized set of tech tools. 

Original setup

TurnKey was running an app inside Docker on AWS Elastic Beanstalk and was looking for a solution that would provide greater control and insight into performance.

They considered a couple of Jamstack solutions, but wanted a platform with native Next.js support, like Layer0. The fact that Layer0 let them avoid refactoring how their codebase and data pipeline worked was one of the deciding factors.

Layer0 has helped Turnkey increase agility with a number of features listed below.  

Environments

In the past, Turnkey used a custom pipeline built inside of Jenkins, and the team was deploying from a trunk branch, never having complete confidence in what was getting ready to go out into production. 

With Layer0 the branches have individual environments, and the team at Turnkey can set up pristine environments—they don’t merge into the staging environment until they know something has passed QA. This removes the mental burden associated with QA.

Logs

Digging through server logs on Beanstalk can be a nightmare—you have to figure out exactly which logs you’re looking for, which server they’re on, if they’re load balanced, etc. With Layer0 you can live stream logs directly from your build: find the build you want to troubleshoot, press play, and watch the log.

Incremental migration

TurnKey had pages that were not yet on React/Next.js and were still running on the old architecture. With Layer0 they could put what they’d already migrated on the XDN and continue migrating incrementally.

Layer0 gave the team at Turnkey tools to focus on performance.

Shoe Carnival

Shoe Carnival Inc. is an American retailer of footwear. The company currently operates an online store alongside 419 brick-and-mortar stores throughout the midwest, south, and southeast regions of the US. 

Below are some of Layer0 features that the Shoe Carnival team found especially useful.

Flexibility

Shoe Carnival uses Salesforce Commerce Cloud, which is not really designed to power headless frontends like Shoe Carnival’s. A lot of backend engineering was needed to deliver data to the frontend. Those challenges could be solved thanks to the flexibility of the Layer0 backend sitting between Salesforce and the React frontend: the team at Shoe Carnival could freely build with React and ignore the limitations of Salesforce.

Time to production boost

Shoe Carnival’s time to production improved dramatically. The team can work independently of Salesforce development cycles and make very quick changes in deployment.

Site speed

Speed to production is a huge benefit, but the site’s performance in general is hard to ignore: Shoe Carnival went from 5-6 second average page loads to sub-second. They can cache things at a very granular level and have the tools to make sure that what customers are looking for is always available and up to date.

Incremental deployment 

Incremental deployment lets the team ship to production much faster than having to build the complete application for every deploy.

As for the impact of the migration to Layer0: when Shoe Carnival ran a 50/50 split at the CDN level, testing the origin site against the headless site for conversions, the headless site always won, outperforming the origin site on both speed and visibility.

Summary

At Layer0, we believe Jamstack is the future of web development. Layer0 essentially brings the performance and simplicity benefits of Jamstack to front-end developer teams at large, dynamic eCommerce sites where traditional static techniques typically don't apply. We like to call it dynamic Jamstack. It makes SPA websites instant-loading and easier to develop for. 

Layer0 comes with an application-aware CDN-as-JavaScript, which can augment or even replace your current CDN and bring all the web security features you need to the edge. Layer0 also comes with a bunch of dev-focused technologies that make the entire process of developing, deploying, previewing, experimenting on, monitoring and running your headless frontend simple, including automated full-stack preview URLs, a serverless JavaScript backend for frontend, advanced cache monitoring and more. 

Layer0 is an all-in-one development platform that lets you:

  • Utilize Jamstack for eCommerce via both pre and just-in-time rendering
  • Enable zero latency networking via prefetching of data from your product catalog APIs
  • Configure edge natively in your app (CDN-as-JavaScript)
  • Run edge rules locally and in pre-prod
  • Create preview URLs from GitHub, GitLab, or Bitbucket with every new branch and push
  • Run splits at the edge for performant A/B tests, canary deploys, and personalization
  • Use serverless JavaScript that is much easier and more reliable than AWS Lambda
