Performance Testing for Consent

Feel free to explore our blog on how we optimize Transcend Consent Management for performance to see why we believe it will keep your website nearly as performant as it would be without any consent tool (or even faster!).

The blog presents some metrics we've recorded historically; however, each company's website may use different external scripts, serve users in different regions, and so on, all of which can change how Transcend Consent Management affects a specific site.

This document is meant to give you the power to test Transcend Consent Management's performance on your website specifically, using a publicly available tool called Google Lighthouse.

Google Lighthouse is our recommended tool for testing webpage performance for a few reasons:

  • It has an API that lets us simulate websites both with and without airgap.js running, giving us a highly accurate view of Airgap's effect on a website's performance
  • It has a long history of usage, and is the underlying technology behind many other performance testing services
  • It is publicly available and free to use
  • It has a companion tool for running on CI (https://github.com/GoogleChrome/lighthouse-ci), making it a great fit for automation and helping to minimize some variability
  • It has throttling controls that let you test a variety of web experiences your users may have (a small example follows this list)
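
For instance, here is a hedged sketch of the throttling knobs Lighthouse exposes; the specific values below are illustrative, not recommendations:

// Simulated throttling approximates a slower network and CPU so you can
// estimate the experience of users on weaker connections or devices.
const throttledFlags = {
  onlyCategories: ['performance'],
  throttlingMethod: 'simulate',
  throttling: {
    rttMs: 150, // simulated round-trip time in milliseconds
    throughputKbps: 1638.4, // roughly a slow 4G connection
    cpuSlowdownMultiplier: 4, // simulate a 4x slower CPU
  },
};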

You are welcome to use any other tooling you would like to test your site's performance, and we will gladly review any metrics you collect to ensure you are confident in your website's performance with Airgap enabled.

If you do not want to set up Lighthouse or a CI system that can run it reproducibly, please let us know, and we can generate a report for you using our already-existing infrastructure at no cost. If that's the case, you can consider the rest of this document an explanation of what the numbers in your report will mean.

Here is a sample automated report

The first step is to have a live website that uses your Airgap bundle. For testing purposes, this is often a staging site, or a production site where the bundle is hidden behind a feature flag (potentially through a tag manager).

Next, we want to run a series of tests where we load the page and record page-load metrics. In half of these tests, we will load the page as it is; in the other half, we will load the page with requests to transcend-cdn.com blocked, meaning Airgap will not load and you will be testing the site as though there were no Airgap bundle running at all. Even if you have something like feature flags to turn airgap.js on and off, we still recommend the request-blocking approach, since it minimizes differences that the feature-flag tool itself could introduce.
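
To make the comparison concrete before we wire it into CI, here is a minimal sketch using Lighthouse's Node API; the staging URL is hypothetical, and the CI setup below expresses the same idea through configuration:

const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

// Run a single performance-only audit, optionally blocking Transcend's CDN
// so that airgap.js never loads.
async function auditOnce(url, blockAirgap) {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const { lhr } = await lighthouse(url, {
    port: chrome.port,
    onlyCategories: ['performance'],
    blockedUrlPatterns: blockAirgap ? ['transcend-cdn.com'] : [],
  });
  await chrome.kill();
  return lhr.audits['first-contentful-paint'].numericValue; // milliseconds
}

(async () => {
  console.log('FCP with airgap.js:', await auditOnce('https://staging.example.com', false));
  console.log('FCP without airgap.js:', await auditOnce('https://staging.example.com', true));
})();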

We are going to set up this test on GitHub Actions, as CI gives us a fairly consistent environment to run our tests in. Because Lighthouse actually loads your website in a headless browser on each run, there is some variance in the results.

The first step is to create a file named lighthouserc.json that contains the configuration for running Lighthouse CI. In this example, both configuration files live in a lighthouse/ directory at the root of the repository, matching the paths referenced below.

{
  "ci": {
    "collect": {
      "settings": {
        "configPath": "./lighthouse/lighthouse-config.js"
      }
    }
  }
}

Next, we create the referenced config file, named lighthouse-config.js, which determines how Lighthouse itself should operate:

module.exports = {
  extends: 'lighthouse:default',
  settings: {
    emulatedFormFactor: 'desktop',
    onlyCategories: ['performance'],
    throttlingMethod: 'provided',
    // You can find audit names here:
    // https://github.com/GoogleChrome/lighthouse/blob/eba2a4d19c5786dc37e993858ff4b663181f81e5/lighthouse-core/config/default-config.js#L174
    onlyAudits: [
      'metrics/first-contentful-paint',
      'metrics/first-meaningful-paint',
      'metrics/speed-index',
      'critical-request-chains',
    ],
    blockedUrlPatterns:
      process.env.BLOCK_TRANSCEND === 'true' ? ['transcend-cdn.com'] : [],
  },
};

Note that when the BLOCK_TRANSCEND environment variable is set to true, requests to Transcend's CDN are blocked entirely.
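
If you would like to sanity-check this switch before pushing to CI, here is a hedged sketch that loads the same config file through Lighthouse's Node API; the staging URL and script name are hypothetical:

const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

// BLOCK_TRANSCEND is read when the config module is required,
// exactly as it is during a CI run.
const config = require('./lighthouse/lighthouse-config.js');

(async () => {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const { lhr } = await lighthouse(
    'https://staging.example.com',
    { port: chrome.port },
    config
  );
  console.log('performance score:', lhr.categories.performance.score);
  await chrome.kill();
})();

Running this as BLOCK_TRANSCEND=true node check.js should produce a report with no requests to transcend-cdn.com.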

Finally, create the CI job in .github/workflows/airgap_perf_testing.yml:

name: Airgap perf testing

on:
  workflow_dispatch:
    inputs:
      url:
        required: true
        type: string
        description: The URL that uses airgap.js to test against

env:
  NODE_OPTIONS: --max_old_space_size=7168
  GITHUB_USERNAME: transcend-bot

jobs:
  test:
    runs-on: ubuntu-22.04
    timeout-minutes: 10
    name: 'Perf test (block airgap: ${{ matrix.block_transcend }})'
    strategy:
      fail-fast: false
      matrix:
        block_transcend: [false, true]
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3.1.1
        with:
          node-version: '14.17.3'
      - name: Run Lighthouse on URLs with lighthouserc
        uses: treosh/lighthouse-ci-action@v9
        id: lighthouse
        env:
          BLOCK_TRANSCEND: ${{ matrix.block_transcend }}
        with:
          urls: ${{ inputs.url }}
          uploadArtifacts: true
          temporaryPublicStorage: true
          runs: 10
          configPath: '${{ github.workspace }}/lighthouse/lighthouserc.json'
      - name: Print first contentful paint metrics
        run: |
          echo "All results:"
          cat ${{ steps.lighthouse.outputs.resultsPath }}/lhr-*.json \
            | jq '.audits."first-contentful-paint".displayValue'

          echo "Average:"
          cat ${{ steps.lighthouse.outputs.resultsPath }}/lhr-*.json \
            | jq -r '.audits."first-contentful-paint".numericValue' \
            | jq --slurp '. | add / length / 10 | floor | . / 100 | tostring | . + " s"'
      - name: Print overall performance scores
        run: |
          echo "All results:"
          cat ${{ steps.lighthouse.outputs.resultsPath }}/lhr-*.json \
            | jq '.categories.performance.score * 100'

          echo "Average:"
          cat ${{ steps.lighthouse.outputs.resultsPath }}/lhr-*.json \
            | jq '.categories.performance.score * 100' \
            | jq --slurp '. | add / length'
      - name: Print airgap.js bundle download times
        # Only meaningful when airgap.js actually loads, i.e. when the CDN is not blocked
        if: ${{ !matrix.block_transcend }}
        run: |
          echo "All results:"
          cat ${{ steps.lighthouse.outputs.resultsPath }}/lhr-*.json \
            | jq -r '.audits."critical-request-chains".details.chains[].children[] | select(.request.url | endswith("airgap.js")) | .request | (.endTime - .startTime) * 1000'

          echo "Average:"
          cat ${{ steps.lighthouse.outputs.resultsPath }}/lhr-*.json \
            | jq -r '.audits."critical-request-chains".details.chains[].children[] | select(.request.url | endswith("airgap.js")) | .request | (.endTime - .startTime) * 1000' \
            | jq --slurp '. | add / length'

This will run the Lighthouse tests ten times for each matrix configuration and print some averages across the runs.

Running the above workflow is good for a quick summary, but it will not establish statistically significant results on its own (unless you increase the number of runs considerably and perform the statistical analysis yourself). There are a few improvements we have made in our version of this workflow, which we can run for you:

  • We run the Lighthouse tests in parallel across dozens of different executors, which reduces both the time to generate a report and the noise from CI executors that perform at variable levels
  • We perform a two-sample Kolmogorov–Smirnov test to check whether the two samples (with and without airgap.js enabled) are likely drawn from the same distribution (a sketch of the test follows this list)
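
For the curious, here is a minimal sketch of the two-sample Kolmogorov–Smirnov test; the sample arrays in the usage example are hypothetical, and our production analysis relies on an off-the-shelf statistics library:

// Two-sample KS test: D is the largest gap between the two empirical CDFs,
// and the p-value uses the standard asymptotic Kolmogorov approximation.
function ksTwoSample(a, b) {
  const xs = [...a].sort((m, n) => m - n);
  const ys = [...b].sort((m, n) => m - n);
  let d = 0;
  for (const v of [...xs, ...ys]) {
    const fx = xs.filter((x) => x <= v).length / xs.length;
    const fy = ys.filter((y) => y <= v).length / ys.length;
    d = Math.max(d, Math.abs(fx - fy));
  }
  const n = (xs.length * ys.length) / (xs.length + ys.length);
  const lambda = (Math.sqrt(n) + 0.12 + 0.11 / Math.sqrt(n)) * d;
  let p = 0;
  for (let j = 1; j <= 100; j += 1) {
    p += 2 * (-1) ** (j - 1) * Math.exp(-2 * j * j * lambda * lambda);
  }
  return { d, p: Math.min(Math.max(p, 0), 1) };
}

// A small p-value suggests the two sets of page-load samples come from
// different distributions; a large one means we cannot claim they differ.
console.log(ksTwoSample([1210, 1190, 1250, 1230], [1225, 1200, 1260, 1215]));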

Here's an example of what this ends up looking like, where a single job parses and analyzes the results after all the Lighthouse jobs have run in parallel:

Example of tests running in parallel

Please reach out if you would like us to run one of these tests for you.

Here is a full sample report we can produce for you, with a live version found here.

Sample Report with and without Airgap running

In this example, we see that the performance metrics generally follow similar distributions with and without airgap.js enabled, and this observation is backed up by our statistical tests, which cannot conclude that the distributions differ. This is great news, as it means your users would likely not notice any difference between Airgap being used on your site or not. That said, it is important to run these tests against your own website to see whether your results differ.