UI Coverage (Premium Solution)

Results API

The @cypress/extract-cloud-results module provides the getUICoverageResults utility to programmatically fetch UI Coverage results for a run in a CI environment. This allows you to determine if test coverage meets your requirements before merging code changes.

Supported CI Providers

The utility supports the CI providers covered in the Supported CI Provider Overview below: GitHub Actions, GitLab Pipelines, Jenkins, Azure, CircleCI, AWS CodeBuild, and Drone. Refer to the linked guides for setup details.

For other CI providers, contact Cypress Support to request support for your provider.

Installation

Install the @cypress/extract-cloud-results module in your install step in CI.

npm install --force https://cdn.cypress.io/extract-cloud-results/v1/extract-cloud-results.tgz
caution

Do not check this module in as a dependency. We recommend you install it separately outside of your normal module installation. Use --force to get the latest version.

If you check this in as a dependency, your installation will fail when we update the package.

Usage

1. Get the UI Coverage Results

Write a script to fetch UI Coverage results and assert test coverage criteria. This script will be executed in CI.

scripts/verifyUICoverageResults.js
const { getUICoverageResults } = require('@cypress/extract-cloud-results')

getUICoverageResults({
  projectId: process.env.CYPRESS_PROJECT_ID, // optional if CYPRESS_PROJECT_ID is set in the environment
  recordKey: process.env.CYPRESS_RECORD_KEY, // optional if CYPRESS_RECORD_KEY is set in the environment
  runTags: [process.env.RUN_TAGS], // required if recording multiple runs per CI build
}).then((results) => {
  const { runNumber, uiCoverageReportUrl, summary, views } = results

  console.log(
    `Received ${summary.isPartialReport ? 'partial ' : ''}results for run #${runNumber}.`
  )
  console.log(`See full report at ${uiCoverageReportUrl}.`)

  // Verify overall coverage
  if (summary.coverage < 80) {
    throw new Error(
      `Project coverage is ${summary.coverage}%, below the minimum threshold of 80%.`
    )
  }

  const criticalViews = [/login/, /checkout/]

  // Verify critical view coverage
  views.forEach((view) => {
    const { displayName, coverage, uiCoverageReportUrl } = view

    if (
      criticalViews.some((pattern) => pattern.test(displayName)) &&
      coverage < 95
    ) {
      throw new Error(
        `Critical view "${displayName}" coverage is ${coverage}%, below the required 95%. See: ${uiCoverageReportUrl}`
      )
    }
  })

  console.log('UI Coverage is above minimum thresholds.')
})
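The critical-view check in the script above can also be factored into a small pure helper, which makes it easy to unit test against sample data before wiring it into CI. The function and sample data below are illustrative, not part of the `@cypress/extract-cloud-results` module:

```javascript
// Illustrative helper (not part of @cypress/extract-cloud-results):
// returns the views matching any "critical" pattern whose coverage
// falls below the given threshold.
function findFailingCriticalViews(views, patterns, threshold) {
  return views.filter(
    (view) =>
      patterns.some((pattern) => pattern.test(view.displayName)) &&
      view.coverage < threshold
  )
}

// Sample data shaped like the Results API `views` array:
const failing = findFailingCriticalViews(
  [
    { displayName: '/login', coverage: 90 },
    { displayName: '/checkout', coverage: 97 },
    { displayName: '/about', coverage: 10 },
  ],
  [/login/, /checkout/],
  95
)
console.log(failing.map((v) => v.displayName)) // → [ '/login' ]
```

Only `/login` is returned: `/checkout` meets the threshold and `/about` does not match a critical pattern.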

getUICoverageResults arguments

getUICoverageResults accepts the following arguments:

getUICoverageResults({
  // The Cypress project ID.
  // Optional if the CYPRESS_PROJECT_ID env var is set.
  projectId: string,
  // The project's record key.
  // Optional if the CYPRESS_RECORD_KEY env var is set.
  recordKey: string,
  // The run tags associated with the run.
  // Required if you are recording multiple Cypress runs from a single CI build.
  // Pass the run tags you used when recording each run.
  runTags: string[],
})

Result Details

The getUICoverageResults utility returns the following data:

{
  // The run number of the identified build.
  runNumber: number,
  // The run URL for the identified build.
  runUrl: 'https://cloud.cypress.io/projects/:project_id/runs/:run_number',
  // The status of the identified build.
  runStatus: 'passed' | 'failed' | 'errored' | 'timedOut' | 'cancelled' | 'noTests',
  // The URL that links to the UI Coverage report for the identified build.
  uiCoverageReportUrl: 'https://cloud.cypress.io/[...]',
  summary: {
    // Indicates whether a complete UI Coverage report was generated.
    // For example, if a run was cancelled and the report expected to run
    // for 20 specs, but only 10 ran, this would result in a partial report.
    isPartialReport: boolean,
    // The report coverage from 0-100 with 2-decimal precision (e.g. 92.45).
    coverage: number,
    // The number of views tested and analyzed.
    viewCount: number,
    // The number of interactive elements that were tested.
    testedElementsCount: number,
    // The number of interactive elements that were not tested.
    untestedElementsCount: number,
  },
  // The list of tested views and the coverage of each page.
  views: [{
    // The sanitized URL pattern shown in the report.
    displayName: string,
    // The view coverage from 0-100 with 2-decimal precision (e.g. 92.45).
    coverage: number,
    // The number of interactive elements that were tested on this view.
    testedElementsCount: number,
    // The number of interactive elements that were not tested on this view.
    untestedElementsCount: number,
    // The URL that links to the report for this view.
    uiCoverageReportUrl: 'https://cloud.cypress.io/[...]',
  }],
}
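Because each view carries its own tested/untested element counts, you can recompute aggregate figures from the `views` array yourself. The helper below is an illustrative sketch with made-up sample data; note that this simple element ratio is not necessarily how the report's `coverage` score is computed:

```javascript
// Illustrative only: recompute element totals and an overall tested ratio
// from the `views` array returned by getUICoverageResults.
function summarizeViews(views) {
  const tested = views.reduce((sum, v) => sum + v.testedElementsCount, 0)
  const untested = views.reduce((sum, v) => sum + v.untestedElementsCount, 0)
  const total = tested + untested
  return {
    tested,
    untested,
    // Percentage of interactive elements tested, 2-decimal precision.
    testedPercent: total === 0 ? 0 : Math.round((tested / total) * 10000) / 100,
  }
}

// Sample data shaped like the Results API response:
const stats = summarizeViews([
  { testedElementsCount: 30, untestedElementsCount: 10 },
  { testedElementsCount: 15, untestedElementsCount: 5 },
])
console.log(stats) // → { tested: 45, untested: 15, testedPercent: 75 }
```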

2. Add to CI Workflow

In your CI workflow that runs your Cypress tests,

  1. Update your install job to install the @cypress/extract-cloud-results module.
  2. Pass in the necessary arguments to getUICoverageResults.
  3. Add a new step to the job that runs your Cypress tests to verify the UI Coverage results.
info

If you record multiple runs in a single CI build, you must record these runs using the --tag parameter and then call getUICoverageResults with the runTags argument for each run.

Tags are how each run is uniquely identified, so the utility can return the corresponding set of results for each.

Example

  • Let's imagine that within a single CI build you call cypress run --record multiple times because you're running one set of tests against a staging environment, followed by a production environment.
  • In this scenario, you pass a different --tag to each cypress run
    • cypress run --record --tag staging
    • cypress run --record --tag production
  • When calling getUICoverageResults you would then pass these same tags to get the unique set of results for each run
    • getUICoverageResults({ runTags: ['staging']})
    • getUICoverageResults({ runTags: ['production']})
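In script form, the per-tag lookups above could be driven by a loop. The sketch below stubs `getUICoverageResults` with a fake that echoes canned data (the real function comes from `@cypress/extract-cloud-results` and contacts Cypress Cloud); the loop and threshold check are what this example is meant to show:

```javascript
// Stub standing in for require('@cypress/extract-cloud-results').
// The real getUICoverageResults contacts Cypress Cloud; this fake just
// returns fixed data so the control flow can run end to end.
const getUICoverageResults = ({ runTags }) =>
  Promise.resolve({ runNumber: 1, summary: { coverage: 90 }, runTags })

async function verifyAllRuns(tags, minimumCoverage) {
  for (const tag of tags) {
    // One result set per recorded run, selected by its tag.
    const results = await getUICoverageResults({ runTags: [tag] })
    if (results.summary.coverage < minimumCoverage) {
      throw new Error(`Run tagged "${tag}" is below ${minimumCoverage}% coverage.`)
    }
    // Logs a pass line per tag.
    console.log(`Run tagged "${tag}" passed at ${results.summary.coverage}%.`)
  }
}

verifyAllRuns(['staging', 'production'], 80)
```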

Example Job Workflow Update

test_cypress.yaml
name: My Workflow
on: push

env:
  CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}

jobs:
  run-cypress:
    runs-on: ubuntu-24.04
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: install
        run: npm install
      - name: Run
        run: npx cypress run --record
+     - name: Verify UI Coverage Results
+       run: |
+         npm install --force https://cdn.cypress.io/extract-cloud-results/v1/extract-cloud-results.tgz
+         node ./scripts/verifyUICoverageResults.js

Required CI environment variables

The @cypress/extract-cloud-results helper cross-references some environment variables from where it is executed with ones that were present when a Cypress Cloud run was recorded. This allows for automatically detecting the correct Cloud run when the Results API is invoked from the same CI context as a given run (as is the case in the above examples).

For more complex setups, or for local iteration on your Results API handler code, it can be useful to know what variables Cypress is looking for so that you can make sure they are passed through where they are needed.

Likewise, if you want to use the Results API locally to pull the data for a specific run (within the last 7 days), you can set these variables locally to match what was present in CI.

Local development example

If you executed a run in GitHub Actions and it was recorded to Cypress Cloud, you would set the environment variables below to replicate the context of that run locally and execute your local handler script. This is a great way to iterate on your script and verify everything is working as expected, without having to integrate anything in CI. It's also useful for debugging.

CYPRESS_PROJECT_ID=AAA
CYPRESS_RECORD_KEY=BBB
GITHUB_ACTIONS=true
GITHUB_RUN_ID=111
GITHUB_RUN_ATTEMPT=1
node verifyUICoverageResults.js

The Results API will then look for the Cypress Cloud run that matches this run ID. If there is more than one Cypress Cloud run found for that GitHub Actions Run, you can pass run tags to narrow down to one run's report.

Supported CI Provider Overview

Each CI provider has a unique combination of components, patterns, and environment variables that must be interpreted by this module.

GitHub Actions

Reference: https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions

Essential environment variables

  • GITHUB_ACTIONS - Presence identifies the environment as a GitHub Actions environment.
  • GITHUB_RUN_ID - Value uniquely identifies a GitHub Actions workflow instance. Value does not change as jobs in the workflow are re-executed.
  • GITHUB_RUN_ATTEMPT - Value identifies the workflow instance's attempt index. Value is incremented each time jobs are re-executed.

Full environment variable reference: https://docs.github.com/en/actions/learn-github-actions/variables#default-environment-variables

Prerequisites

  1. The run to validate and this module's validation script are being executed within the same workflow.
  2. The module script is always executed after the run to validate has been created. This can be achieved by either:
    1. Executing the module script in a separate job that is dependent upon the job that records the run (using the needs: [job-name] option in the config), or
    2. Executing the module script in serial with the cypress recording in the same job.
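As a sketch of option 1 for GitHub Actions (job names, paths, and the runner image are illustrative, not prescribed by the module), a dependent verification job could look like:

```yaml
env:
  CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}

jobs:
  run-cypress:
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/checkout@v4
      - run: npm install
      - run: npx cypress run --record
  verify-ui-coverage:
    # Runs only after the recording job completes.
    needs: [run-cypress]
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/checkout@v4
      - run: npm install --force https://cdn.cypress.io/extract-cloud-results/v1/extract-cloud-results.tgz
      - run: node ./scripts/verifyUICoverageResults.js
```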

GitLab Pipelines

Reference: https://docs.gitlab.com/ee/ci/pipelines/

Essential environment variables

  • GITLAB_CI - Presence identifies the environment as a GitLab CI environment.
  • CI_PIPELINE_ID - Value uniquely identifies a GitLab pipeline workflow. This value does not change as jobs in the pipeline are retried.
  • CI_JOB_NAME - Value uniquely identifies a single job name within a pipeline, e.g. run-e2e.
  • CI_JOB_ID - Value uniquely identifies an execution instance of a job. This value will change each time a job is executed or re-executed.

Full environment variable reference: https://docs.gitlab.com/ee/ci/variables/predefined_variables.html

Prerequisites

  1. The run to validate and this module's validation script are being executed within the same pipeline.
  2. The module script is always executed after the run to validate has been created. This can be achieved by:
    1. Executing the module script in a separate job that is dependent upon the job that records the run (using the needs: [job-name] option in the config), or
    2. Executing the module script in a separate job that is executed in a lower stage than the job that records the Cypress run, or
    3. Executing the module script in serial with the cypress recording in the same job.

Jenkins

Reference: https://www.jenkins.io/doc/

Jenkins is heavily customizable through the usage of plugins, which limits the amount of assumptions we can make about available environment variables and overall behavior.

We have implemented Jenkins support within this module using the broadest set of available default values. For the purposes of this documentation, though, we will discuss terms related to Jenkins Pipeline support: https://www.jenkins.io/doc/book/pipeline/getting-started/

Essential environment variables

  • JENKINS_HOME - Presence identifies the environment as a Jenkins environment
  • BUILD_URL - Value uniquely identifies a Jenkins job execution, including name and id characteristics.

Full environment variable reference: https://www.jenkins.io/doc/book/pipeline/jenkinsfile/#using-environment-variables

Prerequisites

  1. The run to validate and this module's validation script are being executed within the same job.
  2. The module script is always executed after the run to validate has been created. This can be achieved by executing the module script in serial with the cypress recording in the same job.

Azure

Reference: https://learn.microsoft.com/en-us/azure/devops/pipelines/get-started/key-pipelines-concepts?view=azure-devops

Note: Cypress v13.13.1 is the earliest Cypress release that records the environment variables necessary for this module to identify runs in an Azure environment. Previous Cypress versions are not supported in Azure pipelines.

Essential environment variables

  • TF_BUILD and AZURE_HTTP_USER_AGENT - Combined presence identifies the environment as an Azure pipeline environment.
  • SYSTEM_PLANID - Value uniquely identifies a pipeline run. Value does not change as jobs within the pipeline are retried from failure.
  • SYSTEM_JOBID - Value uniquely identifies a job execution. Value changes each time a job is retried from failure, in conjunction with SYSTEM_JOBATTEMPT being incremented.
  • SYSTEM_JOBATTEMPT - Value identifies the pipeline's shared attempt index. Value is incremented when jobs are retried from failure.

Full environment variable reference: https://learn.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml

Prerequisites

  1. The run to validate and this module's validation script are being executed within the same pipeline run (i.e. they share a SYSTEM_PLANID value).
  2. The module script is always executed after the run to validate has been created. This can be achieved by either:
    1. Executing the module script in a separate job that is dependent upon the job that records the run (using the dependsOn: [job-name] option in the config), or
    2. Executing the module script in serial with the Cypress recording in the same job.

CircleCI

Reference: https://circleci.com/docs/about-circleci/

Note: Cypress v13.13.1 is the earliest Cypress release that records the environment variables necessary for this module to identify runs in a CircleCI environment. Previous Cypress versions are not supported in CircleCI pipelines.

Essential environment variables

  • CIRCLECI - Presence identifies the environment as a CircleCI environment
  • CIRCLE_PIPELINE_ID - Value uniquely identifies a CircleCI pipeline, created on push or manually triggered through the UI. This value does not change as workflows within the pipeline are re-executed.
  • CIRCLE_WORKFLOW_ID - Value uniquely identifies an instance of a workflow's execution within a pipeline. This value will be updated upon each workflow execution; in other words, retrying a workflow from failure from the Circle UI will create a new workflow with a new CIRCLE_WORKFLOW_ID value available to the jobs executed within it.
  • CIRCLE_WORKFLOW_JOB_ID - Value uniquely identifies an execution instance of a named job within a workflow instance.

Full environment variable reference: https://circleci.com/docs/variables/

Prerequisites

  1. The run to validate and this module's validation script are being executed within the same pipeline and workflow.
  2. The module script is always executed after the run to validate has been created. This can be achieved by:
    1. Executing the module script in a separate job that is dependent upon the job that records the run (using the requires: [job-name] option in the config), or
    2. Executing the module script in serial with the cypress recording in the same job.

AWS CodeBuild

Reference: https://docs.aws.amazon.com/codebuild/

Essential environment variables

  • CODEBUILD_BUILD_ID - Presence identifies the environment as an AWS CodeBuild environment. Value uniquely identifies a build.

Full environment variable reference: https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html

Prerequisites

  1. The run to validate and this module's validation script are being executed within the same build.
  2. The module script is always executed after the run to validate has been created. This can be achieved by executing the module script in serial with the cypress recording in the same build.

Drone

Reference: https://docs.drone.io/pipeline/overview/

Essential environment variables

  • DRONE - Presence identifies the environment as a Drone environment.
  • DRONE_BUILD_NUMBER - Value uniquely identifies a Drone build.

Full environment variable reference: https://docs.drone.io/pipeline/environment/reference/

Prerequisites

  1. The run to validate and this module's validation script are being executed within the same build.
  2. The module script is always executed after the run to validate has been created. This can be achieved by executing the module script in serial with the cypress recording in the same build.
