There is nothing more frustrating than pushing a one-line fix only to wait 20 minutes for the CI pipeline to tell you that a random flaky test failed. In my experience, as a project grows, the test suite is almost always the primary bottleneck. When developers start skipping tests locally because they’re ‘too slow,’ you’ve officially lost the battle for code quality.

Learning how to reduce CI/CD build time for tests isn’t just about buying larger runners; it’s about optimizing how your code is executed and how your environment is managed. I’ve spent the last year auditing pipelines for several projects, and I’ve found that most teams are leaving significant performance gains on the table.

1. Implement Test Sharding (Parallelization)

The most immediate way to drop your build time is to stop running tests sequentially. If you have 1,000 tests and one runner, you’re capped by the throughput of a single machine. By sharding, you split those tests across multiple parallel machines.

For example, if you use GitHub Actions, you can use a strategy matrix to split your suite into four parallel jobs. I highly recommend sharding tests in GitHub Actions to distribute the load evenly. Instead of one 20-minute job, you get four 5-minute jobs running concurrently.
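As a sketch, a four-way shard might look like the workflow below. It assumes a Jest suite (Jest 28+ supports the --shard flag natively); the job and script names are illustrative, not from any specific project.

```yaml
# Four parallel jobs, each running a quarter of the Jest suite
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npx jest --shard=${{ matrix.shard }}/4
```

Other runners have equivalents (pytest via plugins such as pytest-split, Playwright via its own --shard flag), so the same matrix pattern transfers.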

2. Cache Dependencies Aggressively

Re-installing 500MB of node_modules or Python packages on every single commit is a waste of time. Use your CI provider’s caching mechanism to persist the dependency folder across builds.

# Example GitHub Actions caching for npm
- uses: actions/cache@v3
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-node-

3. Only Run Affected Tests (Incremental Testing)

Why run the entire end-to-end suite when you only changed a CSS file in the footer? Tools like Nx or Turborepo allow you to analyze the dependency graph and run only the tests affected by the current change. In my current setup, this has reduced PR check times from 15 minutes to under 3 minutes for small changes.
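As a sketch, the invocations look like this (the base branch name is an assumption; adjust to your default branch):

```shell
# Nx: run only test targets affected by the diff against main
npx nx affected --target=test --base=origin/main --head=HEAD

# Turborepo: filter to packages that changed since main
npx turbo run test --filter='...[origin/main]'
```

Both tools cache task results as well, so even the tests that do run can be skipped when their inputs are unchanged.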

4. Optimize Your Docker Image Layers

If your CI starts by building a Docker image, a poorly optimized Dockerfile can add minutes to every run. Order your instructions from least-frequently changed to most-frequently changed: COPY package.json and RUN npm install should come before the final COPY . ., so the dependency layer stays cached unless the package file actually changes.
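A minimal sketch of that ordering for a Node project (base image and commands are illustrative):

```dockerfile
FROM node:20-slim
WORKDIR /app

# Dependency layer first: only rebuilds when the package files change
COPY package.json package-lock.json ./
RUN npm ci

# Source copy last: editing app code no longer busts the npm ci cache
COPY . .
CMD ["npm", "test"]
```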

5. Move Heavy Tests to a Different Stage

Not all tests are created equal. Unit tests should be lightning-fast, while integration and E2E tests are inherently slow. A continuous-testing best practice is to split your pipeline into stages: ‘Fast’ (Unit/Lint), ‘Medium’ (Integration), and ‘Slow’ (E2E/Smoke), so a broken unit test never pays for a full E2E run.
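In GitHub Actions terms, the staging is expressed with needs between jobs. A sketch, with assumed npm script names (test:unit, test:integration, test:e2e):

```yaml
# Slow jobs only start once the fast gate has passed
jobs:
  fast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run lint && npm run test:unit
  medium:
    needs: fast
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run test:integration
  slow:
    needs: medium
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run test:e2e
```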

As shown in the diagram below, failing fast in the first stage prevents the expensive slow tests from even starting if the basics are broken.

[Diagram: CI/CD pipeline stages showing Fast, Medium, and Slow test tiers]

6. Use a Faster Test Runner

If you’re still using an aging test runner, it might be time to switch. Switching from Jest to Vitest in a Vite-based project, for example, often results in a 2x-5x speed increase in test execution due to better ESM handling and faster startup times.
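If you do switch, the migration surface is usually small. As a sketch, a minimal Vitest config might look like the following (file name and options are illustrative, not from any specific project):

```typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    environment: 'node', // avoid booting jsdom unless tests actually need a DOM
    globals: true,       // expose describe/it/expect globally, as Jest does
  },
});
```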

7. Database State Management

The biggest time-sink in integration tests is often the database setup. Instead of running migrations on every test file, use a global setup script to migrate the database once per build. For faster resets, I’ve found that using database transactions (rolling back after each test) is significantly faster than truncating tables.

8. Avoid ‘Sleep’ and Use ‘Wait-For’

I see this in E2E tests constantly: await sleep(5000). If the element appears in 1 second, you’ve just wasted 4 seconds. Replace every single hard-coded sleep with a dynamic poller or a ‘wait-for-element’ assertion. Across a suite of 100 tests, removing these ‘safety buffers’ can shave minutes off your build.
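A generic poller is only a few lines; the sketch below shows the idea in Python (the wait_for name and timings are illustrative, and most E2E frameworks already ship an equivalent, e.g. Playwright’s auto-waiting assertions):

```python
import time

def wait_for(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns truthy, instead of a fixed sleep.

    Returns as soon as the condition holds, so a check that passes in 200 ms
    costs 200 ms, not a hard-coded 5 s. Raises TimeoutError if it never holds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")

# Simulated example: the "element" becomes available ~0.2 s after start
start = time.monotonic()
appears_at = start + 0.2
result = wait_for(lambda: time.monotonic() >= appears_at)
elapsed = time.monotonic() - start  # ~0.2 s, not a fixed 5 s
```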

9. Use a Ramdisk for Temp Files

If your tests involve heavy I/O (reading/writing files), the disk speed of your CI runner can be a bottleneck. If your runner allows it, move your /tmp or test output directories to a ramdisk (tmpfs). In my experience, this significantly speeds up tests that generate large amounts of temporary artifacts.
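On a Linux runner with root access (GitHub-hosted Ubuntu runners allow sudo), the setup is a couple of commands; the mount point, size, and test command below are illustrative:

```shell
# Mount a 512 MB tmpfs and point the test runner's temp dir at it
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
TMPDIR=/mnt/ramdisk npm test
```

Note that tmpfs consumes RAM, so size it well below the runner’s memory limit.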

10. Profile and Prune Your Test Suite

Some tests just get old and bloated. Periodically run your tests with a profiler to find the ‘long poles’—the 1% of tests that take 50% of the time. Often, these are tests with inefficient loops or unnecessary API calls. If a test provides low value but takes 2 minutes to run, it’s time to refactor it or move it to a nightly build.
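Most runners have this built in. For example, pytest can report the slowest test phases directly (the threshold value below is illustrative):

```shell
# Show the 10 slowest setup/call/teardown phases in the suite
pytest --durations=10
```

Jest and Vitest print per-file timings in their default output, which is usually enough to spot the long poles.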


Measuring Success

Don’t guess: measure. I recommend tracking your P95 build duration. Unlike the average, the 95th percentile captures the slow outlier builds that developers actually feel waiting on. Use a tool like Datadog CI Visibility, or simply export your GitHub Actions durations to a CSV and track the trend over time. If your P95 is trending up, it’s time to revisit the list above.
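Computing the P95 from exported durations is trivial; a sketch using the nearest-rank method (the sample numbers are invented for illustration):

```python
import math

def p95(durations):
    """95th percentile of build durations, nearest-rank method."""
    ordered = sorted(durations)
    rank = math.ceil(0.95 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# Example: twenty builds in minutes; one flaky outlier dominates the tail
builds = [8, 9, 9, 10, 10, 10, 11, 11, 11, 12, 12, 12,
          13, 13, 14, 14, 15, 16, 18, 42]
print(p95(builds))  # 18 -- the mean (~14) would hide how bad the tail is
```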