Optimize Behat run time on GitHub Actions

Nobody likes to wait, period. But waiting for a pipeline to finish just to see whether a Behat test failed is especially painful.

GitHub Actions' run time is always a little hard to predict: unless you run your workflows on dedicated (self-hosted) runners, you are effectively at the mercy of GitHub's VM allocation.

But there are still some things we can do to speed up our Behat suite.

  • Parallelize the suite.
  • Run only some suites (based on changed paths, for example).
  • Use a DB image preloaded with base data.

Running the tests in parallel is the first thing you can do to optimize run time. However, it can be complicated if, for some reason, your tests depend on previous tests or steps.

Once that has been taken care of, you can use any wrapper that lets you run as many processes as needed.
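If your wrapper is GitHub Actions itself, one simple way to fan the work out is a job matrix. Here is a minimal sketch, assuming your behat.yml defines suites named api, admin, and checkout (hypothetical names):

```yaml
# Each matrix entry becomes its own job, so the three suites run
# concurrently on separate runners.
jobs:
  behat:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        suite: [api, admin, checkout]
    steps:
      - uses: actions/checkout@v4
      # Setup steps (PHP, composer install, etc.) omitted for brevity.
      - name: Run one Behat suite
        run: vendor/bin/behat --suite=${{ matrix.suite }}
```

The trade-off is that every job pays the setup cost (checkout, dependency install), so this pays off once a suite's run time dominates its setup time.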

The second option is to run only the tests that you need. This is very nice, especially if you use dorny/paths-filter, which can identify which files were changed so you run only the necessary suite.
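As a sketch of how that can look, the workflow below runs the admin suite only when files under src/Admin/ changed. The path pattern and suite name are examples; adjust them to your project layout:

```yaml
jobs:
  behat:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # dorny/paths-filter compares the changed files against the
      # declared filters and exposes one true/false output per filter.
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            admin:
              - 'src/Admin/**'
      # Only run the suite when matching files actually changed.
      - name: Run admin suite
        if: steps.filter.outputs.admin == 'true'
        run: vendor/bin/behat --suite=admin
```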

The third option is the one where we saw the biggest impact. In our PHP application, the database needed to be "migrated" to have the schema and minimum data before we could run the test suite.

If you want to take advantage of this improvement here is what you need to do:

  • Create a database Docker image from your DB engine (for example, MariaDB).
  • In that same image, load the base DB dump needed to run the tests.
  • Use that image for your DB container on GitHub Actions.
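Once the image is built and published, the job can pull it as its database service container, so Behat starts against an already-migrated database. A minimal sketch, assuming the image is published as ghcr.io/yourorg/test-db (hypothetical name) and exposes MariaDB on port 3306:

```yaml
jobs:
  behat:
    runs-on: ubuntu-latest
    services:
      db:
        # Prepopulated image: schema + base data are already inside,
        # so no migration step runs in the pipeline.
        image: ghcr.io/yourorg/test-db:latest
        ports:
          - 3306:3306
    steps:
      - uses: actions/checkout@v4
      # Setup steps omitted; point your app's DB config at 127.0.0.1:3306.
      - run: vendor/bin/behat
```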

A reference on how to create the Docker Image can be found here: lindycoder/prepopulated-mysql-container-example
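The linked example targets MySQL, but the core idea can be outlined for MariaDB as well. This is an untested sketch, not a drop-in Dockerfile (see the linked repository for a complete, working version); the essential trick is that the base image declares /var/lib/mysql as a VOLUME, so data written there during `docker build` is discarded, and the data directory has to be moved:

```dockerfile
FROM mariadb:10.11

COPY base-dump.sql /tmp/base-dump.sql

# Initialize a NEW datadir outside the VOLUME path, start the server
# briefly during the build, load the dump, and shut down. The loaded
# data is then committed into the image layer. Exact commands and
# timings vary by MariaDB version; treat this as an outline.
RUN mkdir -p /var/lib/mysql-baked \
 && mariadb-install-db --user=mysql --datadir=/var/lib/mysql-baked \
 && (mariadbd --user=mysql --datadir=/var/lib/mysql-baked --skip-networking &) \
 && sleep 10 \
 && mariadb -e "CREATE DATABASE app" \
 && mariadb app < /tmp/base-dump.sql \
 && mariadb-admin shutdown

# Containers start from the baked datadir: no init, no migration.
CMD ["mariadbd", "--user=mysql", "--datadir=/var/lib/mysql-baked"]
```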

That is all.