7 pytest Features and Plugins That Will Save You Tons of Time

These pytest tips will make your life easier: fail fast by stopping after the first failure, rerun only the last failed tests, and fix flaky tests with pytest-rerunfailures.

pytest is a powerful testing framework.

When used correctly, it makes tests concise, easy to follow and maintain. What few people know is that pytest comes with a set of features that can speed up your development process significantly.

In this tutorial, we'll learn the best pytest features and plugins to make your workflow faster and more seamless. They're simple, easy to use, and you can start applying them right away.

By the end of this guide, you'll have learned:

- how to stop on the first failure and fail fast
- how to rerun only the last failed tests
- how to rerun the whole session, starting with the last failed tests
- how to retry flaky tests with pytest-rerunfailures
- how to display the local variables of a failed test
- how to run only a subset of tests
- how to run tests in parallel with pytest-xdist

Let's go!

How to stop on the first failure and fail fast

Running the full test suite of a big project may take a long time.

Whether you’re executing them locally or on a CI server, it’s always frustrating to see a test fail after waiting patiently. On some occasions, you might want to abort the entire session after the first failure so you can fix the broken test immediately. Fortunately, pytest ships with a convenient CLI option for exactly that: -x / --exitfirst. (If stopping on the very first failure is too strict, the related --maxfail=NUM option aborts after NUM failures instead.)

$ pytest -x tests/

How to rerun only the last failed tests

When developing locally, you might prefer to run all the tests before pushing to your repo.

If you’re working on a small project with just a few tests, re-running all of them is fine. However, if the full test suite takes minutes to run, you’ll probably want to execute only the ones that failed. pytest supports this via the --lf or --last-failed option (it remembers failures between runs in its .pytest_cache directory). This way you can save precious time and iterate much more quickly!

$ pytest --lf tests/

How to rerun the whole test session (starting with last failed tests first)

Analogous to the preceding command, it might be helpful to re-run the whole suite but start with the tests that failed last time. To accomplish that, use the --ff or --failed-first flag.

$ pytest --ff tests/

How to use pytest-rerunfailures to fix flaky tests and retry on failure

One of the most disheartening situations is to see all tests passing locally only to fail on the CI server.

Failures like these can have several causes, but most often they’re the result of a “flaky” test: one that fails intermittently, in a non-deterministic manner. Usually, re-running it is enough to make it pass. The problem is that with a long-running test suite, you need to re-trigger the CI step and wait several more minutes. This is a huge time sink, and fortunately it can be avoided.

So, ideally, we want to automatically re-run a “flaky” test when it fails. By doing so, we increase the chance of it passing and avoid failing the entire CI step.

The best pytest plugin for that is pytest-rerunfailures. It re-runs failing tests as many times as we ask, smoothing over intermittent failures.

The simplest way to use it is to pass the --reruns option with the maximum number of times you’d like failing tests to be re-run.

$ pytest --reruns 5
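Conceptually, the plugin wraps each test in a retry loop. Here's a minimal plain-Python sketch of the idea (run_with_reruns and flaky_test are illustrative, not part of the plugin's API):

```python
def run_with_reruns(test_fn, reruns=5):
    """Run test_fn, retrying up to `reruns` extra times on failure."""
    last_exc = None
    for attempt in range(reruns + 1):
        try:
            test_fn()
            return attempt  # number of retries it took to pass
        except AssertionError as exc:
            last_exc = exc
    raise last_exc  # still failing after all reruns


# Simulate a flaky test that only passes on its third invocation.
calls = {"n": 0}

def flaky_test():
    calls["n"] += 1
    assert calls["n"] >= 3

print(run_with_reruns(flaky_test))  # prints 2: two retries before passing
```

The real plugin does the same thing at the test-runner level, so every retry shows up in the report instead of silently looping.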

If you know ahead of time that an individual test is flaky, you can mark it with pytest.mark.flaky so only that test is retried:

import pytest

@pytest.mark.flaky(reruns=5)
def test_flaky():
    assert get_result() is True

How to display the local variables of a failed test

We’ve learned how essential it is to iterate faster and how it can save you precious time.

In the same vein, it’s crucial to pick up the hints that help us debug failed tests. By using the --showlocals flag, or simply -l, we can see the values of all local variables in tracebacks.

$ pytest tests/test_variables.py -l
================ test session starts ================
tests/test_variables.py FF                    [100%]

================ FAILURES ================
________________ test_local_variables[name] ________________

key = 'name'

    @pytest.mark.parametrize("key", ["name", "age"])
    def test_local_variables(key):
        result = person_info()
>       assert key in result
E       AssertionError: assert 'name' in {'height': 180}

key        = 'name'
result     = {'height': 180}

tests/test_variables.py:11: AssertionError
________________ test_local_variables[age] ________________

key = 'age'

    @pytest.mark.parametrize("key", ["name", "age"])
    def test_local_variables(key):
        result = person_info()
>       assert key in result
E       AssertionError: assert 'age' in {'height': 180}

key        = 'age'
result     = {'height': 180}

tests/test_variables.py:11: AssertionError
================ short test summary info ================
FAILED tests/test_variables.py::test_local_variables[name] - AssertionError: assert 'name' in {'height': 180}
FAILED tests/test_variables.py::test_local_variables[age] - AssertionError: assert 'age' in {'height': 180}
================ 2 failed in 0.05s ================

How to run only a subset of tests

Sometimes you need to run just a subset of tests.

One way of doing that is to run all the test cases in a single file, e.g. pytest tests/test_variables.py. Although this is better than running everything, we can still improve it. With the -k option, you can pass a keyword expression that pytest uses to select which tests to execute.

# tests/test_variables.py
def test_asdict():
    ...

def test_astuple():
    ...

def test_aslist():
    ...

Say you need to run only the first two tests; you can pass keywords joined by or (keyword expressions also support and and not):

$ pytest -k "asdict or astuple" tests/test_variables.py
==================================== test session starts ====================================

tests/test_variables.py ..                                                            [100%]

============================== 2 passed, 1 deselected in 0.02s ==============================
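To see the selection end to end, here's a self-contained run (it recreates the test file above in a scratch directory; the paths are illustrative):

```shell
mkdir -p scratch && cd scratch
cat > test_variables.py <<'EOF'
def test_asdict():
    ...

def test_astuple():
    ...

def test_aslist():
    ...
EOF
# "not" works too: -k "not aslist" selects the same two tests.
python -m pytest -q -k "asdict or astuple" test_variables.py
```

The deselected count in the summary is a quick sanity check that your expression matched exactly the tests you intended.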

How to Run Tests in Parallel

The more tests a project has, the longer the suite takes to run. That sounds obvious, but it’s commonly overlooked: running an extensive test suite one test after another is an enormous waste of time. The best way to speed up execution is to parallelize it and take advantage of multiple CPUs.

Sadly, pytest doesn’t ship with a parallelization feature, so we must fall back on plugins. The best one for the job is pytest-xdist.

To send your tests to multiple CPUs, use the -n or --numprocesses option.

$ pytest -n NUMCPUS

If you don’t know how many CPUs you have available, you can tell pytest-xdist to run the tests on all available CPUs with the auto value.

$ pytest -n auto
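Under the hood, pytest-xdist distributes tests across worker processes. The speedup is the same you get when spreading any independent workload over several workers; a rough sketch of the principle (using threads for brevity, whereas xdist uses separate processes):

```python
import time
from concurrent.futures import ThreadPoolExecutor


def slow_test(i):
    # Stand-in for an independent, slow test case.
    time.sleep(0.2)
    return f"test_{i} passed"


start = time.perf_counter()
# Run 4 "tests" on 4 workers: total time is ~0.2s instead of ~0.8s.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(slow_test, range(4)))
elapsed = time.perf_counter() - start

print(results)
print(f"took {elapsed:.2f}s")
```

The catch, as with xdist, is that the work items must be independent: tests that share mutable state or external resources can't be safely parallelized this way.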


A large test suite brings a lot of assurance to a project, but it also comes at a cost. Long test sessions can eat up a lot of development time and make iterations slower. By leveraging pytest's features and its plugin ecosystem, it’s possible to speed up the development process dramatically. In this tutorial, we looked at 7 tips you can adopt to improve your workflow and waste less time executing tests.


This post was originally published at https://miguendes.me