
Writing tests is a key part of Python development, since tests let programmers check the validity of their code. They are one of the most effective ways to prove that code behaves as intended and to reduce the chance that future changes will break existing functionality.

And this is where Pytest comes in. This testing framework makes it easy for programmers to write scalable test cases for UIs and databases, though Pytest is most often used to write tests for APIs.

In this comprehensive guide, we walk you through installing Pytest, its powerful advantages, and using it to write tests on your machine.

Installing Pytest

Like most Python packages, Pytest is distributed on PyPI, and you can install it into a virtual environment with pip. If you're on a Windows computer, run the following in Windows PowerShell:

PS> python -m venv venv
PS> .\venv\Scripts\activate
(venv) PS> python -m pip install pytest


On the other hand, if you're running macOS or have a Linux machine, run this on the terminal:

$ python -m venv venv
$ source venv/bin/activate
(venv) $ python -m pip install pytest


Advantages of Pytest

If you're familiar with writing unit tests in Python, you may have used the built-in unittest module. It offers a good foundation for building a test suite, but it is not without shortcomings.

Numerous third-party testing frameworks attempt to overcome the disadvantages of unittest, but Pytest stands out as the most effective and hence is the most popular solution. Pytest has several features and can be coupled with a rich assortment of plugins to make testing easier.

More specifically, with Pytest, you can expect to perform common tasks without writing much code and complete advanced tasks faster using the built-in time-saving commands. 

Further, Pytest can run your existing tests without needing an additional plugin. Let's discuss the advantages in more detail: 

#1 Less Repeated Code 

The majority of functional tests use the Arrange-Act-Assert model, wherein:

  1. The conditions for the test are arranged,
  2. A function or method performs some action, and
  3. The code asserts whether a specific end condition is true.

Testing frameworks typically hook into a test's assertions so that they can offer helpful information when an assertion fails. For instance, the unittest module offers several assertion utilities; however, even the smallest tests tend to require a fair amount of repeated code.

If you were to write a test suite to check if unittest is working correctly, ideally, the suite would have one test that always passes and one that always fails. However, to write this test suite, you would need to do the following:

  1. Import the TestCase class from unittest.
  2. Write a subclass of TestCase, let's call it TryTest.
  3. Write a method in TryTest for every test.
  4. Use one of the self.assert* methods from unittest.TestCase to make assertions.

These four tasks are the minimum you need to do to create any test. Writing tests this way is inefficient since it involves writing the same code several times. 

With Pytest, the workflow is much simpler since you are free to use normal functions and also the built-in assert keyword. The same test would look like this when written using Pytest: 

# test_with_pytest.py

def test_always_passes():
    assert True

def test_always_fails():
    assert False


It's that simple – no need to import any modules or use any classes. Writing a function with the prefix "test_" is all it takes. Since Pytest lets you use the assert keyword, there is no need to learn all the self.assert* methods present in unittest.

Pytest will check any expression that you expect to evaluate to True. Besides containing less repeated code, your test suite also becomes more descriptive and easier to read.

#2 Appealing Output

One of the nice things about Pytest is that you can run the pytest command from your project's top-level folder. As you may have figured out by now, Pytest presents test results differently than unittest. The report will show you:

  1. The system state, including details of the Python version, Pytest, and other plugins;
  2. The directory in which Pytest searched for the tests and configuration; and
  3. The number of tests discovered by the runner.

These details appear in the first section of the report. The next section displays the status of every test next to its name: a dot means the test passed, an F means it failed, and an E means the test raised an unexpected exception.

A detailed breakdown always accompanies failed tests, and this additional output makes debugging more manageable. In the final sections of the report, there is an overall status of the test suite.

#3 Simple to Learn

If you're familiar with the assert keyword, there is nothing new for you to learn to use Pytest. Here are some example assertions to show the different ways you can test with Pytest:

# test_examples_of_assertion.py

def test_uppercase():
    assert "example text".upper() == "EXAMPLE TEXT"

def test_reversed():
    assert list(reversed([5, 6, 7, 8])) == [8, 7, 6, 5]

def test_some_primes():
    assert 37 in {
        num
        for num in range(2, 50)
        if not any(num % div == 0 for div in range(2, num))
    }


The tests in the suite above look like regular Python functions, making Pytest easy to learn and removing the need for programmers to learn any new constructs.

Note how the tests are short and self-contained – this is typical of Pytest tests. You can expect to see descriptive function names with relatively little happening inside each one. This way, Pytest keeps the tests isolated; if something breaks, the programmers know exactly where to look for the problem.

#4 More Manageable State and Dependencies 

Most tests you write with Pytest will depend on test data or on test doubles that mock the objects your code is likely to encounter, such as JSON payloads and dictionaries.

On the other hand, when using unittest, programmers tend to extract the dependencies into the .setUp() and .tearDown() methods. This way, every test in the class can use the dependencies. 
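As a sketch of that pattern (the class and data names here are made up for illustration), a unittest class might extract a shared dependency like this:

```python
# test_with_setup.py
from unittest import TestCase

class PeopleTest(TestCase):
    def setUp(self):
        # Every test in the class implicitly depends on this data.
        self.people = [{"given_name": "Mia", "title": "Software Developer"}]

    def test_people_count(self):
        # Nothing in this test's signature reveals its dependency on setUp().
        self.assertEqual(len(self.people), 1)
```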

While using these methods is not wrong, as test classes grow larger, the programmer may end up making a test's dependencies entirely implicit. In other words, when you look at an isolated test, you may not see that it depends on something else.

These implicit dependencies can tangle the tests and make it challenging to make sense of them. The idea behind using tests is to make the code more understandable, making the use of these methods counterproductive in the long run.

You do not have to worry about this when using Pytest, since fixtures lead you to explicit, reusable dependency declarations.

Fixtures in Pytest are functions that enable you to create data, test doubles, and initialize system states for the test suite. If you decide to use a fixture, you explicitly pass the fixture function as an argument to the test function:

# fixture_example.py

import pytest

@pytest.fixture
def example_fixture():
    return 1

def testing_using_fixture(example_fixture):
    assert example_fixture == 1


Using the fixture function this way keeps the dependencies up front. When you glance at the test function, it is clear that it depends on a fixture right away. You do not need to check the file for fixture definitions.

---

Note: Putting your tests in a separate folder called tests at the root of your project folder is considered best practice.

---

Pytest provides a lot of flexibility when it comes to fixtures, since fixtures can use other fixtures by simply declaring them as dependencies. Because of this, fixtures stay modular as you continue to use them; in turn, you will need to manage your dependencies carefully as your test suite becomes larger.
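As a minimal sketch of that modularity (the fixture names here are invented for illustration), one fixture can declare another as a dependency simply by accepting it as an argument:

```python
# test_fixture_dependencies.py
import pytest

@pytest.fixture
def base_url():
    return "https://example.com"

@pytest.fixture
def endpoint_url(base_url):
    # This fixture explicitly depends on the base_url fixture.
    return base_url + "/api/v1"

def test_endpoint_url(endpoint_url):
    assert endpoint_url == "https://example.com/api/v1"
```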

We will discuss fixtures in more detail later in this post.

#5 Filtering Tests is Easy 

Test suites are bound to grow in size, and programmers often want to run just a handful of tests on a feature and save the full suite for later. Pytest offers a few ways to do this:

  • Directory scoping: By default, Pytest runs only the tests in (or under) the current directory. Using this feature to run only the required tests is called directory scoping.
  • Name-based filtering: Pytest allows programmers to run only the tests whose fully qualified names are the same as a specific expression. Name-based filtering is easy to accomplish with the -k parameter.
  • Test categorization: Including or excluding tests from defined categories is easy with the -m parameter. This is a potent method since Pytest enables programmers to create custom labels called "marks" for tests. Individual tests may have several labels, allowing programmers granular control over which tests will run. 

#6 Test Parameterization

It's common for programmers to use Pytest to test functions that process data; therefore, programmers often write many similar tests. These tests may only be different in the input or output of the tested code. 

For this reason, programmers end up duplicating test code, which can sometimes obscure the behavior of the code they are attempting to test.

If you're familiar with unittest, you may know that there is a way to collect many tests into one. But the results of those tests do not appear as individual tests in result reports. This means that if all tests except one pass, the entire group still returns a single failed result.

But this is not the case in Pytest, which has parametrization built in, allowing each test case to pass or fail independently.
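As a brief sketch of what this looks like (the test and its parameters are illustrative, not from the article), the @pytest.mark.parametrize decorator turns one test function into several reported test cases:

```python
# test_parametrize_example.py
import pytest

@pytest.mark.parametrize("value, expected", [
    ("example", "EXAMPLE"),   # each tuple becomes its own test case
    ("PyTest", "PYTEST"),
    ("", ""),
])
def test_uppercase(value, expected):
    assert value.upper() == expected
```

Each parameter set is reported as a separate test, so one failing case does not mask the others.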

#7 Plugin-Based Architecture

The customizability of Pytest makes it the go-to framework for programmers who want to test their code. Adding new features is easy, since virtually every part of its operation can be changed or extended. No wonder there is a huge ecosystem of helpful plugins available for Pytest.

While some plugins only work with frameworks such as Django, most of the available plugins can be used with virtually all test suites. 

Using Fixtures to Manage State and Dependencies

As mentioned earlier, Pytest fixtures allow you to supply data, test doubles, or setup state to your tests. Fixture functions can return a wide range of values. Every test that depends on a fixture must be explicitly passed that fixture as an argument.

When to Use Fixtures

One of the best ways to understand when to use fixtures is to simulate a test-driven development workflow. 

Let's assume you need to write a function that processes data it receives from an API endpoint. The data comprises a list of people, with each entry having a name, family name, and job position. 

This function needs to output a list of strings with the full names, followed by a colon and their title. Here's how you'd approach this:

# format_data.py

def reformat_data(people):
    ...  # Instructions to implement


Since we're simulating a test-driven development workflow, the first order of business is to write a test for it. Here's one way to do this:

# test_format_data.py

def test_reformat_data():
    people = [
        {
            "given_name": "Mia",
            "family_name": "Alice",
            "title": "Software Developer",
        },
        {
            "given_name": "Arun",
            "family_name": "Niketa",
            "title": "HR Head",
        },
    ]

    assert reformat_data(people) == [
        "Mia Alice: Software Developer",
        "Arun Niketa: HR Head",
    ]
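
One possible implementation that satisfies this test (a sketch only; the article leaves the body to the reader) is a simple list comprehension:

```python
# format_data.py

def reformat_data(people):
    # Build "Given Family: Title" strings, one per person.
    return [
        f"{person['given_name']} {person['family_name']}: {person['title']}"
        for person in people
    ]
```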


Let's take things a step further and assume that you need to write another function to process the data and output it as comma-separated values for use in spreadsheets:

# format_data.py

def reformat_data(people):
    ...  # Instructions to implement 

def format_data_for_excel(people):
    ... # Instructions to implement


Your to-do list is growing, but with test-driven development, you can easily plan ahead. The test for this new function would look similar to the test for reformat_data():

# test_format_data.py

def test_reformat_data():
    ...  # Same as before

def test_format_data_for_excel():
    people = [
        {
            "given_name": "Mia",
            "family_name": "Alice",
            "title": "Software Developer",
        },
        {
            "given_name": "Arun",
            "family_name": "Niketa",
            "title": "HR Head",
        },
    ]

    assert format_data_for_excel(people) == """given,family,title
Mia,Alice,Software Developer
Arun,Niketa,HR Head
"""
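
As a sketch of one possible implementation (again, the article leaves the body unimplemented), format_data_for_excel() could build a header row followed by one comma-separated row per person:

```python
# format_data.py

def format_data_for_excel(people):
    # Header row, then one CSV row per person, newline-terminated.
    lines = ["given,family,title"]
    for person in people:
        lines.append(
            f"{person['given_name']},{person['family_name']},{person['title']}"
        )
    return "\n".join(lines) + "\n"
```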


As you can see, both tests must define the people variable again; repeating these lines of code takes time and effort.

If several tests use the same test data, fixtures can help. With a fixture, you can move the repeated data into a single function and decorate it with @pytest.fixture to mark the function as a fixture, like so:

# test_format_data.py

import pytest

@pytest.fixture
def example_people_data():
    return [
        {
            "given_name": "Mia",
            "family_name": "Alice",
            "title": "Software Developer",
        },
        {
            "given_name": "Arun",
            "family_name": "Niketa",
            "title": "HR Head",
        },
    ]

# More code


Using the fixture is uncomplicated – you only need to add the fixture's name as an argument to your test. Bear in mind that you are not the one calling the fixture function; Pytest takes care of that. Inside the test, the fixture name refers to the fixture function's return value:

# test_format_data.py

# ...

def test_reformat_data(example_people_data):
    assert reformat_data(example_people_data) == [
        "Mia Alice: Software Developer",
        "Arun Niketa: HR Head",
    ]

def test_format_data_for_excel(example_people_data):
    assert format_data_for_excel(example_people_data) == """given,family,title
Mia,Alice,Software Developer
Arun,Niketa,HR Head
"""


Now the tests are much smaller while still having a clear path back to the test data. Giving your fixture a distinctive name makes it quick to identify and reuse when you add more tests later.

When Not to Use Fixtures

Fixtures make it convenient to extract objects and data used across several tests. However, they aren't always the right solution, for instance when tests require slight variations in the data.

It is just as easy to litter a test suite with fixtures as it is to litter it with objects and data. The consequences of using fixtures can sometimes be worse since fixtures introduce an additional layer of indirection to the test suite.

Learning to use any abstractions takes practice, and while using fixtures is easy, it will take some time for you to find the right number to use. 

Regardless, you can expect fixtures to become an essential part of your test suites. As projects grow bigger, so does the challenge of scaling. Thankfully, Pytest has features to help you manage the complexity that comes with growth.

Using Fixtures at Scale

As you extract more fixtures from your tests, you may notice that some fixtures could benefit from further abstraction.

Fixtures in Pytest are modular, meaning you can import them into other modules, and fixtures can depend on, and import, other fixtures.

Such massive flexibility allows you to create a suitable fixture abstraction according to the use case. 

For instance, fixtures in different modules may have a common dependency. In such cases, you can move the fixtures from test modules to general modules. This will allow you to import them into the test modules that need them.

Programmers may want to make a fixture available across a project without needing to import it, and this can be achieved with the conftest.py configuration module.

Pytest automatically looks for this module in every directory. If you add the general-purpose fixtures to this module, you can use those fixtures throughout the parent directory and subdirectories. 

You can also use conftest.py and fixtures to guard access to resources. For instance, if you write a test suite for code that makes API calls, you will want to ensure the suite never makes any real network calls, even if a programmer accidentally writes a test that does.

Pytest comes with a monkeypatch fixture, enabling you to replace behaviors and values, like so: 

# conftest.py

import pytest
import requests

@pytest.fixture(autouse=True)
def disable_network_calls(monkeypatch):
    def stunted_get():
        raise RuntimeError("Network access unavailable in testing")
    monkeypatch.setattr(requests, "get", lambda *args, **kwargs: stunted_get())


Placing disable_network_calls() in the conftest.py module mentioned above and setting the autouse option to True ensures that network calls are disabled throughout the suite.

As your test suite grows, you become more confident in making changes, since the tests catch unexpected breakage. However, the suite may also take longer to run as it grows.

Additionally, a change to a core behavior may trickle down and break many tests at once. In such cases, it's best to limit the test runner to only the tests in specific categories. This is what we will discuss next.

Categorizing Tests

Large test suites can be detrimental when you need to iterate quickly on features, so being able to avoid running the full suite saves time. As mentioned earlier, Pytest runs the tests in the current working directory by default, but markers are also an excellent solution.

In Pytest, you can define categories for your tests and supply options for including or excluding categories when the suite runs. An individual test can be in several categories.

Marking tests is an excellent way to categorize them by dependency or subsystem. For instance, if some tests need access to a database, you could create a @pytest.mark.database_access mark and apply it to them.
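For illustration, applying such a custom mark is a one-line decorator (the test body below is a made-up placeholder, not real database code):

```python
# test_database.py
import pytest

@pytest.mark.database_access
def test_fetch_user():
    # Placeholder body; a real test would query the database.
    user = {"id": 1, "name": "Mia"}
    assert user["name"] == "Mia"
```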

Additionally, adding the --strict-markers flag to the pytest command ensures that all marks used in your tests are registered in the pytest.ini config file. Pytest will refuse to run the tests until you explicitly register any unknown marks.
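A minimal pytest.ini registering such a mark might look like this (the description text is illustrative):

```ini
# pytest.ini
[pytest]
markers =
    database_access: marks tests that need a database
```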

If you only want to run the tests that need database access, you can use the pytest -m database_access command. However, if you want to run all tests except the ones that need database access, run pytest -m "not database_access".

You can pair this with an autouse fixture to limit database access to tests marked as needing it. Some plugins extend the marking feature with custom guards; for example, the pytest-django plugin provides a django_db mark, and tests without the mark cannot access the database.

When a marked test attempts to access the database, Django creates a test database. Requiring the django_db mark means you state the dependency explicitly.

You can also run tests that don't require a database much faster since the command you run will prevent the database creation in the first place. While the time savings may not be noticeable in smaller test suites, in larger ones, it can save you several minutes.

Some of the marks that Pytest includes by default are skip, skipif, xfail, and parametrize. To see all the marks that Pytest ships with, run pytest --markers.
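As a quick illustration of one built-in mark (the version threshold here is arbitrary), skipif skips a test when a condition holds:

```python
# test_skipif_example.py
import sys

import pytest

@pytest.mark.skipif(sys.version_info < (3, 8), reason="requires Python 3.8+")
def test_walrus_supported():
    # Runs only on Python 3.8 or newer, where the walrus operator exists.
    assert (n := 4) == 4
```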

Conclusion

Pytest boasts productivity features that allow programmers to optimize the time and effort required to write and run tests. Furthermore, the flexible plugin system allows programmers to expand the functionalities of Pytest beyond the basics.

Whether a programmer is dealing with a massive legacy unittest suite or beginning a fresh project, Pytest can come in handy. And with this guide, you're ready to use it.