
Testcontainers with FastAPI, using unittest and pytest

In the article unittest.TestCase Lifecycle we learnt about the life-cycle hooks that can be used to manage test context. In this article I aim to explain how to apply them in order to make testcontainers as delightful to use as it is in Java!

The Java approach

Before we dive into my approach, I think it is important to study and understand what people do in other languages and frameworks. The Spring + Testcontainers approach in Java is neat:

@SpringBootTest
@Testcontainers
class MyIntegrationTests {

    @Container
    @ServiceConnection
    static Neo4jContainer<?> neo4j = new Neo4jContainer<>("neo4j:5");

    @Test
    void myTest() {
        // ...
    }

}

In a paragraph: @Container automatically starts and stops the container, while @ServiceConnection detects the type of the annotated container and creates the needed context (Java beans) so that Spring Boot's Neo4j autoconfiguration discovers those beans and configures the driver automatically.

Confused? Let’s boil it down to the bare minimum: we throw in two annotations and everything works. Zero configuration, zero hassle. Even if you don’t understand what this means, I’m sure you’ll want it in Python too!

The Python approach

With testcontainers-python, on the other hand, we are in for a severe depression: the documentation is not very complete, and how to fit it into the existing Python testing frameworks isn’t clearly explained anywhere else, as you can see here.

So our goal here is to present some constructs to make testcontainers-python simpler to use.

Approach 1: Usage with unittest

This approach is a natural fit for unittest.TestCase and consists of creating a base class that does the hard job of starting and stopping the container - and setting up other context.

First of all, we will create a class called IntegrationTestCase:

from unittest import TestCase
from testcontainers.core.generic import DbContainer
from testcontainers.postgres import PostgresContainer

class IntegrationTestCase(TestCase):
    container: DbContainer = PostgresContainer()
    restart: bool = False

In this class, container and restart are class attributes, meaning that they can be accessed by the class itself and by all of its instances. The attribute container is the container that will be started and stopped, while restart controls whether or not we want to restart the container for each test.
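
As a quick refresher on what that means in practice, a class attribute lives on the class object itself, so the class and every instance see the same value (a throwaway example, unrelated to the test code):

class Example:
    shared = "class attribute"   # defined once, on the class

print(Example.shared)    # accessed through the class
print(Example().shared)  # accessed through an instance -> same value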

Why did I choose to use class attributes instead of instance attributes? Simply because setUpClass and tearDownClass are class methods, and we need to access these attributes from them:

# These hooks live inside IntegrationTestCase; NotFound comes from docker.errors.
@classmethod
def setUpClass(cls) -> None:
    if not cls.restart:
        cls.__setup_context__()

def setUp(self) -> None:
    if self.restart:
        self.__setup_context__()

def tearDown(self) -> None:
    if self.restart:
        self.container.stop()

@classmethod
def tearDownClass(cls) -> None:
    if not cls.restart:
        try:
            cls.container.stop()
        except NotFound:
            # The container may already be gone; nothing left to clean up
            return

The methods setUpClass/tearDownClass and setUp/tearDown mirror each other. The logic of this code block is straightforward:

  • If we want to restart the container each time a new test starts, we do it inside setUp, which runs right before the test. Conversely, we stop it as soon as the test ends (inside tearDown).
  • If we want to use the same container for all tests, we should not use setUp but rather setUpClass. It runs once, before any test. In contrast, tearDownClass runs at the end, after all tests have finished. Therefore, the container starts once and stops once (see the sketch right after this list).
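
To make the ordering concrete, here is a minimal, container-free sketch of the hook order unittest guarantees. The comments map each hook to the restart flag of IntegrationTestCase; run it with python -m unittest and the prints fire in exactly this order.

from unittest import TestCase


class HookOrderExample(TestCase):
    @classmethod
    def setUpClass(cls):
        print("setUpClass: once, before all tests")    # restart = False -> start the container here

    def setUp(self):
        print("setUp: before each test")                # restart = True -> start the container here

    def test_first(self):
        print("test_first")

    def test_second(self):
        print("test_second")

    def tearDown(self):
        print("tearDown: after each test")              # restart = True -> stop the container here

    @classmethod
    def tearDownClass(cls):
        print("tearDownClass: once, after all tests")   # restart = False -> stop the container here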

The class method __setup_context__ was created to avoid code repetition and, in my specific case, is used to create the context: starting the container, injecting the container's connection settings, creating the FastAPI test client, etc. Note that SqlAlchemyBootstrap is a custom class of mine that implements utility methods for initializing the test DB.

@classmethod
def __setup_context__(cls):
    cls.container.start()

    # Although the container provides a method for getting the connection
    # string, it doesn't work correctly with asyncpg
    postgres_conn_str = cls.container.get_connection_url().replace('postgresql+psycopg2', 'postgresql+asyncpg')
    sqlalchemy_bootstrap = SqlAlchemyBootstrap(postgres_conn_str)
    sqlalchemy_bootstrap.init_database()

    app.dependency_overrides[get_session] = sqlalchemy_bootstrap.get_session

    cls.client = TestClient(app)
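
Since SqlAlchemyBootstrap itself is not shown in this article, here is a minimal sketch of what such a helper could look like, purely as an illustration: it assumes SQLAlchemy 2.x's async API and a declarative Base imported from a hypothetical app.models module. Only the init_database and get_session names come from the snippet above; everything else is an assumption.

import asyncio
from typing import AsyncIterator

from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine

from app.models import Base  # hypothetical module holding the declarative Base


class SqlAlchemyBootstrap:
    """Sketch of a test-DB helper: create the schema and hand out sessions."""

    def __init__(self, connection_url: str):
        self.engine = create_async_engine(connection_url)
        self.session_factory = async_sessionmaker(self.engine, expire_on_commit=False)

    def init_database(self) -> None:
        # Create all tables before the tests run
        async def _create_all() -> None:
            async with self.engine.begin() as conn:
                await conn.run_sync(Base.metadata.create_all)

        asyncio.run(_create_all())

    async def get_session(self) -> AsyncIterator[AsyncSession]:
        # Same shape as the real get_session dependency: one session per request
        async with self.session_factory() as session:
            yield session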

To use the class we have just built, we only need to extend it and customize the class attributes as needed:

class UserAPIEnd2EndTestCase(IntegrationTestCase):  
    restart = True
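
For completeness, a test in that subclass could look like the sketch below. It assumes the same /users/ endpoint used in the pytest example later on, and relies on the client attribute created in __setup_context__:

class UserAPIEnd2EndTestCase(IntegrationTestCase):
    restart = True

    def test_get_all(self):
        # self.client is the TestClient created in __setup_context__
        response = self.client.get('/users/')
        self.assertEqual(response.status_code, 200)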

Approach 2: Usage with pytest

Pytest is much simpler in its approach. To write a test in pytest, we only need to do the following:

import pytest

def test_something():
    assert True is True

This is simpler and shorter when compared to unittest, but it also results in a different approach: given that pytest tests are functions (not methods), we don’t have setUp or tearDown. Therefore, we have to implement an alternative using fixtures.

What is a fixture? A fixture is a mechanism to set up whatever a test needs. In the context of pytest, a fixture is a function that provides a dependency by name.
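
A minimal, container-free example shows the mechanics: everything before the yield is setup, everything after it is teardown, and the yielded value is what gets injected into the test that asks for the fixture by name:

import pytest


@pytest.fixture
def greeting():
    print("setup")     # runs before the test
    yield "hello"      # the value injected into the test
    print("teardown")  # runs after the test finishes


def test_greeting(greeting):
    assert greeting == "hello"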

How do we set up the same example as before? Take a look at the example below:

@pytest.fixture  
def container():  
    container = PostgresContainer()  
    yield container.start()  
    container.stop()  
  
  
@pytest.fixture  
def test_client(container):  
    postgres_conn_str = container.get_connection_url().replace('postgresql+psycopg2', 'postgresql+asyncpg')  
    sqlalchemy_bootstrap = SqlAlchemyBootstrap(postgres_conn_str)  
    sqlalchemy_bootstrap.init_database()  
  
    app.dependency_overrides[get_session] = sqlalchemy_bootstrap.get_session  
  
    return TestClient(app)

We have two fixtures, container and test_client, where test_client depends on container. When we add a test, we inject test_client, like this:

def test_get_all(test_client):  
    response = test_client.get('/users/')  
    assert response.status_code == 200

When we run the test, the following will happen:

  • test_get_all is invoked but requires test_client
  • test_client is invoked but requires container
  • container has no dependencies, so it runs and yields the container
  • test_client runs and returns
  • test_get_all runs until the end
  • and… container resumes, stopping the container in the end

In this version, a fresh container is started and stopped for every test function. If we want to re-use the container across all tests, we only need a small change:

@pytest.fixture(scope='module') 
def container():  
    container = PostgresContainer()  
    yield container.start()  
    container.stop()  

Note that the parameter scope is now set to module inside the decorator. By default, the scope is set to function - meaning that the fixture runs once per test function. Setting it to module means it runs only once per test module, making the container start and stop just once.
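
And if a single container for the whole test run is enough, pytest also offers a session scope: the same fixture can be moved into a conftest.py (a hypothetical location here) so that every test module shares it:

# conftest.py
import pytest

from testcontainers.postgres import PostgresContainer


@pytest.fixture(scope='session')
def container():
    container = PostgresContainer()
    yield container.start()
    container.stop()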