Isolating Integration Tests in Go


Integration tests are critical when an application talks to an SQL database: we have to make sure the application works as expected by testing the functions that actually run SQL statements.

A common problem here is that, when tests are not isolated, they can interfere with each other. Let me give an example.

Let’s say we have a test that inserts a row and then queries it. If a second test deletes all rows before the first one runs its query, the first test fails.
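As a rough sketch (the users table and the shared db pool are assumptions made purely for illustration), the two tests might look like this:

var db *sql.DB // shared pool, e.g. opened in TestMain

// Both tests touch the same users table through the shared pool,
// so the outcome depends on which statements run first.
func TestInsertAndQueryUser(t *testing.T) {
	t.Parallel()

	if _, err := db.Exec(`INSERT INTO users (name) VALUES ('alice')`); err != nil {
		t.Fatal(err)
	}

	var count int
	if err := db.QueryRow(`SELECT COUNT(*) FROM users`).Scan(&count); err != nil {
		t.Fatal(err)
	}
	if count == 0 {
		t.Fatal("expected at least one user") // fails if TestDeleteUsers ran in between
	}
}

func TestDeleteUsers(t *testing.T) {
	t.Parallel()

	if _, err := db.Exec(`DELETE FROM users`); err != nil {
		t.Fatal(err)
	}
}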

In other words, if there are many tests modifying the same tables, the execution order will change the results. The test suite will be non-deterministic.

This lack of isolation makes the whole test suite useless.

A naive solution to this problem would be to run the tests sequentially. But this would make them much slower.

So how do we make these tests safe to run in parallel?

Luckily, there are ways to do this by isolating each test. In this post I would like to cover two of them.

Running each test inside a transaction

One way we can do this is by running each test in a transaction. Before each test, we can set up a transaction, run our test in it, and then roll it back.

This way each test will run in its own transaction, completely isolated.

Let’s see how we can set this up.

package test

import (
	"database/sql"
	"os"
	"testing"

	// Postgres driver
	_ "github.com/jackc/pgx/v4/stdlib"
)

// SetupTX sets up a database transaction to be used in tests.
// Once the test is done, it will roll back the transaction.
func SetupTX(t *testing.T) *sql.Tx {
	t.Helper()

	dbURL := "postgres://postgres:postgres@localhost:5432/postgres?sslmode=disable"
	u := os.Getenv("DATABASE_URL")
	if u != "" {
		dbURL = u
	}

	db, err := sql.Open("pgx", dbURL)
	if err != nil {
		t.Fatalf("Failed to initialize db. Err: %s", err.Error())
	}

	tx, err := db.Begin()
	if err != nil {
		t.Fatalf("Unable to begin tx. %v", err)
	}

	t.Cleanup(func() {
		tx.Rollback()
		db.Close()
	})

	return tx
}

This helper function first determines the database URL. It defaults to the address of the local Postgres container, but we can override it with the DATABASE_URL environment variable, for example to run the tests against a remote database.

Next, it opens a database connection. Connecting to a remote database is an expensive operation, but connecting to a local one is nearly instant, so opening a new connection in each test shouldn’t be a problem.

sql.Open returns a connection pool, and it is advised to call it only once. This can be a problem here: we open a new pool in every test, and if at any point we have more concurrent tests than the Postgres server’s maximum number of connections, some tests will fail with rejected connections.

In such a scenario we could call sql.Open once in TestMain() and assign the pool to a package-level variable. If we also set a sensible limit with SetMaxOpenConns, the tests would stop failing.
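A minimal sketch of that setup, assuming the same test package; the pool size of 10 is an arbitrary example and should stay below the server’s max_connections:

package test

import (
	"database/sql"
	"os"
	"testing"

	// Postgres driver
	_ "github.com/jackc/pgx/v4/stdlib"
)

// testDB is opened once in TestMain and shared by every test in the package.
var testDB *sql.DB

func TestMain(m *testing.M) {
	dbURL := "postgres://postgres:postgres@localhost:5432/postgres?sslmode=disable"
	if u := os.Getenv("DATABASE_URL"); u != "" {
		dbURL = u
	}

	db, err := sql.Open("pgx", dbURL)
	if err != nil {
		panic(err)
	}
	// Keep the pool comfortably below the server's connection limit.
	db.SetMaxOpenConns(10)
	testDB = db

	code := m.Run()
	db.Close()
	os.Exit(code)
}

SetupTX would then call testDB.Begin() instead of opening a new pool on every call.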

Another problem with this approach is that the helper function returns an *sql.Tx, which we can’t pass where an *sql.DB is expected. We need to define an interface next to the consumers.

Something like this:

type DB interface {
	QueryRowContext(ctx context.Context, query string, args ...interface{}) *sql.Row
	QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error)
	Exec(query string, args ...interface{}) (sql.Result, error)
}

Both *sql.DB and *sql.Tx satisfy this interface.
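If you want the compiler to check this, the usual blank-identifier assertions (not part of the original example) work:

// Compile-time checks that both types implement DB.
var (
	_ DB = (*sql.DB)(nil)
	_ DB = (*sql.Tx)(nil)
)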

Next, we’ll replace sql.DB with our DB interface.

type DB interface {
	QueryRowContext(ctx context.Context, query string, args ...interface{}) *sql.Row
	QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error)
	Exec(query string, args ...interface{}) (sql.Result, error)
}

type UserRepo struct {
	db DB
}

func NewUserRepo(db DB) UserRepo {
	return UserRepo{db: db}
}

// ...

Now we can use it in our tests.

func TestSomething(t *testing.T) {
	db := test.SetupTX(t)
	// ...
}

One thing to note here is that you can’t use this helper to test functions that start their own transaction: neither the DB interface nor *sql.Tx gives you a way to begin a nested transaction.
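For example, a function like this hypothetical Transfer insists on an *sql.DB because it calls BeginTx itself, so it can’t be driven through SetupTX:

// Transfer is a hypothetical function that manages its own transaction,
// so it needs *sql.DB and cannot be tested with the tx-based helper.
func Transfer(ctx context.Context, db *sql.DB, from, to, amount int64) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback()

	if _, err := tx.ExecContext(ctx,
		`UPDATE accounts SET balance = balance - $1 WHERE id = $2`, amount, from); err != nil {
		return err
	}
	if _, err := tx.ExecContext(ctx,
		`UPDATE accounts SET balance = balance + $1 WHERE id = $2`, amount, to); err != nil {
		return err
	}
	return tx.Commit()
}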

Running each test in a separate schema

Many SQL databases support creating schemas, which let us set up separate environments within a single database. In PostgreSQL, tables are stored in the public schema by default.

We can use this feature to isolate our tests into their own schemas.

package test

import (
	"database/sql"
	"os"
	"strings"
	"testing"

	// Postgres driver
	_ "github.com/jackc/pgx/v4/stdlib"
)

// SetupDB sets up a database connection to be used in tests.
// It creates a new schema named after t.Name().
// Once the test is complete, it will drop the created schema and close the db connection.
func SetupDB(t *testing.T) *sql.DB {
	t.Helper()

	dbURL := "postgres://postgres:postgres@localhost:5433/postgres?sslmode=disable"
	u := os.Getenv("DATABASE_URL")
	if u != "" {
		dbURL = u
	}

	db, err := sql.Open("pgx", dbURL)
	if err != nil {
		t.Fatalf("db initialization failed. err: %v", err)
	}

	// Derive the schema name from the test name. Subtest names contain "/",
	// which is not valid in an unquoted identifier, so replace it.
	schemaName := strings.ToLower(strings.ReplaceAll(t.Name(), "/", "_"))

	t.Cleanup(func() {
		_, err := db.Exec("DROP SCHEMA " + schemaName + " CASCADE")
		if err != nil {
			t.Fatalf("db cleanup failed. err: %v", err)
		}
		db.Close()
	})

	// create test schema
	_, err = db.Exec("CREATE SCHEMA " + schemaName)
	if err != nil {
		t.Fatalf("schema creation failed. err: %v", err)
	}

	// search_path is a per-connection setting and db is a pool, so limit the
	// pool to a single connection before switching to the test schema.
	db.SetMaxOpenConns(1)

	// use schema
	_, err = db.Exec("SET search_path TO " + schemaName)
	if err != nil {
		t.Fatalf("error while switching to schema. err: %v", err)
	}

	return db
}

This helper connects to the database, creates a schema named after the test, and switches to it via search_path. After the test is done, it drops the schema. This example is for PostgreSQL, but it can be adapted to MySQL easily (where CREATE SCHEMA is a synonym for CREATE DATABASE).

It returns an *sql.DB, so we don’t have to define a custom interface in our consumers.
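Usage looks much the same as before. A sketch, assuming each test creates the tables it needs (the users table and the UserRepo from the earlier snippet are reused here as examples):

func TestCreateUser(t *testing.T) {
	t.Parallel()

	db := test.SetupDB(t)

	// The new schema starts out empty, so create (or migrate) the tables first.
	if _, err := db.Exec(`CREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT NOT NULL)`); err != nil {
		t.Fatalf("migration failed: %v", err)
	}

	repo := NewUserRepo(db)
	_ = repo
	// ... exercise repo against the isolated schema
}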

It may be slightly slower than running tests in transactions, since each test creates and drops a schema, but in practice the overhead is small enough not to matter.


In the end, I would argue that it doesn’t matter which one you use. The idea is to isolate integration tests and make sure the test suite is deterministic.

I hope this was helpful. If you find any mistakes or have any remarks, please let me know by dropping a comment. Any feedback would be appreciated.

Code examples can be found here

