When to consider Contract Testing?

Contract testing is a form of automated testing. If we wanted to place it on the testing pyramid, we would have to squeeze it in between integration testing and E2E/API testing. But what's so wrong with those alternatives that we came up with another approach? Let's start with a closer look at what contract testing is about and evaluate which cases it suits!

Contract Testing is a software testing methodology used to verify components of a system in isolation while ensuring that the services we provide align with our consumers' expectations. This approach is particularly useful in environments with multiple producers and consumers, such as microservices architectures.

The analogy often used to describe contract testing is that it's like unit testing for architecture. The comparison is apt: just as unit tests focus on specific functionality without exercising the whole application's behaviour, contract testing does not traverse the entire system; it primarily emphasizes the connectivity and compatibility between services.


TLDR: Cost and speed.

It's all about ROI. One of the advantages of working with a classic monolithic architecture, even a modular one, is the ease of working within a single solution or executable. Different parts and modules of the project don't need a network to communicate. When they do communicate, they see each other's APIs (or rather method signatures) at compile time, so desynchronising two elements is trivial to notice.

Things get complicated with microservices. They need to communicate over the network to provide business functionality, so we cannot rely on the compiler. The modules are independent and distributed: two separate applications, probably with different testing and deployment pipelines. The first thing that comes to mind is probably integration or end-to-end testing. That's the obvious and partially correct answer. It's not without reason that they have been part of the classical testing pyramid and the backbone of much of our QA activity. To be fair, they often work well in architectures built around a modular monolith surrounded by a few orbital services. So what's the issue?

It doesn't scale well with microservices. Big time. Big enough that some even go as far as calling E2E tests a scam. The most concerning part is that in heavily interconnected projects the effort has to diverge from the pyramidal shape, because E2E tests scale with the number of consumers and producers in a given architectural setup rather than with the length of the functional flow being tested.


Besides the sheer effort needed to keep E2E suites healthy, in a true microservice architecture the infrastructure cost of this approach has to be closely monitored. You obviously don't want to run those suites against your development environments, so you move to dedicated testing ones, one for each of the microservices. Remember database separation if you plan on sticking to the guidelines. Networks and clusters matter a lot too. At the end of the day, we're talking about spinning up a whole fleet of infrastructure just to test that services connect correctly. One can argue: are we still testing connectivity? Or the reliability of HTTP calls? Or rather our CI/CD and DevOps skills?

Alternative in two flavours

So let's look at the alternative in the form of Contract Testing. The main selling point, or rather the headline guarantee we are looking for, is that:

If the consumer and producer are tested independently using the same contract, it means that they can properly communicate.

We create one shared contract that defines how the services should interact, and we use it in the tests on both sides of the communication. To be specific, the contract is usually a set of request and response pairs between the consumer and the producer, most often in the form of a YAML or JSON file, though there are plenty of exceptions (the format doesn't really matter).
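To make that tangible, here is what such a file might look like. This is a hypothetical JSON sketch loosely modelled on the shape of a Pact file; the service and field names are purely illustrative:

```json
{
  "consumer": { "name": "OrderWeb" },
  "provider": { "name": "OrderApi" },
  "interactions": [
    {
      "description": "a request for a single order",
      "request": { "method": "GET", "path": "/orders/42" },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json" },
        "body": { "id": 42, "status": "PENDING" }
      }
    }
  ]
}
```

Each interaction pairs one expected request with the response the consumer relies on; both sides of the communication are tested against this same list.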

Now, if Team A changes, for example, the response of their endpoint, the change must be reflected in the contract; otherwise their own tests will fail. Since the contract has changed, the consumer's tests are updated as well, and if the change differs from the consumers' expectations or is not backwards compatible, they will fail too. This forces the teams to inform each other about changes: verbally, via a PR, or in any other fashion. The classic case goes like this: imagine we want to remove or change a field in the response. Having contracts agreed upon with all the clients, we can simply check whether the field is used or not. This gives us more flexibility in evolving our APIs.

Depending on the framework, we don't need to maintain any additional tests or stubs. The producer's tests are generated from the contract; we don't have to implement them ourselves, and they confirm that the API works as defined in the contract. On the other side, the consumer's tests use stubs that are also generated from the same contract. This ensures both sides stay synchronized in terms of communication and its tests, and it reduces the amount of code we write by hand. Finally, the contract is a form of documentation of the services' coupling. It's created and agreed upon by both parties, so it documents their agreements. Moreover, some frameworks allow generating OpenAPI documentation from the contract.
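A simplified, framework-free sketch in Python can show the mechanics: one contract drives both the consumer-side stub and the producer-side verification. The contract dict and helper names here are invented for illustration; in practice a framework like Pact generates the equivalent machinery for you:

```python
# One contract drives both sides. A toy model, not a real framework's API.
contract = {
    "request": {"method": "GET", "path": "/orders/42"},
    "response": {"status": 200, "body": {"id": 42, "status": "PENDING"}},
}

def consumer_stub(method, path):
    """Stand-in for the real producer, generated from the contract."""
    req = contract["request"]
    if (method, path) == (req["method"], req["path"]):
        return contract["response"]
    return {"status": 404, "body": None}

def verify_producer(real_response):
    """Producer-side check: does the real endpoint honour the contract?"""
    expected = contract["response"]
    return (real_response["status"] == expected["status"]
            and expected["body"].keys() <= real_response["body"].keys())

# Consumer test: the code under test talks to the stub, never to the network.
assert consumer_stub("GET", "/orders/42")["status"] == 200

# Producer test: a real response with an extra field still passes...
assert verify_producer({"status": 200, "body": {"id": 42, "status": "PENDING", "eta": "2h"}})
# ...but dropping a field the consumer relies on fails.
assert not verify_producer({"status": 200, "body": {"id": 42}})
```

Note how the last two assertions capture the backwards-compatibility rule from above: adding fields is fine, removing a contracted field breaks verification.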

It's convenient to have a single source of truth for API testing, but a question arises:

Who is responsible for defining and maintaining the contract?

Depending on the relations between the involved services there are two ways to tackle that...


Consumer-Driven

In the consumer-driven approach (CDC, for consumer-driven contracts), the consumer drives changes to the contracts between itself and a provider. This may sound counterintuitive, but it helps providers create APIs that fit the real requirements of their consumers rather than trying to guess them in advance.

The consumers start by writing tests against a provider mock. The expected responses defined in that mock essentially form the contract they expect the real provider to fulfil. After a successful test run, contracts are generated from those expectations. CDC frameworks like Pact provide a specification for contracts in JSON format, consisting of the list of requests/responses generated from the consumer tests plus some additional metadata.
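The flow "expectations on a mock become the contract" can be sketched in a few lines of Python. This is a toy model of the idea only; the class and method names are invented and do not mirror the real Pact API:

```python
# Toy model of the CDC flow: expectations registered on a provider mock
# are recorded and, after a green test run, emitted as the contract.
class ProviderMock:
    def __init__(self):
        self.interactions = []

    def expect(self, method, path, status, body):
        """Register an expected request/response pair (the consumer's expectation)."""
        self.interactions.append({
            "request": {"method": method, "path": path},
            "response": {"status": status, "body": body},
        })

    def respond(self, method, path):
        """Answer a call the way the consumer said the provider should."""
        for i in self.interactions:
            if (method, path) == (i["request"]["method"], i["request"]["path"]):
                return i["response"]
        raise AssertionError(f"unexpected call: {method} {path}")

    def write_contract(self):
        # In Pact this would be the JSON pact file plus metadata.
        return {"interactions": self.interactions}

mock = ProviderMock()
mock.expect("GET", "/orders/42", 200, {"id": 42, "status": "PENDING"})

# The consumer test runs against the mock...
assert mock.respond("GET", "/orders/42")["status"] == 200
# ...and a successful run produces the contract for the provider to verify.
pact_file = mock.write_contract()
assert len(pact_file["interactions"]) == 1
```

The provider pipeline then replays exactly these recorded interactions against the real implementation.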

If you expected some code, check out this example of a basic Order API by the Pact Foundation.

On the provider side, tests are executed as part of a separate pipeline which verifies the contracts against the real responses of the provider. Contract verification fails if the real responses differ from the expected responses specified in the contract. If that happens, it basically means one of two things:

  1. Invalid expectations on the consumer side caused incompatibility with the current provider implementation

  2. Broken provider implementation due to some missing functionality or a regression

Either way, it is easy to pinpoint what went wrong and where. Kind of like integration tests, right? But much faster, with fewer infrastructure demands, and without digging into each other's business logic.


Producer-Driven

There are a number of cases where CDC can sadly be problematic. One is where a service is used by a lot of consumers: the strategy doesn't scale well when many consumers all provide contracts with slight variations in expectations.

Additionally, CDC may not be applicable when consumers are unwilling or unable to provide valid contracts. This could be due to external providers, uncooperative teams, or technical constraints. In complex architectures, it's not always clear who exactly is calling a service, further complicating contract management.

In such scenarios, a shift in perspective may emerge where the service provider assumes less responsibility for compatibility with consumer releases. Instead, consumers are expected to ensure their compatibility with newer service versions.

An alternative approach is Producer-Driven Contract Testing, where the upstream service (provider) defines the API contract and instructs consumers on integration. If consumers are not fully engaged in contract testing, these contracts can still serve as documentation or sample data.

Bonus: Bi-Directional

The approach mentioned earlier can sometimes pose challenges in determining which of the two interacting services is upstream. It's not uncommon to encounter a service that alternates between being a consumer and a provider depending on the context.

For such scenarios, it's worth considering the broader approach of Bi-Directional Contract Testing. In the simplest of words, it's a sum of both: each service creates a contract based on its mocked tests. The contracts are uploaded to a platform like PactFlow, whose internal tooling compares the two. For the communication to be green-flagged, the consumer contract must always be at least a subset of the provider one. Schema-wise, of course.
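In schema terms, that comparison boils down to a recursive "is subset" check. Here is a hedged sketch; the flat field-name-to-type dicts and the function itself are a toy model I'm assuming for illustration, while the real platforms compare full Pact/OpenAPI documents:

```python
def is_subset(consumer_schema, provider_schema):
    """True if every field the consumer expects is offered, with a matching
    type, by the provider. Schemas are plain dicts mapping field name ->
    type name (or a nested dict for nested objects)."""
    for field, ctype in consumer_schema.items():
        if field not in provider_schema:
            return False
        ptype = provider_schema[field]
        if isinstance(ctype, dict):
            if not isinstance(ptype, dict) or not is_subset(ctype, ptype):
                return False
        elif ctype != ptype:
            return False
    return True

provider = {"id": "int", "status": "str", "customer": {"name": "str", "vip": "bool"}}
consumer = {"id": "int", "customer": {"name": "str"}}

assert is_subset(consumer, provider)            # consumer needs less: green-flagged
assert not is_subset({"eta": "str"}, provider)  # expects a field the provider lacks: fail
```

The asymmetry is the whole point: the provider may offer more than any one consumer uses, but never less than a consumer expects.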

Implementing this approach across our microservice landscape will provide us with a clear understanding of how our services will be impacted by upcoming changes, helping us navigate our rollout more effectively.

It's cool until...

If you've been using integration testing before, there's a common pitfall to be aware of when transitioning to contract testing. It's easy to start overusing contract tests by treating them like integration tests. Contract testing should focus on testing the contracts of services that communicate, synchronously or asynchronously, and not on testing internal logic.

You need to adopt the mindset of verifying whether you're receiving the expected outcomes and handling responses correctly. Of course, if the contract specifies positive and negative statuses, it's reasonable to check those. However, you shouldn't feel obligated to test every possible response; that responsibility should fall on the other party's unit tests.


Summary

  • Contract testing is an alternative to integration testing

  • Works best in communication-heavy architectures, including microservices

  • It focuses on the contracts rather than testing each other's logic

  • Choose a consumer-driven, producer-driven or bi-directional approach based on your service topology

  • Keep it about contracts only, focus on connectivity and cover the rest in unit tests