Testing a REST API - Restful Booker Worked Example

by Emily Bache

This article accompanies the video Beyond Code Katas - Approval Testing a REST API.

Code katas are great for practicing your skills. By necessity, though, these are small problems, and your production code is on a completely different scale. In this article I want to show you a kind of code kata that’s a little bit larger. It’s a back-end service with a database, written in Node.js. It has a REST API and is backed by a Mongo database. It’s a little bit closer to a real-world system than your average code kata. Let’s look at a good way to get this code under control using TextTest and CaptureMock.

The System under Test - Restful Booker

The code for Restful Booker is a worked example under the TextTest project on Github. It’s a fork based on Mark Winteringham’s code. Mark does workshops and training courses for testers, and he created this application to help teach good strategies for exploratory and manual testing of an API. Restful Booker is deliberately designed to be buggy, often in subtle ways. It also comes with fairly comprehensive documentation, so you can assess the value of that too.

I’m using this code slightly differently - my concern is how to write automated tests for this service that will be reliable, fast and comprehensive enough to support refactoring, and not too much work to create or maintain. I’ve used TextTest and CaptureMock in this role on a microservices production system I worked on previously, and I’m also drawing on the experiences of Geoff Bache testing several other service-based and microservices systems. So even though this is a toy example, I think it illustrates a viable approach.

The work the video doesn’t show

What’s not shown in the video is all the preparation work we did to ‘sandbox’ the Restful Booker service, and to add the OpenAPI interface description to enable Swagger. The actual changes to application code were fairly minimal, but it was a reasonable amount of work to set up the test rig and other configuration files that TextTest uses. As I showed in the video, though, once you’ve done that it’s relatively quick to create tests. This is typical - the ‘sandboxing’ process for the large, complex production systems Geoff and I have worked on has taken some weeks (even months) of expert attention. Once that is set up, though, a much larger group of developers and testers (including less technical developers and manual testers) can create and maintain tests.

In this article I’d like to go into a little more detail about this sandboxing process for Restful Booker, so you can get a better idea of what it might involve for a similar production system of your own with a REST API.

Enabling Swagger

This may be something you’ve already added to your REST API. It’s useful to have this as documentation, totally independently of how you’re going to test the service. The basic principle is that you create an OpenAPI specification file for the API, either by hand or using a tool that can generate it from your source code. For Restful Booker, Geoff set up Swagger with this swagger.json configuration, with help from Swagger UI Express.
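For illustration only - the real swagger.json in the repository is fuller - a minimal OpenAPI description of a single Restful Booker endpoint might look something like this:

```json
{
  "openapi": "3.0.0",
  "info": { "title": "Restful Booker", "version": "1.0" },
  "paths": {
    "/booking/{id}": {
      "get": {
        "summary": "Get a booking by id",
        "parameters": [
          { "name": "id", "in": "path", "required": true,
            "schema": { "type": "integer" } }
        ],
        "responses": { "200": { "description": "The booking" } }
      }
    }
  }
}
```

Once a file like this is served up via Swagger UI Express, the browser page at /api-docs renders it as interactive documentation, with a ‘Try it out’ button for each endpoint.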

Enabling CaptureMock to intercept Swagger calls

This is the only change to the actual application code, in app.js:

let options = {};
const capturemock = process.env.CAPTUREMOCK_SERVER;
if (capturemock) {
    const requestInterceptorStr = "(req) => { req.url = req.url.replace(/http:..(|localhost):[0-9]+/, '" + capturemock + "'); return req; }";
    options["swaggerOptions"] = {
        requestInterceptor: eval(requestInterceptorStr)
    };
}
app.use('/api-docs', swaggerUi.serve, swaggerUi.setup(swaggerDocument, options));

The first and last lines would have been there anyway - they enable the Swagger documentation to appear on the URL /api-docs. The additional code checks for the environment variable CAPTUREMOCK_SERVER and does nothing if it is not set. The rest of the code is a little awkward to understand. It sets an option to insert a requestInterceptor - a little piece of code that will be called whenever you use Swagger. This enables CaptureMock to act as a ‘man in the middle’, intercepting and processing all Swagger-generated traffic between the browser and Restful Booker.
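To see what that interceptor actually does, here is the same rewrite expressed in Python, purely for illustration (the real interceptor is the JavaScript arrow function above): it replaces the scheme, host and port of each outgoing request URL with the CaptureMock server’s address.

```python
import re

# The same pattern as in the JavaScript interceptor string above:
# 'http:..' matches 'http://', '(|localhost)' matches an empty host
# or 'localhost', and ':[0-9]+' matches the port number.
PATTERN = r"http:..(|localhost):[0-9]+"

def redirect(url, capturemock_server):
    """Rewrite a request URL so that it goes via the CaptureMock server."""
    return re.sub(PATTERN, capturemock_server, url)

print(redirect("http://localhost:3001/booking/1", "http://localhost:5001"))
# -> http://localhost:5001/booking/1
```

Note that ‘http:..’ uses ‘.’ wildcards rather than escaped slashes - slightly loose, but harmless here, since the URLs Swagger generates always start with ‘http://’.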

In the video demo you see the ‘httpmock’ files that detail this kind of traffic. CaptureMock generates these files from the traffic it intercepts.

Configuring TextTest’s Test Rig

Key to this testing approach is being able to start and run your entire application, including the database, from a command-line script. Normally you’d start Restful Booker using npm start, and TextTest could run that directly, but there is actually quite a bit of other setup needed too. It’s often convenient to write a Test Rig for an application under test: an additional script that does some setup before doing the equivalent of ‘npm start’, and which may do some extra processing of results at the end.

The Test Rig for Restful Booker is a Python script, test_rig.py, which I show briefly in the video. It makes use of the companion tools DBText and CaptureMock: in outline, it sets up a test database, starts the service in much the same way as ‘npm start’ would, and gathers the results for TextTest to compare afterwards.
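The real test_rig.py is in the repository and worth reading in full. Purely to give the flavour, here is a much-simplified sketch of the shape such a rig takes - the helper names are mine, not the real script’s, and the real handling of DBText and CaptureMock is more involved:

```python
import os
import socket
import subprocess
import time

def build_env(capturemock_url):
    """Environment for the application under test. Setting CAPTUREMOCK_SERVER
    switches on the Swagger request interceptor shown earlier."""
    env = dict(os.environ)
    env["CAPTUREMOCK_SERVER"] = capturemock_url
    return env

def wait_for_port(port, host="localhost", timeout=30.0):
    """Poll until the service accepts connections, so the test doesn't race
    against application startup."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.2)
    return False

def run_booker(capturemock_url):
    """Start Restful Booker much as 'npm start' would, then hand control back
    once it is listening. The real rig also sets up a fresh Mongo database
    with DBText beforehand and dumps its contents at the end."""
    app = subprocess.Popen(["npm", "start"], env=build_env(capturemock_url))
    if not wait_for_port(3001):
        app.terminate()
        raise RuntimeError("Restful Booker did not start")
    return app
```

The important property is that everything - database, service, mock setup - is driven from one script with no manual steps, so TextTest can run it unattended for every test case.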

This script is written specifically for the application being tested, and is the same for all the tests in the test suite. Anything that should differ between test cases is specified in text files. Any code that is generic or re-usable is extracted into either TextTest itself, or into DBText or CaptureMock.

The config file - pulling everything together

The main configuration file I haven’t discussed yet is config.rb. The ‘rb’ extension might confuse you - it stands for ‘Restful Booker’, not Ruby! TextTest has a slightly unusual convention: the file extension designates which application is being configured for testing, not the type of the file.

This file is used by TextTest to find out everything it needs to know in order to run all your tests.
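I won’t reproduce the whole file here, but a TextTest config file is essentially a flat list of key:value settings. An illustrative fragment (not copied from the repository) telling TextTest to run the test rig might look like:

```
# illustrative fragment only - see config.rb in the repository for the real thing
executable:${TEXTTEST_ROOT}/test_rig.py
interpreter:python3
```

Other settings cover things like which result files to collect and compare, and how to filter them before comparison.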

Sample tests - on the ‘with_texttests’ branch

In the main branch there are no test cases defined - this is the starting point for the demo. If you change to the ‘with_texttests’ branch then you will find test cases for all of the endpoints, not only the two I show in the demo.

Each test case is specified by a folder containing text files. The name of the folder is the name of the test case. The order to run them in is specified in the testsuite file. Each test case folder contains all the files that are unique to that test - usually 4 files.

When TextTest runs the test, it will execute the test rig, and afterwards compare the contents of all of these files against the actual output it got. Any difference will fail the test (although it is configured to scrub or filter certain run-dependent aspects of these files before making that comparison).
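That scrubbing is driven from the config file, using TextTest’s run_dependent_text mechanism: you list regular expressions per result file, and matching portions are filtered out before the comparison is made. A hypothetical fragment (the patterns here are mine, purely for illustration):

```
# hypothetical example - filter run-dependent text before comparing
[run_dependent_text]
output:bookingid
output:[0-9]{4}-[0-9]{2}-[0-9]{2}
```

Without this kind of filtering, things like generated ids and dates would make every test run produce slightly different text, and the comparisons would never pass.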

There are bugs, yet the tests pass

As I said earlier, Restful Booker is intentionally buggy, so that it makes a good exercise for testers. For example, take this httpmock file:

<-CLI:DELETE /booking/2
--HEA:Authorization=Basic YWRtaW46cGFzc3dvcmQxMjM=
->SRV:201 Created
--HEA:Content-Type=text/plain; charset=utf-8

You can see it has recorded an interaction where you try to delete a booking. The server response is ‘201 Created’. This is almost certainly the wrong HTTP code for this situation - a successful delete would normally return 200 OK or 204 No Content. Yet I have approved this output and stored it in my test case. For a real application I would not have approved this - I would have fixed it! My purpose here is to show how to automate these tests, not to enumerate and fix all the bugs :-)
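Incidentally, the Authorization header in that recording is ordinary HTTP Basic authentication, so you can decode it to see which credentials the test sent:

```python
import base64

# The value from the --HEA line in the recording above
token = "YWRtaW46cGFzc3dvcmQxMjM="
print(base64.b64decode(token).decode())  # admin:password123
```

These are the well-known default credentials from the Restful Booker documentation - fine for a deliberately buggy teaching application, and another reminder that recordings like the httpmock files can contain secrets, so treat them accordingly in a real system.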

What else would you like to know? Please leave a comment on the video or the TextTest-users mailing list.