
Testing HTTP Clients Using the Spark Micro Framework


Testing HTTP client code can be a hassle. Your tests either need to run against a live HTTP server, or you somehow need to figure out how to send mock requests, which is not easy in most libraries I have used. The tests should also be fast, meaning you need a lightweight server that starts and stops quickly. Spinning up heavyweight web or application servers, or relying on a specialized test server, is generally error-prone, adds complexity, and slows tests down. On projects I'm working on lately we are using Dropwizard, which provides first-class support for testing JAX-RS resources and clients via JUnit rules. For example, it provides DropwizardClientRule, a JUnit rule that lets you implement JAX-RS resources as test doubles and that starts and stops a simple Dropwizard application containing those resources. This works great if you are already using Dropwizard, but if not, a great alternative is Spark. Even if you are using Dropwizard, Spark can still work well as a test HTTP server.

Spark is self-described as a "micro framework for creating web applications in Java 8 with minimal effort". You can create the stereotypical "Hello World" in Spark like this (shamelessly copied from Spark's web site):


import static spark.Spark.get;

public class HelloWorld {
    public static void main(String[] args) {
        get("/hello", (req, res) -> "Hello World");
    }
}

You can run this code and visit http://localhost:4567/hello in a browser or with a client tool like curl or httpie. Spark is a perfect fit for creating HTTP servers in tests (whether you call them unit tests, integration tests, or something else is up to you; I will just call them tests here). I have created a very simple library, sparkjava-testing, that contains a JUnit rule for spinning up a Spark server for functional testing of HTTP clients. This library consists of one JUnit rule, the SparkServerRule. You can annotate this rule with @ClassRule or just @Rule. Using @ClassRule will start a Spark server once before any test runs. Then your tests run, making requests to the HTTP server, and once all tests have finished the server is shut down. If you need true isolation between every single test, annotate the rule with @Rule instead, and a test Spark server will be started before each test and shut down after it, so each test runs against a fresh server. (The SparkServerRule is a JUnit 4 rule mainly because JUnit 5 is still in milestone releases, and because I have not actually used JUnit 5.)
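If you want that per-test isolation, the rule is simply declared as a non-static @Rule field instead (a sketch; the route here is just a placeholder):

```java
@Rule
public final SparkServerRule sparkServer = new SparkServerRule(() -> {
    // a fresh server handles this route before each test and stops after it
    get("/ping", (request, response) -> "pong");
});
```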

To declare a class rule that starts a test Spark server with two endpoints, you can do this:


@ClassRule
public static final SparkServerRule SPARK_SERVER = new SparkServerRule(() -> {
    get("/ping", (request, response) -> "pong");
    get("/healthcheck", (request, response) -> "healthy");
});

The SparkServerRule constructor takes a Runnable that defines the routes the server should respond to. In this example there are two HTTP GET routes, /ping and /healthcheck. You can of course implement the other HTTP verbs such as POST and PUT. You can then write tests using whatever client library you want. Here is an example test using the JAX-RS client API:


@Test
public void testSparkServerRule_HealthcheckRequest() {
    client = ClientBuilder.newBuilder().build();
    Response response = client.target(URI.create("http://localhost:4567/healthcheck"))
            .request()
            .get();
    assertThat(response.getStatus()).isEqualTo(200);
    assertThat(response.readEntity(String.class)).isEqualTo("healthy");
}

In the above test, client is a JAX-RS Client instance (it is an instance variable which is closed after each test). I'm using AssertJ assertions in this test. The main thing to note is that your client code must be parameterizable, so that the local Spark server URI can be injected instead of the actual production URI. When using the JAX-RS client as in this example, this means you need to be able to supply the test server URI to the Client#target method. Spark runs on port 4567 by default, so the client in the test uses that port.
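As a minimal sketch of what "parameterizable" means here (the class and method names below are my own invention, not part of any library), the client code might accept its base URI instead of hard-coding the production address:

```java
import java.net.URI;

// Hypothetical client wrapper: the base URI is injected, so a test can pass
// http://localhost:4567 while production configuration supplies the real host.
public class HealthcheckClient {

    private final URI baseUri;

    public HealthcheckClient(URI baseUri) {
        this.baseUri = baseUri;
    }

    // Resolves the healthcheck endpoint against whatever base URI was injected;
    // a real client would issue an HTTP GET against this URI.
    public URI healthcheckUri() {
        return baseUri.resolve("/healthcheck");
    }
}
```

A test constructs it with URI.create("http://localhost:4567"), pointing it at the local Spark server, while production wiring supplies the real address.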

The SparkServerRule has two other constructors: one that accepts a port in addition to the routes, and another that takes a SparkInitializer along with the routes. To start the test server on a different port, you can do this:


@ClassRule
public static final SparkServerRule SPARK_SERVER = new SparkServerRule(6543, () -> {
    get("/ping", (request, response) -> "pong");
    get("/healthcheck", (request, response) -> "healthy");
});

You can use the constructor that takes a SparkInitializer to customize the Spark server; for example, in addition to changing the port you can also set the IP address and make the server secure. SparkInitializer is a @FunctionalInterface with a single init() method, so you can use a lambda expression. For example:


@ClassRule
public static final SparkServerRule SPARK_SERVER = new SparkServerRule(
        () -> {
            Spark.ipAddress("127.0.0.1");
            Spark.port(9876);
            URL resource = Resources.getResource("sample-keystore.jks");
            String file = resource.getFile();
            Spark.secure(file, "password", null, null);
        },
        () -> {
            get("/ping", (request, response) -> "pong");
            get("/healthcheck", (request, response) -> "healthy");
        });

The first argument is the initializer. It sets the IP address and port, then loads a sample keystore and calls the Spark#secure method so that the test server accepts HTTPS connections. You might want to customize settings if running tests in parallel, specifically the port, to ensure parallel tests do not encounter port conflicts.

The last thing to note is that SparkServerRule resets the port, IP address, and secure settings to their default values (4567, 0.0.0.0, and non-secure, respectively) when it shuts down the Spark server. If you use the SparkInitializer to customize other settings (for example the server thread pool, static file location, before/after filters, etc.), those will not be reset, as they are not currently supported by SparkServerRule. Also, resetting to non-secure mode required an incredibly awful hack, because I found no way to easily reset security - you cannot just pass a bunch of null values to the Spark#secure method, as it will throw an exception, and there is no unsecure method, probably because the server was not intended to have things set and reset repeatedly the way we want in test scenarios. If you're interested, go look at the code for SparkServerRule in the sparkjava-testing repository, but prepare thyself and get some cleaning supplies ready to wash away the dirty feeling you're sure to have after seeing it.

The ability to use SparkServerRule to quickly and easily set up test HTTP servers, along with the ability to customize the port and IP address and to run securely in tests, has worked very well for my testing needs thus far. Note that unlike the toy examples above, you can implement more complicated logic in the routes, for example to return a 200 or a 404 for a GET request depending on a path parameter or request parameter value. But at the same time, don't implement extremely complex logic either. Most times I simply create separate routes when I need the test server to behave differently, for example to test various error conditions. Or I might even implement separate JUnit test classes for different server endpoints, so that each test focuses on only one endpoint and its various success and failure conditions. As is often the case, the context will determine the best way to implement your tests. Happy testing!
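To illustrate that kind of conditional route logic, here is a sketch (assuming the Spark dependency is present; the /widgets path and hard-coded id are made up):

```java
@ClassRule
public static final SparkServerRule SPARK_SERVER = new SparkServerRule(() -> {
    get("/widgets/:id", (request, response) -> {
        // return 200 for a known id, 404 for anything else
        if ("42".equals(request.params(":id"))) {
            return "widget 42";
        }
        response.status(404);
        return "no such widget";
    });
});
```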


Why isn't Dropwizard validating objects in resource methods?


Dropwizard provides automatic validation of Jersey resource method parameters by simply adding the @Valid annotation. For example, in a method to save a new Person object, you might have code like:


@POST
public Response createPerson(@Valid Person person) {
    Person savedPerson = save(person);

    URI location = UriBuilder.fromResource(PersonResource.class)
            .path(savedPerson.getId().toString())
            .build();

    return Response.created(location).entity(savedPerson).build();
}

By adding @Valid to the person argument, Dropwizard ensures that the Person object will be validated using Hibernate Validator. The Person object will be validated according to the constraints defined on the Person class, for example maybe the @NotEmpty annotation is on first and last name properties. If the object passes validation, method execution continues and the logic to save the new person takes place. If validation fails, however, Dropwizard arranges for a 422 (Unprocessable Entity) response to be sent back to the client, and the resource method is never actually invoked. This is convenient, as it means you don't need any conditional logic in resource methods to manually check if an object is valid. Under the covers, Dropwizard registers its own custom provider class, JacksonMessageBodyProvider, which uses Jackson to parse request entities into objects and perform validation on the de-serialized entities.
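For example, the Person class might carry constraints like these (a sketch; the field names are assumptions, and the @NotEmpty annotation comes from Hibernate Validator):

```java
import org.hibernate.validator.constraints.NotEmpty;

// Hypothetical entity: when a Person is passed to a resource method parameter
// annotated with @Valid, these constraints are checked before the method runs
public class Person {

    private Long id;

    @NotEmpty // a missing or empty first name triggers a 422 response
    private String firstName;

    @NotEmpty
    private String lastName;

    public Long getId() {
        return id;
    }

    // remaining getters and setters omitted for brevity
}
```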

Out of the Dropwizard box this automatic validation "just works" due to the above-mentioned JacksonMessageBodyProvider. (For this post, we are assuming Dropwizard 0.8.2, which uses Jersey 2.19.) It worked for us just fine, until on one service it simply stopped working entirely. In other words, no validation took place, and therefore any objects, valid or not, were being passed into resource methods. Since resource method code assumes objects have already been validated, this causes downstream problems. In our case, it manifested as HibernateExceptions thrown when data access code tried to persist the (not validated) objects.

This was quite perplexing, and to make a (very long) debugging story short, it turned out that someone had added one specific dependency to the Maven pom, which triggered auto-discovery of the JacksonFeature via the JacksonAutoDiscoverable. The dependency that had been added (indirectly, more on that later) was:


<dependency>
    <groupId>org.glassfish.jersey.media</groupId>
    <artifactId>jersey-media-json-jackson</artifactId>
    <version>2.19</version>
</dependency>

If you look in the jersey-media-json-jackson-2.19.jar file, there are only five classes. But the upshot is that this JAR enables auto-discovery for Jackson via Jersey's Auto-Discoverable Features mechanism, which causes the JacksonFeature class to register JacksonJaxbJsonProvider as both a MessageBodyReader and a MessageBodyWriter. And due to the vagaries of the way Jersey orders message body readers, that provider ends up as the first available MessageBodyReader when processing requests, which in turn means the Dropwizard JacksonMessageBodyProvider never gets executed, and as a result no validation is performed!

For some code spelunking, check out the WorkerComparator class in MessageBodyFactory (in Jersey) which is used when sorting readers and writers via a call to Collections.sort(). The Javadoc for the comparator states "Pairs are sorted by distance from required type, media type and custom/provided (provided goes first)." In particular the last bit of that sentence is key, provided goes first - this means the auto-discovered feature (JacksonJaxbJsonProvider) takes precedence over the custom provider registered by Dropwizard (JacksonMessageBodyProvider).
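The effect of that ordering can be illustrated with a simplified model (this is my own toy comparator, not Jersey's actual WorkerComparator, which also weighs type distance and media type):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ProviderOrdering {

    static final class Provider {
        final String name;
        final boolean custom; // true if explicitly registered by the application

        Provider(String name, boolean custom) {
            this.name = name;
            this.custom = custom;
        }
    }

    // "provided goes first": auto-discovered providers (custom == false) sort
    // ahead of providers the application registered itself (custom == true)
    static final Comparator<Provider> PROVIDED_FIRST =
            (a, b) -> Boolean.compare(a.custom, b.custom);

    static List<String> order(List<Provider> providers) {
        List<Provider> sorted = new ArrayList<>(providers);
        sorted.sort(PROVIDED_FIRST);
        List<String> names = new ArrayList<>();
        for (Provider provider : sorted) {
            names.add(provider.name);
        }
        return names;
    }

    public static void main(String[] args) {
        // the auto-discovered Jackson provider wins over Dropwizard's custom one
        System.out.println(order(Arrays.asList(
                new Provider("JacksonMessageBodyProvider", true),
                new Provider("JacksonJaxbJsonProvider", false))));
    }
}
```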

Of course now that we know what is going on, the solution is pretty easy:

Make sure you don't have the jersey-media-json-jackson dependency, either directly or via a transitive dependency.

In our case it had actually come in via a Maven transitive dependency, which made tracking it down harder. An easy way to see exactly what dependencies exist and where they come from is to run mvn dependency:tree, which displays the entire dependency tree for your application.
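If the offending JAR arrives transitively and you cannot simply remove the direct dependency, a Maven exclusion works (the groupId and artifactId of the direct dependency below are placeholders):

```xml
<dependency>
    <groupId>com.example</groupId>
    <artifactId>some-direct-dependency</artifactId>
    <version>1.0</version>
    <exclusions>
        <!-- keep this JAR off the classpath so JacksonFeature is not auto-discovered -->
        <exclusion>
            <groupId>org.glassfish.jersey.media</groupId>
            <artifactId>jersey-media-json-jackson</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```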

Ultimately, while Jersey provides the auto-discovery mechanism, I still prefer explicit configuration so it is very clear exactly what features are present. Dropwizard abstracts some of this from us, e.g. it registers the JacksonMessageBodyProvider in the AbstractServerFactory class (in the createAppServlet method), but since Dropwizard is mostly "just code" it is much easier to deterministically know what it is and isn't doing. So if you suddenly experience a lack of validation in Dropwizard, make sure the jersey-media-json-jackson dependency is not present. If that doesn't work, then you need to figure out what other MessageBodyReader is taking precedence, determine its origin, and eliminate it!

Sample code for this post can be found on GitHub.
