
Containers with Docker

In this post we are going to see what Docker is, the problems it solves, how to work with containers and docker-compose, how to build our own images, and how to share them through a registry.

Docker: What is it?

I could try to explain what a container is, but the explanation on the Docker website probably does a better job. The section "What is a container?" on the Docker website describes containers as a unit of software that packages up the code and all its dependencies, so that applications become more reliable when moving from one environment to another. Containers are lightweight, standalone, executable packages of software.

What is the problem that Docker solves?

Why should I be using Docker?

Containers help you to deal with many problems:

  • Unlike full virtualization, you don't need a hypervisor and a guest OS; all that overhead is gone, which makes containers far more lightweight than a VM.

  • Containers are deterministic, meaning that if the container runs on your machine, it will run everywhere. No more problems with people manually adding or changing a dependency on a VM and not telling anyone.

  • Easy to distribute. With a registry you can upload your container image and distribute it easily. You can have a pipeline build a new image every time something is merged to a branch, making it available to everyone in QA, or you can keep the whole dev environment inside a container, so when someone new joins they can just pull the latest dev image and start working.

  • With an orchestration tool it's really easy to bring in external dependencies like PostgreSQL or Redis. So if you have to run integration or end-to-end tests that require an empty database or something like WireMock, docker-compose makes it really easy.

What is this not?

Before starting I need to clarify some things:

  • Docker containers are Linux containers, meaning that when you run Docker on Windows or Mac it will start a VM under the hood.
  • Docker will not solve all your DevOps problems
  • You still need knowledge about your environment to deploy to production
  • Like every tool, it isn’t a silver bullet

With everything clarified, we can start.

Working with containers

Docker creates containers from images, which contain all the files needed to run an application. Docker runs the application, and when it exits, the container stops; nothing that happens inside the container is saved back to the image.

We can start running our first container with:

$ docker run hello-world

The first thing it shows is that you don't have the hello-world image, so Docker will download it. But where is this image coming from? Docker has a service called Docker Hub that stores and versions Docker images. It's basically a GitHub for Docker images.

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest

After the image is downloaded Docker will create a new container, run the application and stop the container when everything is done:

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
  1. The Docker client contacted the Docker daemon.
  2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
  3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
  4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
  $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
  https://hub.docker.com/

For more examples and ideas, visit:
  https://docs.docker.com/get-started/

This is a very basic thing, and I think we can do better. What about running a web server? We run docker run -p 8080:80 nginx and wait for the image to be downloaded.

And then we can access http://localhost:8080 and we are going to see Nginx’s Welcome page. But wait: this time we passed a new flag (-p) with some numbers. What do they mean?

Port

When dealing with containers that listen on ports, we have to bind a local port to a port in the container. Nginx, like most web servers, listens on port 80 by default; if we want to reach it, we need to bind that port to one on localhost. The pattern for binding a port is:

docker run -p <local port>:<container port> <image>
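For example, if local port 8080 is busy, we can pick another one and the container doesn't change at all:

$ docker run -p 3000:80 nginx

Now the welcome page is served at http://localhost:3000 instead.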

This is probably one of the most used commands that you will encounter.

Managing containers and Images

We want to be able to delete an image, stop a container that is running, and delete a previous container. To do that we have a set of commands:

  • docker ps: Show running containers. Add the -a flag to show stopped containers.
  • docker rm <container id or name>: Delete a container.
  • docker images: Show downloaded images.
  • docker rmi <image id>: Delete an image.

Now we are able to do housekeeping on our images.
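A typical housekeeping session might look like this (the IDs are illustrative):

$ docker ps -a                  # list all containers, including stopped ones
$ docker rm 1180df37d9f1        # remove a stopped container by ID
$ docker images                 # list downloaded images
$ docker rmi hello-world        # remove an image once no container uses it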

docker-compose

Running containers manually works, but it isn't as convenient as we would like. We want to be able to create containers easily, with the same configuration every time.

That's where docker-compose comes in. docker-compose is an orchestration tool that manages multiple containers for us. It's based on a YAML file where we specify the containers we want, and it does all the work for us.

A basic use case for docker-compose is managing the external dependencies of your application in a development environment. You can just set up a docker-compose file with your database, cache and SMTP server, and anyone running the application can easily start the containers with those dependencies.

Real world example

The best way to learn something is by breaking production, so the idea now is to use docker-compose in a real-life situation.

We have a Java application where we can add and retrieve users through a RESTful API.

Those users are stored in a PostgreSQL database, but we don’t want to install Postgres. So we are going to use docker-compose to provide Postgres to everyone who clones the application.

Anatomy of a docker-compose script

  version: '3.1'
  
  services:
  
    db:
      image: postgres:10
      ports:
        - "5432:5432"
      environment:
        POSTGRES_PASSWORD: postgres
        POSTGRES_DB: realworld

  • version: indicates the version of the compose file format, which determines the features available and the minimum Docker version required.
  • services: this is where our containers are declared. We give a name to each service and specify what we want for it. In this case, we have a PostgreSQL 10 image; postgres:10 is the image name and version separated by a colon.
  • ports: we bind port 5432 in the container to port 5432 on localhost, like we did previously with the nginx container.
  • environment: sets environment variables; POSTGRES_PASSWORD and POSTGRES_DB define the password and create a database named realworld. You can usually find the available variables in the image's documentation on Docker Hub.

Now, if we run docker-compose up -d (the -d flag means detached, so the process keeps running in the background), we will be able to see the Postgres instance running.
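If you want to double-check, docker-compose can also report the state of its services:

$ docker-compose ps        # lists the services and their current state
$ docker-compose logs db   # shows the output of the db service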

So if we build a jar of the application, we can run it without any problem.

# This will build a jar with all the dependencies
$ ./gradlew shadowJar 

# You can run the application 
$ java -jar build/libs/realworldkata-1.0-SNAPSHOT-all.jar

Now we can access http://localhost:4321/database and we should see Tables created! if everything is working. We can double-check by accessing the container and verifying that the table exists inside the database.

First, we check the ID of the container:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
1180df37d9f1        postgres:10         "docker-entrypoint.s…"   About an hour ago   Up About an hour    0.0.0.0:5432->5432/tcp   realworld_db_1

Then we access the container and use the psql application to connect to the database and we can list the tables.

$ docker exec -it 118 /bin/bash

root@1180df37d9f1:/# psql -U postgres
psql (10.7 (Debian 10.7-1.pgdg90+1))
Type "help" for help.

postgres=# \c realworld
You are now connected to database "realworld" as user "postgres".

realworld=# \dt
          List of relations
  Schema | Name  | Type  |  Owner
--------+-------+-------+----------
  public | users | table | postgres
(1 row)

Building images

Now we can use images created by other people, but what if we want to use our own images? Can we use Docker to distribute our application? Of course; this isn’t amateur hour.

You can build images through a Dockerfile where we are going to specify our dependencies and how to build and run the application.

First, we start our Dockerfile by specifying a base image. This base image can be an Ubuntu or a Java image. For our application, we are going to use the adoptopenjdk/openjdk11-openj9 image, which ships the Eclipse Foundation's OpenJ9 JVM.

FROM adoptopenjdk/openjdk11-openj9

With the base image in hand, we can gather our source code to build the application. For that, we set the WORKDIR and use the COPY command to copy our source files into the image.

FROM adoptopenjdk/openjdk11-openj9
WORKDIR /realworld
COPY . /realworld

We have the sources; now we need to build the application, so we RUN the command that generates a fat jar with all the dependencies.

FROM adoptopenjdk/openjdk11-openj9
WORKDIR /realworld
COPY . /realworld
RUN ./gradlew shadowJar

This is a web application that receives requests on a TCP port. To be able to receive requests in the container we EXPOSE the port that we want. EXPOSE tells which port the container should expose to the network that docker-compose will create, and it also works as documentation of which port you have to bind when running the container.

FROM adoptopenjdk/openjdk11-openj9
WORKDIR /realworld
COPY . /realworld
RUN ./gradlew shadowJar
EXPOSE 4321

Finally, we define the CMD that starts the application:

FROM adoptopenjdk/openjdk11-openj9
WORKDIR /realworld
COPY . /realworld
RUN ./gradlew shadowJar
EXPOSE 4321
CMD ["java", "-jar", "build/libs/realworldkata-1.0-SNAPSHOT-all.jar"]

With the Dockerfile ready, we can build an image and create a container from it.

$ docker build . --tag "realworld"

$ docker run realworld
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Exception in thread "main" java.lang.RuntimeException
        at realworld.infrastructure.ConnectionFactory.build(ConnectionFactory.java:14)
        at realworld.persistence.daos.UserDAO.<init>(UserDAO.java:18)
        at realworld.Application.buildUserService(Application.java:41)
        at realworld.Application.setRoutes(Application.java:28)
        at realworld.Application.start(Application.java:24)
        at realworld.Application.main(Application.java:65)

The --tag flag gives the image a name. You can also append a tag like dev or staging after a colon (for example, realworld:dev).

The application is running as it should; the error happens because this container doesn't have access to the container running Postgres. Now that we know everything works, we can start to improve some parts of the build.

.dockerignore

Just like Git has .gitignore, Docker has .dockerignore, a file that excludes files from being copied into your image. Let's create one that skips the IDE-specific and build output folders, so we get a faster build.

.gradle/
.idea/
build/
out/

Multi-stage build

We are building the image with all of our source code, but since we just want to run the application, we don't need to distribute the sources with the final jar. With the source code inside the container, anyone with access to the image can see it, and it makes the image bigger for no reason. To solve this, we are going to do a multi-stage build.

What is a multi-stage build?

A multi-stage build is a way of splitting the process of building an image between multiple containers with distinct steps, kind of like a CI pipeline. Transforming a regular build into a multi-stage one is easy: you just add another FROM to your Dockerfile.

In our case, we want to split the build into the environment that compiles our application and a lightweight one that runs it.

First, we are going to deal with the container that we already have. There are a few things that we have to do:

  • Give a name to the build stage. We do that by adding as <name> after the base image.

  • Then we remove the parts that exist only to run the application. We take the EXPOSE and the CMD out of this stage, but we don't delete them; we are going to use them later.

Now we can start to create our image that will run the application:

  • Defining the image. We don't need a complete image with Gradle or Maven. In fact, we don't even need the JDK; we just need the JRE, so we can use the adoptopenjdk/openjdk11:jre-11.0.2.9-alpine image that only has the Java runtime. It's based on a lightweight Linux distro called Alpine.

  • We can have the same WORKDIR from the previous stage.
  • Now we are going to COPY the jar that we built. In this case, we copy it from the container of the previous stage by adding --from=build before the files that we want to copy.

  • Now we just have to put the EXPOSE and CMD that we saved previously.

Now when we build our image, Docker will spin up a container, build the application, and create another image using those files, then delete everything from the previous stages, so there's no need to write a cleanup script.

FROM adoptopenjdk/openjdk11-openj9 as build
WORKDIR /realworld
COPY . /realworld
RUN ./gradlew shadowJar

FROM adoptopenjdk/openjdk11:jre-11.0.2.9-alpine
WORKDIR /realworld
COPY --from=build /realworld/build/libs/realworldkata-1.0-SNAPSHOT-all.jar .
EXPOSE 4321
CMD ["java", "-jar", "realworldkata-1.0-SNAPSHOT-all.jar"]

Add the image to docker-compose.yml

Now that our image is being built properly, we can add it to our docker-compose.yml. Unlike the postgres image that we already have, we want to build this image from the Dockerfile, and we also need to set some environment variables to connect to the database.

So we add a new service to the file. Instead of using image, we are going to use build and pass the relative path to the Dockerfile that we want to build.

We map the port and add the DB_HOST environment variable pointing to our db service. Finally, we add depends_on, saying that we depend on the db service.

version: '3.1'

services:

  db:
    image: postgres:10
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: realworld

  realworld:
    build: .
    ports:
      - "4321:4321"
    environment:
      DB_HOST: "db"
    depends_on:
      - db

When we try to bring up the containers, it isn't working yet. Why? I don't know yet; let's check the logs.

This is the important part of our logs. depends_on only waits for the container to start; it doesn't wait for whatever runs inside it to finish initialising:

db_1         | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db_1         | creating subdirectories ... ok
db_1         | selecting default max_connections ... 100
db_1         | selecting default shared_buffers ... 128MB
db_1         | selecting dynamic shared memory implementation ... posix
realworld_1  | Picked up JAVA_TOOL_OPTIONS:
db_1         | creating configuration files ... ok
db_1         | running bootstrap script ... ok
realworld_1  | SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
realworld_1  | SLF4J: Defaulting to no-operation (NOP) logger implementation
realworld_1  | SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
db_1         | performing post-bootstrap initialization ... ok
db_1         | syncing data to disk ... ok

Basic troubleshooting

In this case, we have the "theory" that the application is trying to connect to the database before it's ready. We have to test whether everything works if we start the application after the database is ready.

The most basic way to verify that is to spawn the containers and start the application manually. But how can we get a shell in the container using docker-compose?

Just like with a single container in Docker, docker-compose provides a run command which we can use to get a shell inside our container.

# docker-compose run <service> <command>
docker-compose run realworld /bin/sh

This gives us access to the container with the application, allowing us to run:

$ java -jar realworldkata-1.0-SNAPSHOT-all.jar
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

If the application doesn't throw any error related to the database, then it is working, and we can proceed to create a solution to the problem.

Hacking our way through

How can this problem be solved? Adding a script that keeps checking if the database is up, and only runs the application when it’s ready, will do the work.

Modern problems require modern solutions.

    #! /bin/sh
    
    # Exit on error
    set -e
    
    # We need the psql client to connect to the db
    apk add --no-cache postgresql-client
    
    # Loop until we can connect to the Postgres database
    until PGPASSWORD="postgres" psql -h "${DB_HOST}" -U "postgres" -c '\q'; do
      >&2 echo "Postgres is unavailable - sleeping"
      sleep 1
    done
    
    >&2 echo "Postgres is up - executing command"
    # Exec the command with the arguments that were passed
    exec "$@"

We already have an image ready, so we could rebuild it with the script and change the run command. But then the check would run everywhere the image is used, and we don't want that, because in other places we might be using a database that isn't in a Docker container.

So we can override the command from our image and run the script.

The first thing we have to do is put the script inside the container. We already built the image, so we can't use COPY again; in this case, we can mount a volume inside our container.

Volumes

A volume is a way to mount a folder from the host machine into a container. Everything inside that folder is mirrored into the container. This is handy when you want to keep things like logs, or to persist the data of a database that you are running in a container.
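For example, if we wanted the data of our Postgres container to survive restarts, we could mount a host folder over its data directory (the host path here is an arbitrary choice):

  db:
    image: postgres:10
    volumes:
      - "./pgdata:/var/lib/postgresql/data"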

We change the docker-compose.yml to add our new features:

version: '3.1'

services:

  db:
    image: postgres:10
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: realworld

  realworld:
    build: .
    ports:
      - "4321:4321"
    environment:
      DB_HOST: "db"
    depends_on:
      - db
    volumes:
      - "./scripts:/scripts"
    command: ["/scripts/wait-for-db.sh", "java", "-jar", "realworldkata-1.0-SNAPSHOT-all.jar"]

We added the volumes tag, linking the scripts folder inside our application folder to a scripts folder in the root of the container, and a new command that runs the script followed by the command that starts the application.

Making our image more flexible with ARG

Right now the application exposes port 4321, which is very inflexible. If a change were needed, the only option would be mapping to a different port in the docker-compose file. This can be made more flexible using ARG in the Dockerfile.

What are the changes needed to do that?

Set the ARG keyword in the Dockerfile. It takes the name of the argument and a default value. It's good to set a default so you don't have to pass the value on every build.

# ARG NAME=<value>
ARG PORT=4321

Another thing to take care of when using ARG is scope: an ARG declared in one stage (after one FROM) is not visible in the stages that follow. Think of variables in code: you can't use a variable before it's declared, nor use a variable that was declared inside another function.
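A minimal sketch of that scoping rule (the echo is just for illustration):

# Declared before the first FROM: only usable in FROM lines
ARG PORT=4321
FROM adoptopenjdk/openjdk11-openj9 as build
# Must be declared again to be visible inside this stage
ARG PORT
RUN echo "Building for port $PORT"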

With the ARG created, it’s time to set the environment variable PORT so the application knows which port to use. This can be done using the ENV keyword.

# ENV NAME $arg 
ENV PORT $PORT

Finally, we have to change the EXPOSE keyword to use the ARG instead of the hard-coded value.

EXPOSE $PORT

The final result would be:

FROM adoptopenjdk/openjdk11:jdk-11.0.2.9 as build
WORKDIR /realworld
COPY . /realworld
RUN ./gradlew shadowJar

FROM adoptopenjdk/openjdk11:jre-11.0.2.9-alpine
ARG PORT=4321
WORKDIR /realworld
COPY --from=build /realworld/build/libs/realworldkata-1.0-SNAPSHOT-all.jar .
ENV PORT $PORT
EXPOSE $PORT
CMD ["java", "-jar", "realworldkata-1.0-SNAPSHOT-all.jar"]

And to build the container passing the argument:

# docker build . --build-arg ARG=<value>
$ docker build . -t realworld:args --build-arg PORT=4332

To check that the container is exposing the right port, you can see the exposed ports with docker ps:

$ docker run -it realworld:args /bin/sh

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
10b6105d1db2        realworld:args      "/bin/sh"           5 seconds ago       Up 4 seconds        4332/tcp            gravel_pit

You can see in the PORTS column that it is exposing 4332/tcp, just as passed in the build args. But what if you want to do that with docker-compose? Is there any way of passing build-arg through the yml file? Of course.

Change the docker-compose.yml to pass the argument in the build section using the args key. build now has multiple values, so the context key has to be added to set the directory where your Dockerfile lives.

# ...
  realworld:
    build: 
      context: .
      args:
        PORT: 4332
    ports:
      - "4332:4332"
    environment:
      DB_HOST: "db"
    depends_on:
      - db
    volumes:
      - "./scripts:/scripts"
    command: ["/scripts/wait-for-db.sh", "java", "-jar", "realworldkata-1.0-SNAPSHOT-all.jar"]

With all the changes in place, just run docker-compose up, and we can check that everything is running on the right port by trying to create the tables:

$ curl localhost:4332/database
Tables created!

Entrypoints

CMD isn't the only way to start a container. In fact, before the CMD is executed, a container runs its ENTRYPOINT. Sometimes you want your container to do a more complex startup and execute a few commands or scripts before starting your application. Docker combines the ENTRYPOINT with the CMD passed to the container, so the command in our docker-compose.yml, which was joining wait-for-db.sh with the java command, could be split in two.

So if the Dockerfile looked like this:

FROM adoptopenjdk/openjdk11:jdk-11.0.2.9 as build
WORKDIR /realworld
COPY . /realworld
RUN ./gradlew shadowJar

FROM adoptopenjdk/openjdk11:jre-11.0.2.9-alpine
ARG PORT=4321
WORKDIR /realworld
COPY --from=build /realworld/build/libs/realworldkata-1.0-SNAPSHOT-all.jar .
ENV PORT $PORT
EXPOSE $PORT
ENTRYPOINT ["wait-for-db.sh"]
CMD ["java", "-jar", "realworldkata-1.0-SNAPSHOT-all.jar"]

This would be executed as wait-for-db.sh java -jar realworldkata-1.0-SNAPSHOT-all.jar. The script can do multiple things and, at the end, exec the application.

Postgres does something like this. Instead of having the postgres command as the startup point, it runs a script that sets the folder where the data will be stored, reads the password from the right environment variable, and checks whether there are any .sql or .sh files to run before starting the database.

docker-library/postgres

Don’t start your application with ENTRYPOINT. Use CMD so you can override the command with docker run.
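The difference shows up when you try to override the startup command at run time:

# Whatever follows the image name replaces the CMD
$ docker run -it realworld /bin/sh

# Replacing an ENTRYPOINT requires the --entrypoint flag
$ docker run -it --entrypoint /bin/sh realworld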

Docker Hub and container registry

Building the same Docker image on every machine isn't the most practical thing to do. You might want to use the image on another machine that doesn't have the source code, just a docker-compose.yml.

The first thing is to register an account on Docker Hub and log in with that account on the command line:

$ docker login

After logging in, you can push images to your repositories. The repository is derived from the tag you give when building the image: when we built the first image, we used the --tag flag to name it, and the repository on Docker Hub will use the same name. You can keep multiple versions of the same image in a repository by adding a version to the tag.

# docker build . --tag <repository>/<image-name>:<version>
$ docker build . --tag "andretorrescodurance/realworld:0.1" 

With the image built, and when everything is set, you can push it using:

# docker push <repository>:<version>
$ docker push andretorrescodurance/realworld:0.1

If you don’t add the repository before the image name, you might have trouble pushing the image to Docker Hub.
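If you already built the image without the repository prefix, there's no need to rebuild; you can retag the existing image:

# docker tag <source image> <repository>/<image-name>:<version>
$ docker tag realworld andretorrescodurance/realworld:0.1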

Now, instead of building from scratch, you can just use the image from Docker Hub in your docker-compose.yml and run it without sending the source files anywhere.

version: '3.1'

services:

  db:
    image: postgres:10
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: realworld

  realworld:
    image: andretorrescodurance/realworld:0.1
    ports:
      - "4321:4321"
    environment:
      DB_HOST: "db"
    depends_on:
      - db
    volumes:
      - "./scripts:/scripts"
    command: ["/scripts/wait-for-db.sh", "java", "-jar", "realworldkata-1.0-SNAPSHOT-all.jar"]

And with that change done, and the containers working, we can end this post.

Sources:

https://docs.docker.com/engine/reference/builder/

https://docs.docker.com/develop/develop-images/dockerfile_best-practices/

https://docs.docker.com/compose/compose-file/

Introduction to Test Doubles

When you are writing unit tests you are faced with many kinds of collaborators, and they all have very specific behaviours. Knowing which test double to use at the right time can make your life easier.

Dummy

The first one is the Dummy Object. It's the simplest one: a Dummy is just an object that you pass to satisfy a constructor. It will not have any method implemented, and it shouldn't.

When we are testing a class, we don't want to do anything with its logger, so what do we do?

For example, there’s this PaymentService that has a logger:

public interface Logger {
    void append(String text);
}

public class PaymentService {

    private Logger logger;

    public PaymentService(Logger logger) {
        this.logger = logger;
    }

    public PaymentRequest createPaymentRequest(Sale sale, CreditCard creditCard) {
        logger.append("Creating payment for sale " + sale.toString());
        throw new UnsupportedOperationException();
    }
}

To start writing the test, we have to satisfy the dependency on Logger, but the real implementation isn't good for unit tests: it would probably write the logs to a text file or send them somewhere else, which breaks the isolation of the test. Besides, we don't want to check anything about the logs; they have nothing to do with the business logic at hand. So we are going to implement a Dummy for it.

public class LoggerDummy implements Logger {

    @Override
    public void append(String text) {}
}

Is that it? There's no code inside the Dummy. For this case we don't need any implementation inside, and we are ready to write the test.

class PaymentServiceShould {

    @Test
    void create_payment_request() {
        LoggerDummy loggerDummy = new LoggerDummy();
        Customer customer= new Customer("name", "address");
        Item item = new Item("item", 1000);
        List<Item> items= asList(item);
        Sale sale = new Sale(customer, items);
        CreditCard creditCard = new CreditCard(customer, "1");

        PaymentService paymentService = new PaymentService(loggerDummy);
        PaymentRequest actual = paymentService.createPaymentRequest(sale, creditCard);
        assertEquals(new PaymentRequest(1000, "1"), actual);
    }
}

Stubs

Stubs are a bit more complex: they provide canned answers for our calls. They still don't have any logic, but instead of throwing an error they return a pre-defined value.

When you are testing, you want your tests to be deterministic and repeatable, so they don't stop working some time later because of a change in a collaborator.

Now the PaymentRequest has to contain the credit card operator fee. The rate of this fee is defined by the credit card operator, which is identified by the first four digits of the card. To implement this, you have to create a stub and make the necessary changes to the PaymentService. The first step is implementing the interface that we need for our stub and production code. This is the part where you do some design up front, thinking about what the parameters of your stub should be and what should be returned. Don't think about the internal implementation, but about the contract that you have with that collaborator:

public interface OperatorRate {
    int feeRate(String operator);
}

With the interface defined we can start to write the stub:

public class OperatorRateStub implements OperatorRate {
    private int rate;

    public OperatorRateStub(int rate) {
        this.rate = rate;
    }

    @Override
    public int feeRate(String operator) {
        return rate;
    }
}

The stub will always return the value passed in the constructor; we have full control of it, and it's completely isolated from the production code. Now we implement the test code:

@Test
void create_payment_request() {
    LoggerDummy loggerDummy = new LoggerDummy();
    Customer customer= new Customer("name", "address");
    Item item = new Item("item", 1000);
    List<Item> items= asList(item);
    Sale sale = new Sale(customer, items);
    CreditCard creditCard = new CreditCard(customer, "1");

    OperatorRate operatorRate = new OperatorRateStub(10);
    PaymentService paymentService = new PaymentService(loggerDummy, operatorRate);
    PaymentRequest actual = paymentService.createPaymentRequest(sale, creditCard);
    assertEquals(new PaymentRequest(1000, "1", 100), actual);
}

Mocks

Mocks are objects that you tell what they are expected to receive. They are used to verify the behaviour between the System Under Test and its collaborators.

You set your expectations, call the method of the SUT and verify if the method was called at the end.

Moving forward with the system we are maintaining, there's a new User Story for us to complete: the customer wants an email to be sent to the administration for every PaymentRequest over 1000 pounds. There are two reasons for isolating the email sending:

  • Sending emails is an activity that talks to the outside world. We can't have an email sent every time we run our tests; this would slow down the tests and would be really annoying.
  • The PaymentService should not be aware of the implementation of the email sender. Mixing those two things would create coupling and make it harder to maintain the service or to change how we send emails; that's why the email sender gets a service of its own.

The steps we need to follow are:

  • Create an interface
  • Create a mock implementing the interface
  • Write our test

The interface:

public interface PaymentEmailSender {
    void send(PaymentRequest paymentRequest);
}

Then we have to implement our mock:

public class EmailSenderMock implements PaymentEmailSender {

    private List<PaymentRequest> paymentRequestSent = new ArrayList<>();
    private List<PaymentRequest> expectedPaymentRequest = new ArrayList<>();

    @Override
    public void send(PaymentRequest paymentRequest) {
        paymentRequestSent.add(paymentRequest);
    }

    public void expect(PaymentRequest paymentRequest) {
        expectedPaymentRequest.add(paymentRequest);
    }

    public void verify() {
        assertEquals(paymentRequestSent, expectedPaymentRequest);
    }
}

This is a very simple mock object, but it does the job. We implement the interface we just created, make the send method store the PaymentRequest, and add two methods to set up the mock: expect and verify. The verify method uses JUnit's assertEquals to compare the expected values with the ones passed by the SUT.

We write the test for the new user story:

@Test
void send_email_to_the_administration_if_sale_is_over_1000() {
    EmailSenderMock emailSender = new EmailSenderMock();
    LoggerDummy loggerDummy = new LoggerDummy();
    OperatorRate operatorRate = new OperatorRateStub(10);
    PaymentService paymentService = new PaymentService(loggerDummy, operatorRate, emailSender);
    PaymentRequest paymentRequest = new PaymentRequest(1000, "1", 100);
    Customer customer= new Customer("name", "address");
    Item item = new Item("item", 1000);
    List<Item> items = asList(item);
    Sale sale = new Sale(customer, items);
    CreditCard creditCard = new CreditCard(customer, "1");

    paymentService.createPaymentRequest(sale, creditCard);

    emailSender.expect(paymentRequest);
    emailSender.verify();
}

and the result of the test is:

org.opentest4j.AssertionFailedError: 
Expected :[]
Actual   :[PaymentRequest{total=1000, cardNumber='1', gatewayFee=100}]

Then we move to implement the production code:

    public class PaymentService {
    
        private Logger logger;
        private OperatorRate operatorRate;
        private final EmailSender emailSender;
    
        public PaymentService(Logger logger, OperatorRate operatorRate, EmailSender emailSender) {
            this.logger = logger;
            this.operatorRate = operatorRate;
            this.emailSender = emailSender;
        }
    
        public PaymentRequest createPaymentRequest(Sale sale, CreditCard creditCard) {
            logger.append("Creating payment for sale: " + sale);
    
            int feeRate = operatorRate.feeRate(creditCard.cardNumber);
            int fee = (feeRate * sale.total()) / 100;
    
            PaymentRequest paymentRequest = new PaymentRequest(sale.total(), creditCard.cardNumber, fee);
    
            if (sale.total() >= 1000) {
                emailSender.send(paymentRequest);
            }
            return paymentRequest;
        }
    }

Tests passing and we are done with our story.

Spy

Think of a spy as someone infiltrated in your SUT, recording its every move, just like a movie spy. Unlike mocks, the spy is silent; it's up to you to assert based on the data it provides.

You use spies when you are not really sure what your SUT will call on your collaborator, so you record everything and later assert on what the spy captured.

For this example we can use the same interface that we created for the mock and implement a new test with our spy.

public class PaymentEmailSpy implements PaymentEmailSender {

    private List<PaymentRequest> paymentRequests = new ArrayList<>();

    @Override
    public void send(PaymentRequest paymentRequest) {
        paymentRequests.add(paymentRequest);
    }

    public int timesCalled() {
        return paymentRequests.size();
    }

    public boolean calledWith(PaymentRequest paymentRequest) {
        return paymentRequests.contains(paymentRequest);
    }
}

The implementation of the Spy is close to the mock's, but instead of telling it the calls we are expecting, we just record the behaviour of the class. Then, in the test, we can assert whatever we need.

class PaymentServiceShould {

    private OperatorRate operatorRate;
    private EmailSenderMock emailSender;
    private PaymentService paymentService;
    private LoggerDummy loggerDummy;
    public static final Customer BOB = new Customer("Bob", "address");
    public static final Item IPHONE = new Item("iPhone X", 1000);
    public static final CreditCard BOB_CREDIT_CARD = new CreditCard(BOB, "1");

    @BeforeEach
    void setUp() {
        loggerDummy = new LoggerDummy();
        operatorRate = new OperatorRateStub(10);
        emailSender = new EmailSenderMock();
        paymentService = new PaymentService(loggerDummy, operatorRate, emailSender);
    }

    
    @Test
    void not_send_email_for_sales_under_1000() {
        Item iphoneCharger = new Item("iPhone Charger", 50);
        Sale sale = new Sale(BOB, asList(iphoneCharger));
        EmailSenderSpy emailSpy = new EmailSenderSpy();
        PaymentService spiedPaymentService = new PaymentService(loggerDummy, operatorRate, emailSpy);

        spiedPaymentService.createPaymentRequest(sale, BOB_CREDIT_CARD);

        assertEquals(0, emailSpy.timesCalled());
    }
}

We create a PaymentService with the spy, make the necessary calls, and then assert based on the data provided by the spy.

Fakes

Fakes are different from all the other examples we've seen: instead of canned responses or just recording calls, they have a simplified version of the business logic.

An example of a Fake would be an in-memory repository where we have the logic to store, retrieve and even do some queries, but without a real database behind it; everything can be stored in a list. Or you can fake an external service like an API.

In this case we can create a fake to simulate the API that connects to the payment gateway and use it to test our production implementation of OperatorRate.

Our production implementation will send JSON to the gateway with the credit card operator and receive JSON back with the rate, then parse it and return the value contained in the JSON.

So we start writing the test for the CreditCardRate class that implements OperatorRate:

public class CreditCardRateShould {

    @Test
    void return_rate_for_credit_card_payment() {
        PaymentGateway fakeCreditCardGateway = new FakeCreditCardGateway();
        CreditCardRate creditCardRate = new CreditCardRate(fakeCreditCardGateway);
        String operator = "1234123412341234";

        int result = creditCardRate.feeRate(operator);

        assertEquals(10, result);
    }
}

The class being tested talks to an external service, which is faked by FakeCreditCardGateway.

The fake gateway parses JSON, applies some really simple logic, and returns another JSON.

public class FakeCreditCardGateway implements PaymentGateway {
    @Override
    public String rateFor(String cardOperator) {
        String operator = parseJson(cardOperator);

        int rate = 15;

        if (operator.startsWith("1234")) {
            rate = 10;
        }

        if (operator.startsWith("1235")) {
            rate = 8;
        }

        return jsonFor(rate);
    }

    private String jsonFor(int rate) {
        return new JsonObject()
                .add("rate", rate)
                .toString();
    }

    private String parseJson(String cardOperator) {
        JsonObject payload = Json.parse(cardOperator).asObject();
        return payload.getString("operator", "");
    }
}

And finally, there is the production code for the CreditCardRate class:

public class CreditCardRate implements OperatorRate {
    private PaymentGateway paymentGateway;

    public CreditCardRate(PaymentGateway paymentGateway) {
        this.paymentGateway = paymentGateway;
    }

    @Override
    public int feeRate(String operator) {

        String payload = jsonFor(operator);

        String rateJson = paymentGateway.rateFor(payload);

        return parse(rateJson);
    }

    private int parse(String rateJson) {
        return Json.parse(rateJson).asObject()
                .getInt("rate", 0);
    }

    private String jsonFor(String operator) {
        return new JsonObject()
                .add("operator", operator)
                .toString();
    }
}

With this fake we can test whether the JSON we are sending to the gateway is right, have some logic so the fake gateway can answer different rates, and finally test whether we parse the response JSON properly.

This is a very ad hoc implementation that avoids dealing with an HTTP request, but it gives an idea of how this would translate to the real world. If you want to write integration tests that make real HTTP calls, you can take a look at things like WireMock and mockingjay-server.

Mockito and the duck syndrome

Not only Mockito but most mocking frameworks have this duck syndrome where they can do multiple things: a duck can swim, fly, and walk. Those frameworks work as dummies, mocks, spies and stubs.

So how do we know what we are using when mocking with a framework? To figure that out, we are going to take the tests written with the manual test doubles and refactor them to use Mockito.

class PaymentServiceShould {

    private OperatorRate operatorRate;
    private EmailSenderMock emailSender;
    private PaymentService paymentService;
    private LoggerDummy loggerDummy;
    public static final Customer BOB = new Customer("Bob", "address");
    public static final Item IPHONE = new Item("iPhone X", 1000);
    public static final CreditCard BOB_CREDIT_CARD = new CreditCard(BOB, "1");

    @BeforeEach
    void setUp() {
        loggerDummy = new LoggerDummy();
        operatorRate = new OperatorRateStub(10);
        emailSender = new EmailSenderMock();
        paymentService = new PaymentService(loggerDummy, operatorRate, emailSender);
    }

    @Test
    void create_payment_request() {
        Sale sale = new Sale(BOB, asList(IPHONE));

        PaymentRequest actual = paymentService.createPaymentRequest(sale, BOB_CREDIT_CARD);

        assertEquals(new PaymentRequest(1000, "1", 100), actual);
    }

    @Test
    void send_email_to_the_administration_if_sale_is_over_1000() {
        Sale sale = new Sale(BOB, asList(IPHONE));

        paymentService.createPaymentRequest(sale, BOB_CREDIT_CARD);

        emailSender.expect(new PaymentRequest(1000, "1", 100));
        emailSender.verify();
    }

    @Test
    void not_send_email_for_sales_under_1000() {
        Item iphoneCharger = new Item("iPhone Charger", 50);
        Sale sale = new Sale(BOB, asList(iphoneCharger));
        EmailSenderSpy emailSpy = new EmailSenderSpy();
        PaymentService spiedPaymentService = new PaymentService(loggerDummy, operatorRate, emailSpy);

        spiedPaymentService.createPaymentRequest(sale, BOB_CREDIT_CARD);

        assertEquals(0, emailSpy.timesCalled());
    }

    @Test
    void send_email_to_hmrs_for_sales_over_10_thousand() {
        Item reallyExpensiveThing = new Item("iPhone Charger", 50000);
        Sale sale = new Sale(BOB, asList(reallyExpensiveThing));
        EmailSenderSpy emailSpy = new EmailSenderSpy();
        PaymentService spiedPaymentService = new PaymentService(loggerDummy, operatorRate, emailSpy);

        spiedPaymentService.createPaymentRequest(sale, BOB_CREDIT_CARD);

        assertEquals(2, emailSpy.timesCalled());
    }
}

Dummy

When you create a Mockito mock, the object is a Dummy; it doesn't have any behaviour attached. So we can start refactoring the tests and change the LoggerDummy to a Mockito object.

    class PaymentServiceShould {

        private OperatorRate operatorRate;
        private EmailSenderMock emailSender;
        private PaymentService paymentService;
-    private LoggerDummy loggerDummy;
+    private Logger logger;
        public static final Customer BOB = new Customer("Bob", "address");
        public static final Item IPHONE = new Item("iPhone X", 1000);
        public static final CreditCard BOB_CREDIT_CARD = new CreditCard(BOB, "1");

        @BeforeEach
        void setUp() {
-        loggerDummy = new LoggerDummy();
+        logger = mock(Logger.class);
            operatorRate = new OperatorRateStub(10);
            emailSender = new EmailSenderMock();
-        paymentService = new PaymentService(loggerDummy, operatorRate, emailSender);
+        paymentService = new PaymentService(logger, operatorRate, emailSender);
        }

        @Test
@@ -48,7 +49,7 @@ class PaymentServiceShould {
            Item iphoneCharger = new Item("iPhone Charger", 50);
            Sale sale = new Sale(BOB, asList(iphoneCharger));
            EmailSenderSpy emailSpy = new EmailSenderSpy();
-        PaymentService spiedPaymentService = new PaymentService(loggerDummy, operatorRate, emailSpy);
+        PaymentService spiedPaymentService = new PaymentService(logger, operatorRate, emailSpy);

            spiedPaymentService.createPaymentRequest(sale, BOB_CREDIT_CARD);

@@ -60,7 +61,7 @@ class PaymentServiceShould {
            Item reallyExpensiveThing = new Item("iPhone Charger", 50000);
            Sale sale = new Sale(BOB, asList(reallyExpensiveThing));
            EmailSenderSpy emailSpy = new EmailSenderSpy();
-        PaymentService spiedPaymentService = new PaymentService(loggerDummy, operatorRate, emailSpy);
+        PaymentService spiedPaymentService = new PaymentService(logger, operatorRate, emailSpy);

            spiedPaymentService.createPaymentRequest(sale, BOB_CREDIT_CARD);

All tests are passing and we don’t have to use the LoggerDummy implementation that we had.

Stubs

Now we have to start giving behaviour to our mocks. Following the same order as the manual test doubles, we transform the Mockito object into a stub; for that, Mockito has the given() method, where we set a value to be returned.

By default, a Mockito mock returns 0 for primitives, null for objects, and an empty collection for collection types like List, Map, or Set.

The given() works in the following way:

given(<method to be called>).willReturn(returnValue);

and we change the implementation in our tests.

    import static java.util.Arrays.asList;
    import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.mockito.ArgumentMatchers.anyString;
+import static org.mockito.BDDMockito.given;
    import static org.mockito.Mockito.mock;

@@ -20,9 +22,10 @@ class PaymentServiceShould {
        @BeforeEach
        void setUp() {
            logger = mock(Logger.class);
-        operatorRate = new OperatorRateStub(10);
+        operatorRate = mock(OperatorRate.class);
            emailSender = new EmailSenderMock();
            paymentService = new PaymentService(logger, operatorRate, emailSender);
+        given(operatorRate.feeRate(BOB_CREDIT_CARD.cardNumber)).willReturn(10);
    }

Now the mock is acting like a stub and the tests are passing.

Mocks and Spies

In the previous test, we are still using the EmailSenderMock that we created; now we can replace it with the one from Mockito.

@@ -8,11 +8,12 @@ import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.ArgumentMatchers.anyString;
    import static org.mockito.BDDMockito.given;
    import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;

    class PaymentServiceShould {

        private OperatorRate operatorRate;
-    private EmailSenderMock emailSender;
+    private EmailSender emailSender;
        private PaymentService paymentService;
        private Logger logger;
        public static final Customer BOB = new Customer("Bob", "address");
@@ -23,7 +24,7 @@ class PaymentServiceShould {
        void setUp() {
            logger = mock(Logger.class);
            operatorRate = mock(OperatorRate.class);
-        emailSender = new EmailSenderMock();
+        emailSender = mock(EmailSender.class);
            paymentService = new PaymentService(logger, operatorRate, emailSender);
            given(operatorRate.feeRate(BOB_CREDIT_CARD.cardNumber)).willReturn(10);
        }
@@ -43,8 +44,8 @@ class PaymentServiceShould {

            paymentService.createPaymentRequest(sale, BOB_CREDIT_CARD);

-        emailSender.expect(new PaymentRequest(1000, "1", 100));
-        emailSender.verify();
+        PaymentRequest paymentRequest = new PaymentRequest(1000, "1", 100);
+        verify(emailSender).send(paymentRequest);
        }

All tests are passing. That's great, but there's a difference between the Mockito mock and the one we wrote: this time we didn't have to specify what we were expecting, we went straight to the verify step. That's Mockito taking multiple roles again; a mock created by Mockito records all the received calls, like a Spy.

We still have the tests that use the manual spy; we can change them to use only Mockito.

class PaymentServiceShould {
        void not_send_email_for_sales_under_1000() {
            Item iphoneCharger = new Item("iPhone Charger", 50);
            Sale sale = new Sale(BOB, asList(iphoneCharger));
-        EmailSenderSpy emailSpy = new EmailSenderSpy();
-        PaymentService spiedPaymentService = new PaymentService(logger, operatorRate, emailSpy);

-        spiedPaymentService.createPaymentRequest(sale, BOB_CREDIT_CARD);
+        paymentService.createPaymentRequest(sale, BOB_CREDIT_CARD);

-        assertEquals(0, emailSpy.timesCalled());
+        verify(emailSender, never()).send(any(PaymentRequest.class));
        }

        @Test
        void send_email_to_hmrs_for_sales_over_10_thousand() {
            Item reallyExpensiveThing = new Item("iPhone Charger", 50000);
            Sale sale = new Sale(BOB, asList(reallyExpensiveThing));
-        EmailSenderSpy emailSpy = new EmailSenderSpy();
-        PaymentService spiedPaymentService = new PaymentService(logger, operatorRate, emailSpy);

-        spiedPaymentService.createPaymentRequest(sale, BOB_CREDIT_CARD);
+        paymentService.createPaymentRequest(sale, BOB_CREDIT_CARD);

-        assertEquals(2, emailSpy.timesCalled());
+        PaymentRequest paymentRequest = new PaymentRequest(50000, "1", 5000);
+        verify(emailSender, times(2)).send(paymentRequest);
        }
    }

verify has multiple modifiers like:

  • atLeast(int)
  • atLeastOnce()
  • atMost(int)
  • times(int)
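For example, using the same emailSender mock from the tests above:

// Passes if send() was called one or more times with this request
verify(emailSender, atLeastOnce()).send(paymentRequest);
// Fails if send() was called more than three times with this request
verify(emailSender, atMost(3)).send(paymentRequest);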

Again we have the mock object taking multiple roles, this time as a Mock and a Spy.

What about Fakes?

Fakes are objects with logic inside, so we can't create them with Mockito. That's not a problem, though: for most cases you will not need a Fake. Fakes tend to grow, and you usually end up writing tests to check that the Fake itself behaves correctly.

As Uncle Bob says in his post "The Little Mocker":

Yes, Hmmm. I don’t often write fakes. Indeed, I haven’t written one for over thirty years.

Good practices and smells

CQS, Stubs and Mocks

If you are not familiar with CQS go ahead and read those:

OO Tricks: The Art of Command Query Separation

bliki: CommandQuerySeparation

A good rule of thumb for deciding where to use stubs and mocks is to follow the Command Query Separation principle, where you have:

Commands

  • They don't have return values.
  • Used to mutate data inside your class.
  • Use verify() when mocking with Mockito.

Queries

  • They query data from the class.
  • They don't create any side effects.
  • They just return data.
  • Use given() when mocking with Mockito.
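Applied to the PaymentService examples, the split looks like this:

// Query: feeRate() returns data, so we stub it with given()
given(operatorRate.feeRate("1")).willReturn(10);

// Command: send() returns nothing and has a side effect, so we verify() it
verify(emailSender).send(paymentRequest);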

Only Mock/Stub classes you own

One thing we have to understand about mocking is that it isn't only about testing, but about designing how our SUT works with its collaborators. It's hard to find an application that doesn't use a third-party library, but this doesn't mean you have to mock them; in fact, you should never do that. The main problem with mocking third-party libraries is that you are subject to their changes: a change of signature would break every test that mocks them.

The solution? Write a thin wrapper around the library. Using mocking tools, you can design a wrapper that receives and returns only the necessary information. But how do we test the wrappers themselves?

The wrappers can be tested according to the dependency you have. If you have a wrapper for a database layer, you can put integration tests in another source set, so you can run your unit tests without the integration tests slowing you down.

Don't mock data structures

When you have your own data structures you don't have to mock them; you can simply instantiate them with the data that you need. If the data structure is hard to instantiate, or you need multiple objects, you can use the Builder pattern.

You can learn about the Builder pattern here.
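A hypothetical SaleBuilder (not part of the code above) would let the tests read like this:

Sale sale = new SaleBuilder()
        .withCustomer(BOB)
        .withItem(new Item("iPhone X", 1000))
        .build();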

Make your tests minimalists

When testing with mock objects, it's important not to make your tests too brittle; you should be able to refactor the code base without the tests being an annoyance. If that is happening, you might have over-specified things to check with your mocks, and when it happens in multiple tests it ends up slowing development. The solution is to re-examine the code and see whether the specification or the code has to change.

Imagine that, instead of using a Dummy for the logger in the example at the beginning, a mock had been used. The mock would be verifying every message the logger received, and changing anything would break the test. No one wants their tests breaking just because they fixed a typo in the logs.

Don’t use mocks/stubs to test boundary/isolated objects

Objects that don't have collaborators don't have to be tested with mock objects; an object like that just needs assertions on the values it returns or stores. It sounds a bit obvious, but it's good to reinforce.

For a dependency like a JSON parser, you can test the wrapper against the real dependency. You can see this in action in the example for the Fake: instead of mocking the JSON library, the real one was used. We could introduce a wrapper to do the conversion, and then we would test that wrapper with the real JSON library to check that the JSON created is right; we would never mock that dependency.

Don’t add behaviour

Mocks are test doubles, and you should not add complexity to your test doubles. Fakes contain some logic, but apart from them, no test double should contain logic; if it does, it's a symptom that you have misplaced responsibilities.

An example of this problem is a mock that returns another mock. If you have something like a service that returns another service, you might want to take a second look at the design of your application.

Only mock/stub your immediate neighbours

A complex object with multiple dependencies can be hard to test, and one symptom is that the setup for the test is complex and the test is hard to read. Unit tests should be focused on testing one thing at a time and should only set expectations on their neighbours (think Law of Demeter). You might have to introduce a role to bridge the object and its surroundings.

Too Many mocks/stubs

Your SUT might have multiple collaborators, and your tests start getting more complex to set up and harder to read. As in the other situations we saw, the SUT might have too many responsibilities; to solve that, break your object into smaller, more focused ones.

So if you have a service with multiple classes in the constructor like:

public ReadCommand(UserRepository userRepository, MessageRepository messageRepository, 
                    MessageFormatter messageFormatter, Console console, String username) {
    this.userRepository = userRepository;
    this.messageRepository = messageRepository;
    this.messageFormatter = messageFormatter;
    this.console = console;
    this.username = username;
}

You can refactor this to become:

public ReadCommand(UserRepository userRepository, MessageRepository messageRepository, 
                                        MessagePrinter messagePrinter, String username) {
    this.userRepository = userRepository;
    this.messageRepository = messageRepository;
    this.messagePrinter = messagePrinter;
    this.username = username;
}

Now the MessagePrinter has the MessageFormatter and the Console working together, so when you test the ReadCommand class, you just have to verify that the method to print was called.
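A sketch of what that MessagePrinter could look like (the print and printLine signatures are assumptions, not from the original kata):

public class MessagePrinter {

    private final MessageFormatter formatter;
    private final Console console;

    public MessagePrinter(MessageFormatter formatter, Console console) {
        this.formatter = formatter;
        this.console = console;
    }

    // The formatter and the console now collaborate behind a single role
    public void print(Message message) {
        console.printLine(formatter.format(message));
    }
}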

Generating code with ASP.Net Core

One of the things that I really like in Rails is the ability to generate files using scaffolding through the CLI, and recently I've started to learn ASP.Net Core.

.Net Core plays nice with the CLI: using dotnet you can create projects and manage migrations and packages, and I could even debug applications using VS Code. But one thing was missing: the scaffolder for views and controllers. In Rails, even without scaffolding, creating a controller was fast; here I had to deal with namespaces and imports just to create a simple controller. But let's stop with the story time and see something useful.

Creating an ASP.NET Core project

Obviously, the first thing is to download .NET Core from https://www.microsoft.com/net/core

With the CLI installed we can start our new project

mkdir ASPBlog
cd ASPBlog
dotnet new mvc
dotnet restore

And now, to start generating code for your ASP.NET Core application, you first need to add three entries to your .csproj file.

The first one is the PackageTargetFallback to the .NET Framework, needed because not every package works with .NET Core yet; I’m hoping this will change with Core 2.0.

<PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
    <PackageTargetFallback>$(PackageTargetFallback);dotnet5.6;portable-net45+win8</PackageTargetFallback>
</PropertyGroup>

Then you add the following package:

<PackageReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Design" Version="1.1.1" />

And finally you add the CLI tool:

<ItemGroup>
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="1.0.0" />
</ItemGroup>

And the final step is to execute dotnet restore. Now you can check whether the generator is up and running with dotnet aspnet-codegenerator:

Usage: dotnet aspnet-codegenerator --project [projectname] [code generator name]

Code Generators:
view
controller
area

Try dotnet aspnet-codegenerator --project [projectname] [code generator name] -? for help about specific code generator.
RunTime 00:00:06.74

Controllers

For the controller scaffolder we have the following options:

Options:
  --help|-h|-?                         Show help information
  --useAsyncActions|-async             Switch to indicate whether to generate async controller actions
  --noViews|-nv                        Switch to indicate whether to generate CRUD views
  --controllerName|-name               Name of the controller
  --restWithNoViews|-api               Specify this switch to generate a Controller with REST style API, noViews is assumed and any view related options are ignored
  --readWriteActions|-actions          Specify this switch to generate Controller with read/write actions when a Model class is not used
  --model|-m                           Model class to use
  --dataContext|-dc                    DbContext class to use
  --referenceScriptLibraries|-scripts  Switch to specify whether to reference script libraries in the generated views
  --layout|-l                          Custom Layout page to use
  --useDefaultLayout|-udl              Switch to specify that default layout should be used for the views
  --force|-f                           Use this option to overwrite existing files

Generating our first controller

Let’s start by creating an empty controller for our static pages, like About and Contact. Run the following command: dotnet aspnet-codegenerator controller -name StaticPagesController -outDir Controllers

If everything went alright you should have a new file called StaticPagesController.cs that looks like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

namespace ASPBlog.Controllers
{
    public class StaticPagesController : Controller
    {
        public IActionResult Index()
        {
            return View();
        }
    }
}

The generator created a controller with all the usings, the namespace and an Index action. But what were the arguments we used? The -name parameter defines the name of the controller, and -outDir sets the folder where the controller will be created. If you don’t use -outDir, the controller will be created in the root of the project. With the controller created, all that is needed is to add the About and Contact actions by adding the following code to the class:

        public IActionResult Contact()
        {
            return View();
        }

        public IActionResult About()
        {
            return View();
        }

And finally we can create a view to test our code. Create the necessary files and folders for the path Views/StaticPages/About.cshtml and add:

@{
    ViewData["Title"] = "About";
}

<h2>About</h2>
<p>Placeholder content for the About page.</p>

We can check the result by running the ASP.NET server with dotnet run and visiting http://localhost:5000/StaticPages/About.

Controller with CRUD

We can do more: some controllers are simple CRUD controllers, with a context and some actions. We can create a controller for our posts with everything we need using only the aspnet-codegenerator.

First we need a Model and a Database. In this case let’s use SQLite and a Post model with title and body.

Post Model

Not the HTTP POST, but the kind we write on blogs. We create the file Models/Post.cs:

using System.ComponentModel.DataAnnotations;

namespace ASPBlog.Models
{
    public class Post
    {
        public int Id { get; set; }
        public string Title { get; set; }

        [DataType(DataType.MultilineText)]
        public string Body { get; set; }
    }
}

SQLite database

Add the following packages to your .csproj file:

<PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="1.1.2" />

<DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="1.0.0" />

and finally restore the packages with dotnet restore. Now we create a Context for EntityFramework so we can persist our model.

Create the file Context/BlogContext.cs:

using Microsoft.EntityFrameworkCore;
using ASPBlog.Models;

namespace ASPBlog.Context {
    public class BlogContext : DbContext
    {
        public BlogContext(DbContextOptions<BlogContext> options) : base(options) 
        {}

        public DbSet<Post> Posts { get; set; }
    }
}

and in Startup.cs we need to add the EntityFrameworkCore and Context namespaces; we also need to configure our database.

using Microsoft.EntityFrameworkCore;
using ASPBlog.Context;

...

        public void ConfigureServices(IServiceCollection services)
        {
            // Add framework services.
            services.AddMvc();
            services.AddDbContext<BlogContext>(options => 
                options.UseSqlite("Data Source=ASPBlog.db") 
            );
        }
		
...

With our Model and Database configured, we just need to run the commands to create the migrations and generate the database.

dotnet ef migrations add "Initial Commit"
dotnet ef database update

Generating a CRUD for Post

Now that everything is created and in place, we can generate our PostsController. Run the following command:

dotnet aspnet-codegenerator controller -name PostsController -outDir Controllers -m Post -dc BlogContext

These are the new parameters that we used:

  • -m: the Model class that we want to use to create the actions in the controller.
  • -dc: the DbContext class to use. We are doing CRUD operations, so the generator needs to know which context reaches the database.

Now let’s take a look at the controller that we generated:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Rendering;
using Microsoft.EntityFrameworkCore;
using ASPBlog.Context;
using ASPBlog.Models;

namespace ASPBlog.Controllers
{
    public class PostsController : Controller
    {
        private readonly BlogContext _context;

        public PostsController(BlogContext context)
        {
            _context = context;    
        }

        // GET: Posts
        public async Task<IActionResult> Index()
        {
            return View(await _context.Posts.ToListAsync());
        }

        // GET: Posts/Details/5
        public async Task<IActionResult> Details(int? id)
        {
            if (id == null)
            {
                return NotFound();
            }

            var post = await _context.Posts
                .SingleOrDefaultAsync(m => m.Id == id);
            if (post == null)
            {
                return NotFound();
            }

            return View(post);
        }

        // GET: Posts/Create
        public IActionResult Create()
        {
            return View();
        }

        // POST: Posts/Create
        // To protect from overposting attacks, please enable the specific properties you want to bind to, for 
        // more details see http://go.microsoft.com/fwlink/?LinkId=317598.
        [HttpPost]
        [ValidateAntiForgeryToken]
        public async Task<IActionResult> Create([Bind("Id,Title,Body")] Post post)
        {
            if (ModelState.IsValid)
            {
                _context.Add(post);
                await _context.SaveChangesAsync();
                return RedirectToAction("Index");
            }
            return View(post);
        }

        // GET: Posts/Edit/5
        public async Task<IActionResult> Edit(int? id)
        {
            if (id == null)
            {
                return NotFound();
            }

            var post = await _context.Posts.SingleOrDefaultAsync(m => m.Id == id);
            if (post == null)
            {
                return NotFound();
            }
            return View(post);
        }

        // POST: Posts/Edit/5
        // To protect from overposting attacks, please enable the specific properties you want to bind to, for 
        // more details see http://go.microsoft.com/fwlink/?LinkId=317598.
        [HttpPost]
        [ValidateAntiForgeryToken]
        public async Task<IActionResult> Edit(int id, [Bind("Id,Title,Body")] Post post)
        {
            if (id != post.Id)
            {
                return NotFound();
            }

            if (ModelState.IsValid)
            {
                try
                {
                    _context.Update(post);
                    await _context.SaveChangesAsync();
                }
                catch (DbUpdateConcurrencyException)
                {
                    if (!PostExists(post.Id))
                    {
                        return NotFound();
                    }
                    else
                    {
                        throw;
                    }
                }
                return RedirectToAction("Index");
            }
            return View(post);
        }

        // GET: Posts/Delete/5
        public async Task<IActionResult> Delete(int? id)
        {
            if (id == null)
            {
                return NotFound();
            }

            var post = await _context.Posts
                .SingleOrDefaultAsync(m => m.Id == id);
            if (post == null)
            {
                return NotFound();
            }

            return View(post);
        }

        // POST: Posts/Delete/5
        [HttpPost, ActionName("Delete")]
        [ValidateAntiForgeryToken]
        public async Task<IActionResult> DeleteConfirmed(int id)
        {
            var post = await _context.Posts.SingleOrDefaultAsync(m => m.Id == id);
            _context.Posts.Remove(post);
            await _context.SaveChangesAsync();
            return RedirectToAction("Index");
        }

        private bool PostExists(int id)
        {
            return _context.Posts.Any(e => e.Id == id);
        }
    }
}

With one line of code we have a controller with all the actions for a CRUD using async and await. But wait, there is more: the code generator also created a folder with all the views for this controller; look at the Views/Posts folder. Finally, visit http://localhost:5000/Posts/Create to see the result.

That’s it, people. Thanks.

Mocking with Mockito

When running unit tests you might have to interact with other classes, like a class that calls your database or does some calculation over your data, but you want to test in isolation. How to do this? Mocking those classes can be the solution, and that’s where Mockito enters the scene.

We have this InvoiceService with two dependencies that are injected in the constructor. In this case we want to test in full isolation, so we can’t really call any methods from those dependencies. So how can we test without instantiating those classes?

public class InvoiceService {
    private InvoiceDao dao;
    private Mailer mailer;

    public InvoiceService(InvoiceDao dao, Mailer mailer) {
        this.dao = dao;
        this.mailer = mailer;
    }
	
    public void confirmCustomerInvoices(Customer customer) {
        List<Invoice> invoices = dao.customerOpenInvoices(customer);
        invoices.forEach(invoice -> {
            invoice.setConfirmed(true);
            dao.save(invoice);
            mailer.confirmationEmail(invoice);
        });
    }
}

Creating mocks

First we create our test and the setup; in the setup we instantiate the dependencies with Mockito’s mock() method:

public class InvoiceServiceTest  {

    private Customer customer;
    private Mailer mailer;
    private InvoiceDao dao;
    private InvoiceService service;

    @Before
    public void setup() {
        customer = new Customer();
        customer.setName("Sterling Archer");
        
        mailer = mock(Mailer.class);
        dao = mock(InvoiceDao.class);
        service = new InvoiceService(dao,mailer);
    }
}

The mock() method creates an instance that is manageable, so we can explicitly say what results its methods return. Now we are going to start our tests and see how we can use the mocks that we’ve created.
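As a side note, Mockito also offers an annotation-based setup that removes the explicit mock() calls; a sketch of an equivalent version of the setup above (the runner lives in org.mockito.junit in recent Mockito versions, org.mockito.runners in older ones):

import org.junit.Before;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class InvoiceServiceTest {

    @Mock
    private Mailer mailer;

    @Mock
    private InvoiceDao dao;

    private Customer customer;
    private InvoiceService service;

    @Before
    public void setup() {
        // The runner initializes the @Mock fields before each test.
        customer = new Customer();
        customer.setName("Sterling Archer");

        service = new InvoiceService(dao, mailer);
    }
}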

Crafting responses

Now we write our first test case: we are going to test the confirmCustomerInvoices method. If you check the method, you can see that the first thing it does is call the dao to search for the open invoices. But we are not using any database, so how will our dao return something? That’s where Mockito’s when() and thenReturn() methods come in: with them we can say what the dao.customerOpenInvoices(customer) call is going to return.

Follow the example:

@Test
public void confirmOpenInvoicesHasToChangeStatusToTrue() {
    Invoice invoice1 = new Invoice();
    invoice1.setCustomer(customer);

    Invoice invoice2 = new Invoice();
    invoice2.setCustomer(customer);

    Invoice invoice3 = new Invoice();
    invoice3.setCustomer(customer);

    List<Invoice> invoices = Arrays.asList(invoice1,invoice2,invoice3);
        
    when(dao.customerOpenInvoices(customer))
            .thenReturn(invoices);

    service.confirmCustomerInvoices(customer);
    invoices.forEach(invoice -> Assert.assertTrue(invoice.getConfirmed()));
}

We crafted our response by creating 3 invoices and setting them in a list. Then we used when(dao.customerOpenInvoices(customer)) so Mockito knows which call to intercept, and .thenReturn(invoices) to say what Mockito has to return.
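One detail worth knowing: thenReturn() calls can be chained to craft different responses for consecutive calls to the same method. Reusing the mocks above:

// First call returns the invoices, every call after that returns an empty list.
when(dao.customerOpenInvoices(customer))
        .thenReturn(invoices)
        .thenReturn(Collections.emptyList());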

Verifying methods execution

Now that we know how to make our mocks return the desired values, we can start verifying that all the methods in our code are being executed. We can do this using the verify method from Mockito. In this example we need to make sure that confirmCustomerInvoices saves the new state to the database and sends an e-mail to the client.

@Test
public void itHasToCallSaveAndMail() {
    Invoice invoice = new Invoice();
    invoice.setCustomer(customer);


    when(dao.customerOpenInvoices(customer))
            .thenReturn(Arrays.asList(invoice));

    service.confirmCustomerInvoices(customer);
    verify(dao).save(invoice);
    verify(mailer).confirmationEmail(invoice);
}

The verify method accepts an object, and you call that object’s methods to check that they were really called inside the tested method. You can also add the times() argument to make sure that the method is called exactly once, or however many times you want.

And if you want to check that a method is not executed, you can use the never() argument.
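A sketch of both, reusing the mocks from the test above (dao.delete() is a hypothetical method, only for illustration):

// Exactly one confirmation e-mail for this invoice...
verify(mailer, times(1)).confirmationEmail(invoice);

// ...and the hypothetical dao.delete() must never run.
verify(dao, never()).delete(invoice);

Both times() and never() are static imports from org.mockito.Mockito, just like mock(), when() and verify().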

Intercepting Objects

Sometimes you have to test something that is created inside the class under test: you can’t pass it in the constructor or inject it, but you still have to test it. How can we deal with this kind of situation? We can use an interceptor to retrieve the object.

Let’s take a look at our InvoiceService, now with a third dependency:


public class InvoiceService {

    private InvoiceDao dao;
    private Mailer mailer;
    private TaxDAO taxDAO;

    public InvoiceService(InvoiceDao dao, Mailer mailer, TaxDAO taxDAO) {
        this.dao = dao;
        this.mailer = mailer;
        this.taxDAO = taxDAO;
    }

    public void confirmCustomerInvoices(Customer customer) {
        List<Invoice> invoices = dao.customerOpenInvoices(customer);
        invoices.forEach(invoice -> {
            invoice.setConfirmed(true);
            dao.save(invoice);
            mailer.confirmationEmail(invoice);

            Tax tax =  new Tax(invoice);
            taxDAO.save(tax);
        });
    }
}

Every invoice has a 10% tax, which is calculated by the Tax class and then saved by the TaxDAO. We have to check that the value of the tax is exactly 10% of the invoice value, but we can’t inject a mock of the Tax class, so how can we achieve this? We use an interceptor.
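The Tax class itself is never shown here; a minimal sketch consistent with the 10% rule could be:

// Assumption: one possible version of Tax, matching the 10% rule above.
public class Tax {
    private final double value;

    public Tax(Invoice invoice) {
        this.value = invoice.getTotal() * 0.10;
    }

    public double getValue() {
        return value;
    }
}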

Since we are injecting the TaxDAO and passing the Tax to the dao, we can intercept it. Our test looks like this:

@Test
public void taxHasToBeTenPercentOfInvoiceAmount() {
    Invoice invoice = new Invoice();
    invoice.setCustomer(customer);

    Item item = new Item();
    item.setName("Black Turtleneck");
    item.setValue(new BigDecimal(150.0));

    invoice.addItem(item);
    when(dao.customerOpenInvoices(customer)).thenReturn(Arrays.asList(invoice));

    service.confirmCustomerInvoices(customer);

    //Creating the captor to intercept the Tax object
    ArgumentCaptor<Tax> captor = ArgumentCaptor.forClass(Tax.class);

    //Setting the point where the object will be intercepted
    verify(taxDAO).save(captor.capture());

    //Finally getting the object back
    Tax tax = captor.getValue();
    assertEquals(tax.getValue(),invoice.getTotal() * 0.10,0.001);
}

How does this work? You create an ArgumentCaptor<T> for the class you want to capture. Then you specify the exact moment when you want to capture the argument, and finally you use the getValue() method to get the object back.

After this you have an object to work with in your test case.

Exceptions

We want our service to keep working even when something bad happens; that’s why testing only happy paths isn’t a good idea, since things that shouldn’t happen often do. So how can we test exceptions with our mocks?

With Mockito we can raise exceptions when we want, so let’s write a test where an exception will be raised:

@Test
public void serviceShouldContinueInCaseOfError() {
    Invoice invoice1 = new Invoice();
    invoice1.setCustomer(customer);

    Invoice invoice2 = new Invoice();
    invoice2.setCustomer(customer);

    Invoice invoice3 = new Invoice();
    invoice3.setCustomer(customer);

    List<Invoice> invoices = Arrays.asList(invoice1,invoice2,invoice3);
    when(dao.customerOpenInvoices(customer)).thenReturn(invoices);

    doThrow(new RuntimeException()).when(dao).save(invoice1);
    service.confirmCustomerInvoices(customer);

    verify(dao).save(invoice2);
    verify(dao).save(invoice3);
    verify(mailer).confirmationEmail(invoice2);
    verify(mailer).confirmationEmail(invoice3);
}

We have the doThrow() method, where we set the exception to be raised, and the when() method, which receives the mock that will raise it; finally we call the method that will trigger the exception.
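One caveat: this test only passes if confirmCustomerInvoices survives the exception. The version of the service shown earlier would let the RuntimeException escape the forEach and fail the test on the first invoice. A sketch of the change (what to do with the error, such as logging, is left as an assumption):

public void confirmCustomerInvoices(Customer customer) {
    List<Invoice> invoices = dao.customerOpenInvoices(customer);
    invoices.forEach(invoice -> {
        try {
            invoice.setConfirmed(true);
            dao.save(invoice);
            mailer.confirmationEmail(invoice);

            Tax tax = new Tax(invoice);
            taxDAO.save(tax);
        } catch (RuntimeException e) {
            // Keep processing the remaining invoices; at minimum, log the failure.
        }
    });
}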

You can check the entire example here