Here’s a short account of a realisation I had this morning on the occasional usefulness of mocks in unit tests. I’ve never been a huge fan of mocking in unit tests. The code to set mocks up is tedious to write and tends to be fragile: make any change or refactor to the module being tested, even one that doesn’t change its behaviour, and all your tests break because the mocks are no longer called the way they were. This goes against one of the key principles of unit testing, in my opinion: it is the behaviour of the module that should be verified, not the actual implementation.

However, today I came across a case where a mock would have come in handy.

I was working on changes to a unit test for a module that interacts with DynamoDB. Because of my natural dislike for mocking, this test was written to use a real client making reads against a “real” DynamoDB database running in a Docker container.
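For context, the setup looks roughly like this. This is a minimal sketch in Python with boto3; the endpoint, region, and dummy credentials are assumptions on my part, standing in for whatever the real test harness does (DynamoDB Local happily accepts any credentials):

```python
import boto3

# Point a real DynamoDB client at a local instance, e.g. one started
# with the amazon/dynamodb-local Docker image listening on port 8000.
client = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:8000",  # DynamoDB Local in Docker
    region_name="us-east-1",               # assumed; local instance ignores it
    aws_access_key_id="dummy",             # DynamoDB Local accepts any credentials
    aws_secret_access_key="dummy",
)
```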

Today, I had to make a change to the public contract: the addition of a field controlling whether reads should be made with strong consistency or not. Given the nature of DynamoDB, this distinction only matters in a production setting, where data is replicated across nodes within the vast AWS data-centres spanning the world, and an eventually consistent read may return stale data. But for my lowly unit test, running against a single local instance with nothing to replicate to, setting this field would make no difference at all.
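In boto3 terms, the change amounts to threading a flag through to DynamoDB’s `ConsistentRead` request parameter. A sketch of what I mean, where the helper name, table name, and key shape are hypothetical rather than from my actual module:

```python
def get_record(client, record_id: str, consistent: bool = False) -> dict:
    """Hypothetical read helper; the new contract field maps onto
    DynamoDB's ConsistentRead request parameter."""
    response = client.get_item(
        TableName="records",            # assumed table name
        Key={"id": {"S": record_id}},   # assumed key shape
        ConsistentRead=consistent,      # the field added to the contract
    )
    return response.get("Item", {})
```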

So how can I verify that the module I’m working on would actually make strongly consistent reads?

This was my realisation: you can choose to do things a certain way, but there will always be trade-offs. Using a “live” database means much of the mock setup doesn’t need to be written, and you are actually exercising the code that makes real calls to the database. But when it comes to asserting that the calls made to the AWS client are correct, that requires intercepting them and verifying that the arguments passed in are what you expected. Difficult to do if you are using a real client instead of a mock.
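This is exactly where a mock earns its keep. A sketch with Python’s unittest.mock, reusing the hypothetical `get_record` helper from above:

```python
from unittest.mock import MagicMock

def test_reads_use_strong_consistency():
    # Stand in for the real boto3 client; no database needed.
    mock_client = MagicMock()
    mock_client.get_item.return_value = {"Item": {"id": {"S": "42"}}}

    get_record(mock_client, "42", consistent=True)

    # The part a live database can't verify: was the read actually
    # requested with strong consistency?
    _, kwargs = mock_client.get_item.call_args
    assert kwargs["ConsistentRead"] is True
```

The test is brittle in all the ways I complained about at the start, but it asserts the one thing the Docker-backed test cannot: the shape of the request itself.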