Test Driven Development (TDD)

How do we produce good quality code? We start by writing our tests as early as possible… that is, before writing the code. Why? Because in this way we document exactly what solution we want to build and the requirements our system should satisfy. Our tests effectively become the documentation for our code and for how our system can be used. At the same time we build up a safety net of automated regression tests that prevents accidental breakage and increases our confidence that we can make changes safely further down the line.

Why should we test-drive our solutions?

  • Frequent manual testing is impractical. The only way forward, then, is automated regression testing.
  • We want to be able to write simple code. This requires constant refactoring for any system of an interesting size, and refactoring under test is the safest way to cope with unanticipated changes.
  • Tests help us prioritise which parts of the code to write first.
  • Tests help us identify the communication patterns necessary for our collaborating modules of code.
  • Tests also help us structure our code in a way that makes the modules easy to work with.
  • The effort of unit testing helps maintain the internal quality of the code.
  • Testing helps to define code that will not only fit together (via its interfaces) but will also work together (communication protocol).

Incremental development builds a system feature by feature, instead of building all the layers and components and integrating them at the end. Each feature is implemented as an end-to-end slice through all the relevant parts of the system. The system is always integrated and ready for deployment.
Iterative development refines the implementation of features in response to feedback until they are good enough.

TDD turns testing into a design activity. You end up writing code whose dependencies are easy to substitute, that is easy to work with (because your tests have to work with it) and that is simple to implement.

Acceptance/functional tests tell us about the external quality of the system.
Integration tests fall somewhere in the middle, telling us about both the internal and external quality of the system.
Unit tests tell us about the internal quality of the system.

Coupling

Two units of code are coupled if a change to one forces a change in the other.
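
As a sketch of the difference (hypothetical class names throughout), compare a service that constructs a concrete dependency itself with one that depends on an interface:

// Tightly coupled: the service builds a concrete EmailClient itself, so any
// change to EmailClient's constructor forces a change here too.
class EmailClient {
    EmailClient(String smtpHost) { /* ... */ }
    void send(String message) { /* ... */ }
}

class TightlyCoupledOrderService {
    private final EmailClient email = new EmailClient("smtp.example.com");
}

// Loosely coupled: depend on an interface instead. Tests can substitute a
// mock, and changes to one implementation no longer ripple into the service.
interface Notifier {
    void notifyShipped(String orderId);
}

class OrderService {
    private final Notifier notifier;

    OrderService(Notifier notifier) {
        this.notifier = notifier;
    }
}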

Cohesiveness

Code is cohesive if the parts of a module are closely related and form a meaningful whole, with each unit having a single, clear responsibility. TDD helps make code cohesive.

Using TDD we drive out the design of the communication paths/networks necessary for well-functioning, cohesive code. So TDD helps us achieve better communication in our code.

  • TDD front-loads stress. Stress occurs at the start of the project, not at the end, which is more predictable. TDD forces us to ask questions early, whilst there's still time, budget and goodwill to rectify the issues.

The first test:

Often it's hard to get a walking skeleton, so we usually have to hack some stuff together to get something just about working, then write the test that we actually want. This enables us to build something that is well designed from the start, whilst forcing us to ask questions and settle on a good design early on.

Experience shows that systems that back-load tests have far more outages and need far more support.

TDD starts with an acceptance test. This should be free of any technical language and specify only the feature we're attempting to add. This forces us not to tie ourselves to our implementation assumptions too early. It also shields the acceptance tests from technical changes, even ones fundamental to the system's operation, e.g. switching from FTP to web services and XML. This keeps the functional test suite decoupled from the rest of the system tests that operate at a lower level.

Acceptance tests represent new features, and they should not break moving forwards. A break in an acceptance test means that there has been a regression.

Start with the simplest success case. The XP maxim states "do the simplest thing". Often the simplest thing is to write a failing test, but this alone adds little value; we want to be constantly adding value.

Integration Tests vs. Unit Tests

https://reflectoring.io/spring-boot-test/
Before we start into integration tests with Spring Boot, let’s define what sets an integration test apart from a unit test.

A unit test covers a single “unit”, where a unit commonly is a single class, but can also be a cluster of cohesive classes that is tested in combination.

An integration test can be any of the following:

  • a test that covers multiple "units". It tests the interaction between two or more clusters of cohesive classes.
  • a test that covers multiple layers. This is actually a specialization of the first case and might cover the interaction between a business service and the persistence layer, for instance.
  • a test that covers the whole path through the application. In these tests, we send a request to the application and check that it responds correctly and has changed the database state according to our expectations.

Unit tests

Use mocking to provide predictable responses from the methods that form the interface to a given class (see the Mockito example under UNIT TESTING below).

Red Green Refactor:

https://stackoverflow.com/questions/276813/what-is-red-green-testing
1. Understand the requirements.
2. Understand any existing processes.
3. (RED) Write a single failing test that should pass once the correct solution is built (see the sketch after this list). It is formed of:
   given – preconditions
   when – the method to exercise is called
   then – expected outputs
4. (GREEN) Make changes to introduce a solution that satisfies the test. The test passes.
5. (REFACTOR) Improve overall code quality: improve naming, reduce repetition (DRY – don't repeat yourself), apply the Single Responsibility Principle, etc.
6. Run a rebase && build && push (not forced) – this ensures we're continuously integrating the changes we're making in a way that minimises breaking the build.
7. Any problems, fix them, retry step 6.
8. Add any additional tests – go back to step 3 – until complete.
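
A minimal sketch of the step-3 given/when/then shape in JUnit 5 with AssertJ; PriceCalculator and its discount rule are hypothetical names invented for illustration:

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;

class PriceCalculatorTest {

    @Test
    void appliesTenPercentDiscountToOrdersOverOneHundred() {
        // given – preconditions
        PriceCalculator calculator = new PriceCalculator();

        // when – the method to exercise is called
        double total = calculator.totalWithDiscount(200.0);

        // then – expected outputs
        assertThat(total).isEqualTo(180.0);
    }
}

// The minimal implementation written in the GREEN step to make the test pass.
class PriceCalculator {
    double totalWithDiscount(double amount) {
        return amount > 100.0 ? amount * 0.9 : amount;
    }
}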

Tests exist at all levels of an application. It's important to analyse and understand any existing solution first, to work out which tests should change and where new tests should be added in order to introduce a new feature or bug fix.

From the outermost layer, working in, we have:

  • End-to-end tests – a set of tests run when an application is deployed, to ensure that everything works as it should.
  • Smoke tests – a simple set of happy-path tests that check all endpoints are working and responding at a basic level when the application is deployed to a given production environment. AKA shakedown tests.
  • Functional/acceptance tests – run against the application with its production configuration applied; test instances of dependencies are used.
  • Integration tests – run one module of code against another, typically the service layer -> data layer. We use live config, but third-party dependencies may be containers/stubs, e.g. a database/queue.
  • Unit tests – run against a single module of code. Config can be replaced with non-environment-specific values, and any dependencies (see the constructor for these) can be completely mocked to return fixed/predictable values that will show a breakage if the underlying code/API changes.

Let's look at an example.

So let's say we want to introduce a new endpoint that does X… how should we start? First we need unit tests at the data layer. The data layer is the layer that holds the CRUD operations; the service layer of your application will call this layer whenever it needs to create/read/update/delete some data in the persistence layer (blockchain/database/queue, etc.).

TDD, when done fully, leaves you with working code examples showing how every part of the system is used.

So start with a small test – create data.
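
As a sketch, a first create-data test might look like the following; Order, InMemoryOrderRepository and their methods are hypothetical names standing in for whatever the real data layer exposes:

import static org.assertj.core.api.Assertions.assertThat;

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.UUID;

import org.junit.jupiter.api.Test;

// Hypothetical domain type and in-memory repository, just enough to drive the first test.
record Order(String id, String name) {}

class InMemoryOrderRepository {
    private final Map<String, Order> store = new HashMap<>();

    Order create(String name) {
        Order order = new Order(UUID.randomUUID().toString(), name);
        store.put(order.id(), order);
        return order;
    }

    Optional<Order> findById(String id) {
        return Optional.ofNullable(store.get(id));
    }
}

class OrderRepositoryTest {

    private final InMemoryOrderRepository repository = new InMemoryOrderRepository();

    @Test
    void createPersistsTheNewOrder() {
        // given
        String name = "widget";

        // when
        Order created = repository.create(name);

        // then
        assertThat(repository.findById(created.id())).contains(created);
    }
}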

Writing tests after you've written the code is not fun, because you have already spent a bunch of time manually debugging and testing it. Eventually you will come to a bit of code that is hard to test, and you end up avoiding the test entirely, which leaves a hole in your test suite. Other people observe this, the discipline falls apart, and there's no good action you can take to fix it.

Something different happens when you write the test first. You write the test and then you make it pass – this works, and it's more fun. If you write the test first, you make it nearly impossible to write code that's hard to test… because you wrote the function to be testable. This produces code that is much less coupled. When the tests pass, it means the system works, so you can ship the changes and there's nothing left to do.

If you have ever been slowed down by code that is a mess, the question is: why did we write bad code? Were we in a hurry? It's impossible to write good code first time, so sometimes we solve the problem first and then don't go back and write the tests. Another reason: how many times have we looked at bad code, said "I'll fix this", then not fixed it? If no one cleans it, it simply rots. People respond in fear because they don't want to break things.

If we have a test suite we trust, and we see bad code, we can refactor the code without fear. We want you to be in control of the source code.

Long test times and long compilation times are detrimental to TDD.
Test business rules separately from testing the API.
Test that the REST API takes the inputs and creates the right data structure; testing much more than this at the API level adds little value.

Mocking is important – especially at the architectural boundaries of a system. Mocks speed things up enormously.
Don't be foolishly proud of 100% test coverage; in practice it is unattainable. Some behaviours are impractical to test automatically – e.g. GUI testing, which is hard and is often done manually. Sometimes you don't know what the test should be, so you write the code first. But for the vast majority of the code you write, you can write a test first.
Microservices remove type coupling – this isn't necessarily good.

UNIT TESTING

Mock any external dependencies.

There's no one way to write a unit test. For the web layer you might use Spring's MockMvc, auto-configured and injected into the test class:

@AutoConfigureMockMvc
@Autowired private MockMvc mockMvc;
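
For context, a minimal sketch of how that might be wired up in a Spring Boot test; the /status endpoint is a hypothetical example, not something from these notes:

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.servlet.MockMvc;

@SpringBootTest
@AutoConfigureMockMvc
class StatusEndpointTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void statusEndpointResponds() throws Exception {
        // exercise the endpoint without starting a real HTTP server
        mockMvc.perform(get("/status"))
               .andExpect(status().isOk());
    }
}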

You might use Mockito/PowerMock to create stand-ins for objects that are required for a given object's construction, but that respond in predictable ways to various calls with particular inputs:

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
class ClientClassTest {

    @Mock
    private ClientClass clientMock;

    @BeforeEach
    void setUp() {
        // stub the mock to return a fixed, predictable value for any input
        when(clientMock.methodToCall(any())).thenReturn("stubbed response");
    }

    @Test
    void someTestsMayVerifyCallsToTheMock() {
        // when
        clientMock.methodToCall("some input");
        clientMock.methodToCall("another input");

        // then – verify the method was invoked exactly twice
        verify(clientMock, times(2)).methodToCall(any());
    }
}


INTEGRATION TESTING

Use a real test environment that spins up fake versions of servers, so that we can test one class against another (white-box testing). But test the integration against the third-party dependencies you integrate with – database/DAO/blockchain – spinning up dockerised containers where possible to support the testing process (rather than faking those parts of the system).
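
As one hedged sketch of the dockerised-container approach, using Testcontainers with Spring Boot (the notes don't name a specific tool; PostgreSQL and the property names are assumptions):

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@SpringBootTest
@Testcontainers
class DatabaseIntegrationTest {

    // a real PostgreSQL instance in Docker stands in for the production database
    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15");

    // point the application's live config at the container
    @DynamicPropertySource
    static void databaseProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
    }

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Test
    void canQueryTheRealDatabase() {
        Integer result = jdbcTemplate.queryForObject("SELECT 1", Integer.class);
        assertThat(result).isEqualTo(1);
    }
}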

FUNCTIONAL TESTING

Do not mock anything, but provide fake implementations of servers – i.e. socket servers that are spun up and run as part of the test (the JDK's built-in com.sun.net.httpserver.HttpServer works for this) and can respond. This is black-box testing: test from the outside and do not verify internal details.
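
A minimal sketch of such a fake server using the JDK's HttpServer; the /status path and the JSON body are illustrative assumptions:

import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A fake downstream server the application under test can be pointed at.
public class FakeDownstreamServer {

    private HttpServer server;

    public void start() throws Exception {
        server = HttpServer.create(new InetSocketAddress(0), 0); // port 0 = any free port
        server.createContext("/status", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }

    public int port() {
        // configure the application under test to call this port
        return server.getAddress().getPort();
    }

    public void stop() {
        server.stop(0);
    }
}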

Value Types

Value types are useful because they represent value concepts in the domain we're working in. Even if they don't do much, they help us find all the code relevant to a given value type rather than chasing through all the method calls. Specific types also reduce the chance of confusion. Finally, when we have value types representing a concept, they become a good place to add behaviour, guiding us towards a more OO model rather than leaving related code scattered.
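
A minimal value type sketch; OrderId is a hypothetical domain concept, and a Java record gives equality by value plus an obvious home for behaviour:

public record OrderId(String value) {

    public OrderId {
        // guard the domain invariant at construction time
        if (value == null || value.isBlank()) {
            throw new IllegalArgumentException("OrderId must not be blank");
        }
    }
}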