Before I begin, kudos to James Carr's blog post on
Test Driven Development (TDD) antipatterns,
which serves as the jumping-off point for this blog. TDD is considered one of the most
important advancements in the software development life cycle. Most of my teachers and seniors
used to describe TDD as a lifesaving medicine for any software. However, just like any other
medicine, TDD has its own pitfalls and side effects which may muddle any software's health.
Generally, this happens when your software is not taking its medicine in a timely manner.
Although TDD is widely practiced in all sorts of industries with all sorts of
programming paradigms (functional, object-oriented, imperative,
declarative, etc.), sadly it is not the starting point of every team! Perhaps this is one of
the reasons that TDD hasn't become as effective as it could be.
As a software designer, I believe TDD gives us feedback
on the quality of our software design. If practiced correctly and honestly, it becomes an
efficient, proactive assistant that reports valuable insights about our design. Such
insights help us maintain and evolve software over the long term. This practice
eventually yields qualities like durability and modularity, along with better separation of
concerns and appropriate levels of coupling. When tests are difficult to write, that is a
signal telling us something important about our design. If we pay attention to it
in a timely manner, better-designed code follows. In this blog I discuss
some of the signals, or traps, to watch out for.
This signal refers to test cases that run successfully
but don't test what they claim to test, or don't assert anything useful. Such signals are often
seen in teams that chase test coverage because somebody higher in the hierarchy
enforces a certain coverage threshold. Moreover, many organizations incentivize their
developers for achieving such thresholds, even though in reality they return no
value.
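To make this concrete, here is a minimal sketch of a liar test (the function and test names are hypothetical, not from the original post): it executes the code under test and always passes, yet verifies nothing.

```python
# Hypothetical code under test.
def apply_discount(price: float, percent: float) -> float:
    return price - price * percent / 100

# The "liar": it runs, never raises, and always passes --
# but the result is silently discarded and nothing is asserted.
def test_apply_discount():
    apply_discount(100.0, 10.0)
    # Coverage tools still count these lines as covered,
    # so a bug in apply_discount would never be caught here.
```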
Problems:
1. It gives you an illusion of code coverage, which lets you think you are safe when actually you are not!
2. No return on investment at all: a waste of time, effort, and money.
Corrections:
1. Delete tests that don't assert anything useful, or make them worthwhile by adding meaningful assertions.
2. Refactor test cases whose code is not aligned with what they claim to verify (see the sketch after this list).
3. Practice Test-First Test Driven Development!
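As a hedged sketch of corrections 1 and 2, the same hypothetical test now asserts the behavior it claims to verify:

```python
def apply_discount(price: float, percent: float) -> float:
    return price - price * percent / 100

# The test name states the expected behavior, and the assertions
# fail loudly if that behavior ever regresses.
def test_apply_discount_reduces_price_by_percentage():
    assert apply_discount(100.0, 10.0) == 90.0
    assert apply_discount(200.0, 0.0) == 200.0  # edge case: no discount
```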
Practicing Test-First TDD properly and honestly can be hard at first! This
may be due to various reasons, such as rushing to meet deadlines or commitments, lack of clear
understanding, and so on. But eventually it empowers you to evolve and scale rapidly compared to
teams that don't practice it at all. It allows you to predict how test cases should fail, provides
transparency into business requirements, proactively alerts you to possible conflicts, and
offers a fail-fast approach. In short, it makes mistakes harder to make!
This signal indicates the level of difficulty involved in writing a
test case. The more lines of code you write for test setup, the higher that level
of difficulty! Excessive setup makes a test case hard to understand. It may also require you to
change code, which obviously increases the possibility of introducing new bugs. It's as if the
smoke detector itself set the house on fire.
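As an illustration (all of the class and dependency names below are hypothetical), an excessive-setup test buries one line of actual verification under a pile of scaffolding:

```python
# Hypothetical service that demands four collaborators up front,
# even though the method under test uses none of them.
class OrderService:
    def __init__(self, db, cache, mailer, config):
        self.db, self.cache, self.mailer, self.config = db, cache, mailer, config

    def total(self, items):
        return sum(price for _, price in items)

def test_order_total_needs_far_too_much_setup():
    db = {"orders": []}                        # fake database
    cache = {}                                 # fake cache
    mailer = lambda to, body: None             # fake mailer
    config = {"currency": "USD", "tax": 0.2}   # fake config
    service = OrderService(db, cache, mailer, config)
    # After all that scaffolding, the behavior under test is one line:
    assert service.total([("book", 10.0), ("pen", 2.0)]) == 12.0
```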
Problems:
1. Test and code are tightly coupled due to too many dependencies (possibly a violation of SOLID principles).
2. Hard to maintain, and hard to find what the test is trying to achieve.
3. Makes it almost impossible to add new features without breaking existing ones, so development eventually grinds to a halt.
4. Results in a fragile software application.
Corrections:
1. Improve the level of abstraction and separation of concerns, which eventually yields a desirable, testable outcome. Test-First Test Driven Development strongly assists this kind of practice (see the sketch after this list).
2. Again, practice Test-First TDD. Naturally, this brings simplicity and coherence, as you make your life easier by writing the test before writing any code that would require complex setup.
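One possible refactoring sketch, continuing the hypothetical example above: once the pricing concern stands alone, the test needs no setup at all.

```python
# The price calculation is now a standalone unit with no
# database, cache, or mailer anywhere in sight.
def order_total(items: list[tuple[str, float]]) -> float:
    return sum(price for _, price in items)

def test_order_total():
    assert order_total([("book", 10.0), ("pen", 2.0)]) == 12.0
```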
This trap refers to a test case that breaks
encapsulation for the sake of writing assertions. Again, this usually happens when developers
focus more on achieving a coverage threshold than on testing anything useful. Teams
that practice Test-First TDD and focus on the real value of test cases do get decent test
coverage, but using test coverage as a success metric will push you far away from
success.
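A minimal sketch of this trap (the Account class is hypothetical): the test reaches into private state instead of observing public behavior.

```python
class Account:
    def __init__(self):
        self._balance = 0  # private by convention

    def deposit(self, amount: int) -> None:
        self._balance += amount

def test_deposit_breaks_encapsulation():
    account = Account()
    account.deposit(50)
    # Asserting on the private field couples the test to the internal
    # representation; merely renaming _balance breaks this test.
    assert account._balance == 50
```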
Problems:
1. Tiny changes in the code break test cases unnecessarily.
2. Over time, it becomes almost impossible for teams to maintain and evolve the system independently.
Corrections:
1. Never, ever compromise the encapsulation of the system to support testing. Rather, design code that simplifies testing (see the sketch after this list).
2. Improve abstraction and separation of concerns, which will help different components of the system work and evolve independently.
3. Again, practice Test-First TDD and think of it as a tool that helps you craft better-designed code.
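The hedged fix, sketched against the same hypothetical Account: expose the observable behavior through the public interface and assert only on that.

```python
class Account:
    def __init__(self):
        self._balance = 0

    def deposit(self, amount: int) -> None:
        self._balance += amount

    def balance(self) -> int:
        return self._balance  # public query over the observable state

def test_deposit_through_public_interface():
    account = Account()
    account.deposit(50)
    # Only behavior is asserted; the internal representation
    # can change freely without breaking the test.
    assert account.balance() == 50
```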
This trap refers to a test case that requires you to
create several mocks or stubs before you can test anything useful. We do understand that mocks and
stubs are often necessary to inject dependencies; without them we may fall into the trap of
Excessive Setup (discussed above). But creating too many mocks does not rescue us
from that trap either.
Creating countless mocks is a
strong signal that your code requires refactoring. Again, poor coupling and violations of SOLID
principles are the major reasons that put you in such a
situation.
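A sketch of the trap using Python's standard unittest.mock (the Checkout service and its collaborators are hypothetical): five mocks must be wired up before anything useful is exercised.

```python
from unittest.mock import Mock

# Hypothetical service with far too many collaborators.
class Checkout:
    def __init__(self, inventory, payments, shipping, notifier, audit_log):
        self.inventory, self.payments = inventory, payments
        self.shipping, self.notifier, self.audit_log = shipping, notifier, audit_log

    def place_order(self, order_id):
        self.inventory.reserve(order_id)
        receipt = self.payments.charge(order_id)
        self.shipping.schedule(order_id)
        self.notifier.send(order_id, receipt)
        self.audit_log.record(order_id)

def test_checkout_drowning_in_mocks():
    # Five mocks before a single behavior is tested.
    inventory, payments, shipping = Mock(), Mock(), Mock()
    notifier, audit_log = Mock(), Mock()
    payments.charge.return_value = "receipt-1"

    Checkout(inventory, payments, shipping, notifier, audit_log).place_order("order-42")

    # These assertions mostly restate the wiring above; any change to a
    # collaborator's interface breaks the test even if behavior is fine.
    inventory.reserve.assert_called_once_with("order-42")
    payments.charge.assert_called_once_with("order-42")
```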
Problems:
1. Such test cases don't do anything useful; instead they make your life tedious. They may fail unexpectedly whenever any change occurs in a mocked class.
2. Tightly coupled, hard to maintain, and hard to understand what's going on!
Corrections:
1. Revisit the design of your code.
2. Reduce the number of mocks you create by enhancing the level of abstraction and separation of concerns.
3. Think carefully before writing any test; focus on the desired behavior you are looking to test, instead of driving the test from the implementation (see the sketch after this list).
4. Practice Test-First TDD; this will keep you safe from such situations.
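As a hedged sketch of correction 3, separating the decision logic from the side effects lets the interesting behavior be tested with no mocks at all (the names are again hypothetical):

```python
# The pure decision -- what to charge -- is extracted from the
# orchestration, so it can be verified directly, mock-free.
def order_charge(items: list[tuple[str, float]], tax_rate: float) -> float:
    subtotal = sum(price for _, price in items)
    return round(subtotal * (1 + tax_rate), 2)

def test_order_charge_applies_tax():
    assert order_charge([("book", 10.0), ("pen", 2.0)], tax_rate=0.2) == 14.4
```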
As the name suggests, this antipattern refers to a test
comprising countless lines of test code and assertions. Most of the time, the intent
behind writing such giant tests is to verify some bigger chunk of the application in a single
test. This hurts readability and complicates things from the perspective of application
maintainability. For the sake of convenience, we tend to grow these tests iteratively,
instead of evaluating and testing the specific behavior we are interested in.
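A condensed sketch of a giant test (built around a hypothetical Cart class): one test walks through many behaviors, so a failure anywhere leaves you guessing what broke.

```python
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def remove(self, name):
        self.items = [i for i in self.items if i[0] != name]

    def total(self):
        return sum(price for _, price in self.items)

def test_cart_everything():  # the "giant": many behaviors, one test
    cart = Cart()
    assert cart.total() == 0         # empty cart
    cart.add("book", 10.0)
    cart.add("pen", 2.0)
    assert cart.total() == 12.0      # adding items
    cart.remove("pen")
    assert cart.total() == 10.0      # removing an item
    cart.remove("ghost")
    assert cart.total() == 10.0      # removing a missing item
    # A failing run names only "test_cart_everything" --
    # it says nothing about which behavior actually broke.
```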
Problems:
1. The intent of these tests is often very hard to determine, and a failure doesn't really tell you what's going wrong.
2. New joiners can't rely on such test cases to understand the basic functionality of the application. Instead, these cases may bring confusion within the team.
3. Results in fragile behavior.
4. Slowly but gradually, your software drifts toward a halting trap from which you can't move forward without changing the existing code.
Corrections:
1. Break down your giant test cases into multiple UNIT test cases.
2. Make a single assertion per test (see the sketch after this list)!
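The same checks, sketched as focused unit tests with a single assertion each (reusing the hypothetical Cart class from the previous sketch):

```python
# Assumes the Cart class from the sketch above.
def test_new_cart_total_is_zero():
    assert Cart().total() == 0

def test_adding_items_increases_total():
    cart = Cart()
    cart.add("book", 10.0)
    assert cart.total() == 10.0

def test_removing_a_missing_item_is_a_no_op():
    cart = Cart()
    cart.add("book", 10.0)
    cart.remove("ghost")
    assert cart.total() == 10.0
```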
Robert C. Martin (Uncle Bob) once mentioned,
"You know what you believe by observing yourself in a crisis. If in a crisis you follow your
disciplines, then you truly believe in those disciplines. On the other hand, if you change your
behavior in a crisis, then you don’t truly believe in your normal behavior." (from Chapter 11 of
The Clean Coder)
Stop focusing on achieving a coverage threshold; instead, practice Test-First TDD
honestly! This will not only help you achieve coverage benchmarks but also show you the
direction your design is taking. Hold yourself and your team back from falling into such traps, and keep
your test cases logically clean and reasonable; otherwise you will quickly lose your trust in
them.