In this series of posts, I will explore unit testing in C#. So, the first question is: what is unit testing?
Unit testing is a way to test the logic of a piece of code in a controllable and repeatable way. To make our code inherently testable, there are several things we need to do, chief among them removing the code's reliance on hard-wired dependencies. This allows us to test an isolated piece of code.
Your tests should also be able to run in any order, as you can't guarantee the order in which they will run. You can't rely on a particular test being executed first to set up the next.
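To make that concrete, here is a minimal sketch (the type names are my own, purely illustrative) of pulling a dependency out of a method so a test can control it, which also keeps each test independent of when or in what order it runs:

```csharp
using System;

// Instead of reading DateTime.Now directly inside the method, the
// dependency is passed in, so a test can supply a controllable
// implementation and the result no longer depends on the real time.
public interface IClock
{
    DateTime Now { get; }
}

public class Greeter
{
    private readonly IClock _clock;

    // The dependency is injected, not created internally.
    public Greeter(IClock clock) => _clock = clock;

    public string Greet() =>
        _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
}

// A test-only clock with a fixed, controllable time.
public class FixedClock : IClock
{
    public DateTime Now { get; set; }
}
```

A test can now construct `new Greeter(new FixedClock { Now = new DateTime(2024, 1, 1, 9, 0, 0) })` and assert on `Greet()` deterministically, regardless of the actual time of day.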
Unit testing isn't wrapping a test around everything from your MVC controller down to your database to make sure it gets the right data. The clue is in the name: we only want to cover a single method (unit) of code with our test.
In fact, under no circumstances should your tests hit the database. The main reason is that you would then be testing your code against something that can change. What happens if someone changes the underlying data? Your tests will fail, which means your build will fail, yet the code itself is still valid. As we will see in later posts, we can get around this without needing a database connection at all.
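As a sketch of the idea (these type names are mine, not from the post), the code under test can depend on a repository interface rather than a concrete database class, so a test substitutes an in-memory fake and never opens a connection:

```csharp
// The service depends on an abstraction of the data access, so its
// logic can be tested without any database at all.
public interface ICustomerRepository
{
    string GetName(int id);
}

public class WelcomeService
{
    private readonly ICustomerRepository _repo;

    public WelcomeService(ICustomerRepository repo) => _repo = repo;

    public string WelcomeMessage(int id) => $"Hello, {_repo.GetName(id)}!";
}

// Test-only fake: no SQL, no connection string, same answer every run.
public class FakeCustomerRepository : ICustomerRepository
{
    public string GetName(int id) => "Alice";
}
```

The test exercises `WelcomeService`'s own logic; the real repository implementation is tested (or integration-tested) separately.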
The other reason is that your test and deployment servers shouldn't need access to any database. Your sysadmins have probably already told you that.
Unit testing isn't a way to say that the code works perfectly, just that it works in the way you told it to. There is still room for human error: if you test the wrong things, you won't know whether the code is doing the right thing.
The main thing that unit tests give us is confidence. When you are building a greenfield project, this confidence lets you know that a particular method is (and remains) doing its job. As you start layering code on top of it, you can be confident that the initial code is still doing its job. If you persist with unit testing as you build, then come release time you are a couple of keystrokes away from validating your entire code base.
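For anyone who hasn't seen one before, a unit test is this small. The class and test below are purely illustrative, and I'm showing xUnit syntax here; the framework used elsewhere in this series may differ:

```csharp
using Xunit;

// A trivial unit of code...
public class Calculator
{
    public int Add(int a, int b) => a + b;
}

// ...and a test that pins down its behaviour. Running the whole suite
// re-validates every pinned behaviour in the code base.
public class CalculatorTests
{
    [Fact]
    public void Add_ReturnsSumOfTwoNumbers()
    {
        var calc = new Calculator();
        Assert.Equal(5, calc.Add(2, 3));
    }
}
```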
Most developers I speak to fear release day, but with solid test coverage (and confidence in your code base) it will really take the stress out of it.
Now, I'm not saying there won't be bugs! However, when a new bug is discovered (assuming our code base is testable), we can simply add a test for it to our unit test suite. That test will show us when we have fixed the bug, and with it in place, the bug shouldn't show its face around these parts again!
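A regression test for a bug might look like this. The bug, the class, and the names are all hypothetical, invented for illustration (again shown with xUnit):

```csharp
using System;
using Xunit;

// Hypothetical bug report: a discount over 100% produced a negative
// price. The fix clamps the discount into the 0-100 range.
public class PriceCalculator
{
    public decimal Discounted(decimal price, decimal discountPercent)
    {
        var clamped = Math.Min(Math.Max(discountPercent, 0m), 100m);
        return price * (1 - clamped / 100m);
    }
}

// The regression test pins the corrected behaviour, so if a future
// change reintroduces the bug, the suite fails immediately.
public class PriceCalculatorRegressionTests
{
    [Fact]
    public void Discount_Over100Percent_NeverGoesNegative()
    {
        var calc = new PriceCalculator();
        Assert.Equal(0m, calc.Discounted(50m, 150m));
    }
}
```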
That all sounds great, but so many companies I speak to don't have any test coverage at all.
Here are a few of the main reasons I have been given, and why they don't really stack up.
I put this one first because it is the worst excuse of them all: that writing tests takes too long. Our whole job as developers is to deliver quality code at our employer's request. If we keep delivering software with more bugs than a rabid badger, we won't be there long.
Besides, where did this idea that writing unit tests alongside our code takes more time come from? First we need to take into account the bug fixing required by shotgun coding. If a project takes 2 weeks without unit tests and 3 weeks with them (though I'd dispute that disparity), at what point do you add the extra weeks of bug fixing into your time frame?
I have seen so many projects overrun their time frame purely because of the perpetual stream of bugs found in UAT. It is then compounded by what I call 'blame bugs': the ones where your code has been broken by "that piece of code over there, it wasn't my change". Again, with unit tests we can avoid breaking our existing, working code base when we add new features.
Generally, in my experience, when I hear this excuse the problem is with the developer and not the company.
The next excuse is legacy code. First we need to define it, because I hear so many developers use that phrase to mean quite different things.
I agree with Michael Feathers in his book "Working Effectively with Legacy Code":
"To me, legacy code is simply code without tests."
If the job that the code does is still valid in your business, then how can it be legacy? Yes, technology may have moved on, but with a decent test suite you can change anything about the code you like, and if the tests still pass, it is doing the same job!
In a later post I will investigate how to refactor legacy (read non-testable) code.
The next excuse is that tests need maintaining. If your tests need 'maintaining', then there is something fundamentally wrong with your tests, your code, or your understanding. The first two are easy to fix; the third is often more difficult...
If your tests only cover a unit of code, and its dependencies are passed in, how can they need maintaining? If the code is a dependency of something else, then we should be able to change its functionality without affecting anything else. Remember the Open/Closed principle from our friendly neighborhood SOLID principles?
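A minimal sketch of the Open/Closed principle (the names here are illustrative, not from the post): open for extension, closed for modification.

```csharp
// New behaviour arrives as a new implementation of the interface, so
// existing, tested classes are never edited and their tests keep passing.
public interface IShippingRule
{
    decimal Cost(decimal weightKg);
}

public class StandardShipping : IShippingRule
{
    public decimal Cost(decimal weightKg) => 2m * weightKg;
}

// Added later: doesn't touch StandardShipping or its existing tests.
public class ExpressShipping : IShippingRule
{
    public decimal Cost(decimal weightKg) => 10m + 5m * weightKg;
}
```

Because the existing class was never modified, its tests didn't need 'maintaining'; the new class simply gets tests of its own.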
I saved this excuse until last, hoping the previous ones would soften the blow: the tech lead has simply ruled unit testing out. Read that reason again. Go on, read it.
The one thing that really (and I mean REALLY!) grinds my gears is dogmatism.
Maybe they had a bad experience with unit testing? Maybe they love maintaining legacy code? Maybe they love shotgun programming? Personally, I'd say their tech lead status is a little misguided.
Even if unit testing isn't for you, ruling it out for everyone just isn't how a tech lead should behave.
In this series: