Jonas Schubert Erlandsson

This piece was originally posted on the blog of my codeworks, but I was asked to repost it here as well, so I did :) There is a links section at the end with the other posts that led up to my writing this. I highly recommend you spend some time there to get a more complete picture of the arguments. Especially relevant for my post are the three posts by DHH that sparked it all.

In the aftermath of this storm of blog posts (the list at the end is by no means complete, just my pick), ThoughtWorks held a live Google Hangout with Kent Beck, Martin Fowler and DHH that is now on YouTube; a second and possibly a third installment of that will follow as well.

On what has been said

It’s been interesting to read all the posts around DHH’s recent writings on the death of TDD and the various responses. I picked up TDD about a year ago and am still working on getting the most out of it, so I read, and am influenced by, a lot of what the people responding to David’s posts have written.

I find a theme in David’s writings that troubles me. I have a computer science background and try to argue my convictions dispassionately and based on fact as much as possible. I find very few facts in David’s presentation of the faults of TDD. It was the facts around TDD that convinced me it was worth learning in the first place: that it does lead to better products in the end.

Some of these things are apparent, both to David and to me. He finishes his “Slow database test fallacy” post by acknowledging that deep coupling in the code is bad and that test coverage is good. So far we agree. Generally I have found that when someone doesn’t agree with the tenets of TDD, it’s often because they do not understand or agree with the underlying design principles, such as the SOLID principles, or don’t believe that these are in fact traits of good design.

I want more facts. Show me where TDD hurts design. Give practical examples of the damage caused, just like proponents of TDD give examples of how it improves design. Or at least argue on a factual, case-by-case basis.

Pragmatic TDD

I’m firmly in the pragmatic camp; nothing about TDD is religion to me. I’m currently preparing for my master’s thesis, which is on our cognitive limitations as humans and how they relate to developing complex computer systems. To summarise all the reports and experiments I have read: we suck at programming at a cognitive level.

Humans did not evolve to build complex, abstract systems in abstract, formalized representations of logic. We have so many things working against us that it’s a miracle we got this far. TDD gives us a set of tools and principles that help us decompose this complex task into pieces we can control and grasp in our brains.

We decompose to abstract away the parts of the problem that are not relevant right now. And this is not optional: even if you do not decompose the problem, your brain will happily ignore the parts it feels are not important, and those are not the same parts you would have chosen to ignore, trust me.

So I see TDD not as a religion but as a toolset, just like object-oriented design, functional programming or the language I choose for solving a problem. It’s all about the results, and I can happily admit I don’t do 100% TDD, and I don’t think anyone should. Heck, even Gary Bernhardt admits he does 50%-70% TDD, so why would I feel ashamed?

TDD is a toolset, and learning to use that set of tools naturally requires you to find out when they work and when they don’t. As a beginner I try to do 100% TDD on larger projects that I know will live for years; that’s where I’ll get the biggest payoff for my slowdown. But I don’t TDD shell scripts, prototypes or weekend projects. I don’t have time to do that yet, but I will eventually, as I speed up.

More science

There have been studies into what effects TDD has. A summary of such studies was compiled in 2006 by Maria Siniaalto in her paper “Test-Driven Development: Empirical Body of Evidence”. In 2011 Tomaž Dogša and David Batič published “The Effectiveness of Test-Driven Development: An Industrial Case Study”, and this year, 2014, another meta-study, “Effects of Test-Driven Development: A Comparative Analysis of Empirical Studies”, was published by Simo Mäkinen and Jürgen Münch at the University of Helsinki.

In all of these the conclusion is a weak tendency towards increased quality and increased development time, and they were all conducted on people with mostly no previous TDD experience. Let’s analyse that for a moment: all the big names in TDD seem to agree that it takes years to reach your full potential with TDD (as with anything, it’s those 10,000 hours to mastery again), and as a beginner I can attest to the initial increase in time. After all, you are learning a new way of working, new tools and a new way of thinking.

To me the astonishing thing is that you can measure a positive effect at all after giving someone just a few weeks to a few months with TDD. That the effect shows up that early speaks volumes about what TDD can do in the hands of someone with a few years of training in the technique.

If we agree that at least some of the things TDD enforces, such as decoupling, modularization, composition, and smaller objects and methods, are good things, and if the science shows us that even beginners get some of these benefits right off the bat, why should we, the proponents of TDD, not stand proud and talk warmly about a set of tools and principles that we believe in and that give proven results?

Troubling

This is where that trouble with David’s writings comes back to haunt me. Where are the facts? He might have very good reasons for his view that TDD is bad for design. I don’t believe that, but I’m always prepared to listen to new facts and reevaluate reality in the light of them.

And then we get to the part about tests and time. I feel strongly that TDD is not about the tests in themselves, they are more of a very nice bonus, but about the design that springs from writing tests first. That the tests become fast when you only test one class at a time is nice and lets you adopt a different workflow, such as the one Gary Bernhardt talks about: sub-second test runs that basically just exist in the corner of your eye and stop you the moment things go wrong.

That is a very powerful place to be, and if you have been there you do not want to have it taken away. That might be the reason behind some of the focus on test times when it comes to Rails: a slow suite breaks that fluency of thought. And since you can get the speed back by decoupling from the framework, most choose to do so. This is not really something that reflects badly on Rails specifically; it reflects badly on any large system.
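To make “decoupling from the framework” concrete, here is a minimal sketch of the kind of isolated spec I have in mind. PriceCalculator is a made-up example; the point is only that nothing Rails-related is ever loaded, which is what keeps the run sub-second when you start it with the rspec command.

    # A plain Ruby object with no Rails or ActiveRecord dependency.
    # (In a real project it would live in lib/ and be required from the spec.)
    class PriceCalculator
      def initialize(discount_percent:)
        @discount_percent = discount_percent
      end

      def total(base_price)
        base_price - (base_price * @discount_percent / 100)
      end
    end

    # The spec never loads rails_helper or the Rails environment, so the
    # whole file runs in a fraction of a second.
    RSpec.describe PriceCalculator do
      it "subtracts the percentage discount from the base price" do
        calculator = PriceCalculator.new(discount_percent: 10)
        expect(calculator.total(100)).to eq(90)
      end
    end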

In closing

I hold some personal opinions about the early design choices in Rails: its tight coupling to the database, the mashing of helpers into a single namespace, etc. Most of this is being improved on as you read this and has been improving for years. But I cannot help wondering whether these problems of coupling and dependencies would ever have existed if Rails had been built test first from the start.

And it doesn’t help David’s case in my mind that when I look at Rails, which he famously designed, I see it suffering from some design problems that could have been solved using TDD. All the while he’s talking about the design damage TDD incurs…

It smacks more than a little of arrogance and that, together with the fact that his arguments are non-factual and based on emotion and opinion, makes it hard for me to take him seriously.

As Tom Stuart so poignantly puts it in his recent lightning talk from Scottish Ruby Conference (and I’m paraphrasing): “DHH is just one man. His experiences are important, but they are just the experiences of one man.” I would add that DHH is young, in relative terms, and trying to duke it out with some serious heavyweights when it comes to experience and TDD.

I choose to accept the mostly dispassionate, fact-based arguments of Kent Beck, Corey Haines, Robert C. Martin, Gary Bernhardt and many more over the passionate, loud and mono-experience views of DHH.

What do you choose?


Further reading

TDD is dead. Long live testing. – David’s original post.

Monogamous TDD – Robert Martin’s reply to David’s post.

Test-induced design damage – David’s second post.

Design-Damage – Robert Martin’s reply to David’s second post.

Slow database test fallacy – David’s third post.

When TDD does not work – Robert Martin’s reply to David’s third post.

TDD, Straw Men, and Rhetoric – Gary Bernhardt’s reply to David’s post about the slow database test fallacy.

Speeding Up ActiveRecord Tests – Corey Haines’s take on the problems with TDD David puts forward.

UnitTest – Martin Fowler’s correction of history and dive into the origin of unit testing.

The DHH problem – Tom Stuart’s to-the-point and very funny lightning talk from Scottish Ruby Conference 2014.

  • datnt

    I have a question for you: have you ever written any large/complicated web platform using Ruby on Rails combined with TDD (e.g. RSpec…) and/or front-end automation testing (or system integration testing), e.g. BDD with Cucumber?

    • http://d-pixie.github.io Jonas Schubert Erlandsson

      Yes, several. I have personally found that many of the techniques used in the TDD community when it comes to Rails help a lot; for example, decoupling the persistence logic from the models makes them a lot easier to handle under test.

      So far I have found that the coupling between the different subsystems in Rails is a problem when you start to build things at scale, especially if your needs are different from the needs that drove the framework decisions, i.e. the needs of Basecamp. Apparently a lot of other people have also noted this, since there has been an ongoing and persistent effort since at least Rails 3 to modularise the framework more and make the different components stand alone.

      But that’s my opinion, admittedly shared by a lot of people, as underlined by the “I have personally” in that description. I’m assuming that you did not just have questions on best practices but feel that my arguments against DHH’s position are lacking in some sense?

      • datnt

        Technically, I share your idea that using TDD at the model layer of Rails results in a lot better decoupling and greatly improves scalability.

        I think DHH mentioned something similar in his articles about TDD. I realize that the difference, and hence the division, in the discussions among TDD practitioners is about DHH’s statements on testing at the controller and view layers: whether it is called a “unit test”, whether it is a “system integration test”, whether you should “create a service layer for them and test it as a unit test” (one concrete reading of that last option is sketched at the end of this comment)…

        I think those problems involve big, complicated concepts, and it would be very meaningful if some expert could break them down into specific cases, so the Ruby developer community could have better terminology and techniques to aim at when they discuss, solve and provide code examples for them.

        I also notice that most of the critics of DHH’s ideas do not say much about testing at a higher level, using Cucumber (and similar techniques) to perform system integration tests.

        Another thing I notice is that people make a lot of statements using a lot of terminology that they don’t agree with each other on, while providing little Ruby/Rails source code and actual hands-on examples.
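        For example, one concrete reading of “create a service layer and test it as a unit test” could look something like this sketch (OrderPlacement and the payment gateway are made-up names; the controller action would just call the service):

            # A plain Ruby service object extracted from a controller action.
            class OrderPlacement
              def initialize(payment_gateway:)
                @payment_gateway = payment_gateway
              end

              def place(cart)
                @payment_gateway.charge(cart.total)
                :placed
              end
            end

            # Unit test using rspec-mocks doubles: no HTTP stack, no database.
            RSpec.describe OrderPlacement do
              it "charges the gateway for the cart total" do
                gateway = double("gateway")
                expect(gateway).to receive(:charge).with(50)

                cart = double("cart", total: 50)
                OrderPlacement.new(payment_gateway: gateway).place(cart)
              end
            end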

        • http://d-pixie.github.io Jonas Schubert Erlandsson

          My main problem with DHH is that he is so very unbalanced :) He talks about his experiences as if they were a universal truth and tries to steamroll over the opinions of people with vastly longer and more diverse experience.

          As for the problem with testing Rails… Well, let’s face it: Rails really doesn’t do much to help you when it comes to decoupling modules and making them easy to test. The framework contains a lot of magic, mostly beneficial, and as with all magic there is a price to pay.

          When the Ruby Object Mapper project finally reaches its next stable release I think I’ll be using that instead of ActiveRecord, to get away from the mess of persistence and validations in the models. Until then I try to keep pure Ruby models and use the ActiveRecord models as the repository, more or less…
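          A rough sketch of the split I’m aiming for, with made-up Invoice names (InvoiceRecord is assumed to be an ordinary ActiveRecord model for the invoices table):

              require "date"

              # Plain Ruby domain object: no ActiveRecord, no database, so it
              # can be unit tested without loading Rails at all.
              class Invoice
                attr_reader :number, :amount, :due_date

                def initialize(number:, amount:, due_date:)
                  @number   = number
                  @amount   = amount
                  @due_date = due_date
                end

                def overdue?(today = Date.today)
                  today > due_date
                end
              end

              # Thin repository that keeps persistence at the edge and hands
              # plain Invoice objects to the rest of the application.
              class InvoiceRepository
                def find(number)
                  record = InvoiceRecord.find_by!(number: number)
                  Invoice.new(number: record.number,
                              amount: record.amount,
                              due_date: record.due_date)
                end

                def save(invoice)
                  InvoiceRecord.create!(number: invoice.number,
                                        amount: invoice.amount,
                                        due_date: invoice.due_date)
                end
              end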

          I have not had any problems with integration testing (I normally use RSpec for this as well). The API you get with RSpec and Capybara is pretty decent, and liberal tagging of the DOM with intention-revealing classes helps you target the right elements even if the context changes. So I don’t really find many problems here. But, and it might be a major “but”, I have not done any high-level testing with Turbolinks :)
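          A rough sketch of what I mean by intention-revealing classes, with made-up names (the order factory, the order_path route and rails_helper are all assumed to exist in the project):

              # app/views/orders/show.html.erb (fragment)
              #   <div class="order-summary">
              #     Total: <span class="order-total"><%= @order.total %></span>
              #   </div>

              # spec/features/order_summary_spec.rb
              require "rails_helper"

              RSpec.feature "Order summary" do
                scenario "shows the order total" do
                  order = create(:order, total: 90)   # assumes a FactoryGirl factory
                  visit order_path(order)

                  # Target the intention-revealing classes rather than the
                  # surrounding layout, so the spec survives markup changes.
                  within(".order-summary") do
                    expect(page).to have_css(".order-total", text: "90")
                  end
                end
              end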

          I’m greenfielding a project now with Rails 4.2 and will BDD it, so I’ll soon know whether Turbolinks adds any complexity to this.

          • datnt

            I’ve worked with Turbolinks before. At first it added some value, but later on, when the project got bigger and involved multiple Ajax GET/POST calls per page, things went crazy and we had to disable it entirely.

          • http://d-pixie.github.io Jonas Schubert Erlandsson

            Mmmm, I can see what they were trying to do, but it feels like you have to build your site around it more or less. So I’ll probably end up disabling it as well :)

          • datnt

            Hi,

            I’ve recently come across a very interesting article:

            https://blog.jcoglan.com/2014/05/01/do-i-need-di/

            Although the article’s title may mislead the “potential” reader, the content is very relevant to the debate about whether TDD is dead.

            Here is a short recap:

            – The author makes an observation similar to the one I wrote above: “…The impression I’m left with is that talks on TDD often only make sense if the audience already understands and agrees with the speaker’s argument…” –> currently audiences and speakers do not agree on a lot of the terminology, right?

            – The author’s approach is really helpful, thoughtful and practical, e.g.:

            …Your job is to keep shipping useful product at a sustainable pace, and it’s on you to choose practices that help you do that…

            … Applying dependency injection solely to achieve better tests is a bad move …

            … there is certainly a place for DI, or for any architectural technique, but you must let the requirements of the problem – not just testing concerns – drive the patterns you use

            Based on what jcoglan wrote:

            … In the case of DI, I reach for it when:

            – There are multiple implementations of an abstract API that a caller might use

            – The code’s client has a free choice over which implementation to use, rather than environmental factors dictating this choice

            – To provide plugin APIs, as in the case of Faye’s engine system or the protocol handlers in websocket-driver

            So I think a normal web application, or sometimes a web platform, would rarely satisfy these three conditions for applying DI, but exceptions do exist.
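            For what it’s worth, a minimal Ruby sketch of the situation jcoglan describes, where DI earns its keep because the caller really does choose between implementations of the same small API (all names are made up):

                # Two interchangeable delivery strategies behind one #deliver API.
                class SmtpDelivery
                  def deliver(message)
                    puts "sending #{message.inspect} over SMTP"
                  end
                end

                class LogDelivery
                  def deliver(message)
                    puts "logging #{message.inspect} instead of sending it"
                  end
                end

                class Mailer
                  # The delivery strategy is injected; Mailer relies only on #deliver.
                  def initialize(delivery)
                    @delivery = delivery
                  end

                  def welcome(user)
                    @delivery.deliver("Welcome, #{user}!")
                  end
                end

                Mailer.new(SmtpDelivery.new).welcome("Ada")  # production choice
                Mailer.new(LogDelivery.new).welcome("Ada")   # development/test choice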