Node.js and ES6 Instead of Java – A War Story
Part II: The Joy and Pain of Test-driven Development
If you missed part I, read up on it here on the eBay Technology Blog Europe.
What Joy? What Pain?
Hi, I'm Patrick, software engineer. My team and I work for mobile.de, which is Germany's biggest online marketplace for cars and other vehicles (source: AGOF Digital Facts 2015-06).
In the second part of my series, I'll focus on automated tests for the backend.
I had written web apps in Java back in the day, before I moved from the backend to the frontend. Building on that experience and looking at the mess the old mobile.de home page web app had become over the years, one thing was crystal clear:
You cannot write maintainable code without proper unit test coverage.
If you have proper test coverage, you can easily refactor your code without fearing that you'll break something without noticing. This has happened to me countless times. I'll make a wild guess and assume it has happened to you, too. 😊
There's a German saying for this: “Mit dem Arsch einreißen was man mit den Händen aufgebaut hat.” (tearing down with your ass what you've built up with your hands)
Automated tests are the only way I know of to prevent this.
With a good test setup that runs your tests automatically as you write your code, you can concentrate intensely on solving the problem at hand, without being distracted by things like reloading the page in the browser or restarting the server. You can deep dive into the zone. To me, this is joy indeed.
So What's Not to Like? What's the Pain?
Well, on the mobile.de home page project, I spent more time writing tests and stubs and mocks and fixtures and whatnot than I spent writing the actual production code. It was maddening at times to write tests for things like asynchronous calls, promises, and timeouts. The urge to just let it slip and not write a test for some module was sometimes overwhelming, especially given the time constraints we had. I'm very proud to say that my teammates and I resisted the temptation in most cases.
Mocha's syntax is very similar to Jasmine's. You formulate your test cases as a series of nested describe function calls with an it function call containing your assertion:
The console output of this little example looks like this:
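With Mocha's default spec reporter, a small passing suite prints its nested suite names with a check mark per test, roughly like this (timings vary):

```text
  sum
    with two positive numbers
      ✓ returns their total

  1 passing (8ms)
```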
Writing Testable Code
In contrast to Java classes, Node modules are singletons by nature: if you use them in multiple places, you always get the same instance of the module with the same scope.
When I got started coding the new mobile.de home page's backend, I found this really convenient and elegant. “No need to write all that boilerplate code with class instantiation and what not like in Java,” I thought.
Here's a (very simplified) example of a Node module makes.js that provides a list of car or motorbike makes, which can be used for populating a dropdown menu on a search form:
The module imports a configuration module config.js, gets a path to a local JSON file with make data, imports this with require and exposes a get method that accepts a segment (“car” or “motorbike”) for getting a list of makes.
The config module config.js looks like this:
And, finally, this is the Mocha test suite test.js:
Running npm test on the console gives us this output:
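Roughly like this, after npm's own preamble (timings vary):

```text
  makes
    get()
      ✓ returns the car makes

  1 passing (11ms)
```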
What's wrong with this? It is not really a unit test! We are not only testing makes.js, we're also testing config.js and even makes.json. If makes.json is updated, it might break the test for makes.js. We only want the test to break if something in makes.js is changed, not some JSON file we don't care about.
OK, so let's just create a mock for config.js in the before method and let that mock return the URL of a mockup JSON file with bespoke test data.
But wait, how can I mock the config module? It is in the scope of the makes module the instant it is imported. The fact that the makes module is a singleton, as mentioned earlier, makes it quite hard to slip our testing fingers inside the module and switch the config to a mock.
“It's a Trap!”
What would Admiral Ackbar do? Certainly not what I did at this point. I stepped into the trap of refactoring my test code around a system that was hard to test because of my poor architecture choice.
Here's a slightly less crappy version of makes.js:
The only difference to the previous version is that the config module now has a method getPath that gives us the path to the JSON file with the data. Using a method instead of a property allows us to mock the config module using Sinon.JS.
There's only one problem: we still can't slip the mock to the makes module, because it is instantiated the instant it is loaded.
To work around this, I used a small library named freshy. freshy loads a fresh instance of a module whenever we want, bypassing Node's module cache (require.cache), which would otherwise hand us the same instance over and over again. This way, we can load our makes module after the config mock has been created.
The slightly less crappy test looks like this:
Mocking config before instantiating the makes module with freshy worked. I was satisfied, and wrote many, many tests like this.
The Horror, the Horror
It turned out that outsmarting the module cache was not such a great idea, especially when writing tests for asynchronous operations in this manner. Everything worked fine when running the tests locally, but as part of our distribution build on Jenkins CI, we soon saw builds that were hanging and tests that failed with timeouts. It got so bad that at some point we had to disable entire test suites just to get changes deployed to production without retriggering the build over and over until it finally finished with green tests.
A Better Way
The hard-learned lesson here is: don't try to outsmart the module cache; design your modules so that their dependencies can be swapped out for tests.
We eventually refactored a lot of our modules to no longer be instantly available singletons; instead, each provides an init method that instantiates the module.
A revised version of the above example looks like this:
makes.js now has an init method. The JSON data is only loaded when init is invoked, not the instant the module is required.
Measuring Test Coverage
To run the Mocha test suites and measure the code coverage, we use our build system, which is based on npm script runners that trigger various Gulp tasks. The tests are run with npm test or as part of the distribution package build that is run with npm run dist.
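In package.json, that wiring could look like this (the script bodies are assumptions):

```json
{
  "scripts": {
    "test": "gulp test",
    "dist": "gulp dist"
  }
}
```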
For measuring the code coverage, we use Istanbul, the Istanbul plugin for Gulp and Isparta, which provides an instrumenter that makes it possible to measure code coverage on ES6 files that are transpiled through Babel (we use Babel for both the backend and frontend).
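The coverage task could look roughly like this, written in gulp 3 style; the task names and globs are assumptions, and the issue-link comments stand in for the actual links in our gulpfile:

```javascript
'use strict';

// Workarounds for known gulp-mocha and Isparta bugs live up here in the
// real gulpfile, each with a link to the corresponding GitHub issue.

const gulp = require('gulp');
const mocha = require('gulp-mocha');
const istanbul = require('gulp-istanbul');
const isparta = require('isparta');

gulp.task('pre-test', () =>
  gulp.src('src/**/*.js')
    .pipe(istanbul({
      instrumenter: isparta.Instrumenter,  // instrument Babel-transpiled ES6
      includeUntested: true                // also count modules without tests
    }))
    .pipe(istanbul.hookRequire())
);

gulp.task('test', ['pre-test'], () =>
  gulp.src('test/**/*.js')
    .pipe(mocha())
    .pipe(istanbul.writeReports())
    .pipe(istanbul.enforceThresholds({
      thresholds: { global: 84 }           // fail the build below 84% coverage
    }))
);
```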
Some things to note here:
Take a look at the comments at the beginning of the file: gulp-mocha and Isparta have bugs that force us to use some workarounds. In my experience, this is fairly typical when working with npm modules. You have to accept that software is never perfect. The good news about open source software is that you can fix these bugs yourself by creating a pull request, or at least contribute by reporting them. Usually you quickly find workarounds on GitHub or Stack Overflow, or you pick another solution for your problem. I've gotten into the habit of putting links to these issues into my code comments and revisiting them once in a while to see whether the problem has been fixed in a newer version of the module.
The istanbul.enforceThresholds property makes the build fail when the code coverage drops below a specific percentage. We currently have this set to 84%. Since we have a pre-push hook that executes the tests before pushing to Git, this means you cannot push new code without proper unit tests. Needless to say, this can be annoying sometimes, but it helps a lot to keep our code clean. 😊
Another important configuration detail is includeUntested: true (towards the end of the code example). If you don't set this, Istanbul will only measure the coverage of modules that actually have an accompanying unit test. Modules that don't have any tests at all are not included in the coverage report. I only found out about this a few weeks ago. Up until then, I had often bragged to people: “Yeah, you know. We have 98% test coverage.” After turning this option on, I found out it was actually just 84%.
To Be Continued…
This concludes part II of my series “Node.js and ES6 Instead of Java – A War Story”. I hope you found it useful.
In the next parts, coming soon to the eBay Technology Blog, I plan to write about:
- Practical application of ES6 features and working with Babel
- Working with Dust templates and using helpers
- MVC architecture with Express
If you missed part I, read up on it here on the eBay Technology Blog Europe.