Testable & Tested Client-side Code

Testing (i.e. linting, unit tests, integration testing, etc.) client-side code is not done as commonly as it should be. The reason, besides a lack of know-how, is the presupposition that testing takes time away from other, more productive development tasks.

This notion is, of course, wrong. The repeatable successes in software engineering based on testable (i.e. modular, loosely coupled, small, simple units of code) and tested code have proven again and again that testing is a time-saver and a part of creating maintainable, understandable code. At a minimum, if code is not unit tested, it is only a matter of time before it is burnt down and re-written, or abandoned altogether because it has become unmaintainable and incomprehensible.

In this article, I am going to talk about, and defend, testing client-side code. My intention is to give those among us who do not test the desire and some initial knowledge to start, along with the ability to defend the necessity of testing against anyone who might discourage it.

In addition, I would like to assert, dogmatically, that while some testing can be neglected for a period given certain circumstances (e.g. prototypes, deadlines), unit testing stands alone as a non-negotiable necessity for anything that requires a shelf life in a production environment.

Why Test

Nobody is perfect, and testing anything a human produces simply makes good sense. Programmers in particular should test their code because high-quality code is known to be testable, understandable, and maintainable, and it is tested code that paves the road to this end. Said another way: if code is tested, it is typically understandable and maintainable because it was written to be tested.

When code is tested the following benefits occur:

Writing Testable Code

To create testable code, developers should follow the principles listed below.

These are not only good principles for testable code; many of them are simply good practices in general when doing software development. Testing and good software development practices go hand in hand.
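To make the idea of testable code concrete, here is a minimal sketch. The `createGreeter` module and its names are hypothetical, invented for illustration; the point is that the module accepts its collaborator as an argument (dependency injection) instead of reaching for a global, so a test can substitute a fake and observe it.

```javascript
// A hypothetical greeter module. Instead of reaching out to the DOM or a
// global logger directly, it accepts its collaborator as an argument
// (dependency injection), which keeps it small, loosely coupled, and testable.
function createGreeter(output) {
    return {
        greet: function (name) {
            var message = 'Hello, ' + name + '!';
            output(message); // the injected collaborator is easy to fake in a test
            return message;
        }
    };
}

// In production you might pass console.log; in a test, pass a fake
// that captures what the module tried to output.
var captured = [];
var greeter = createGreeter(function (msg) { captured.push(msg); });
greeter.greet('World'); // captured now holds ['Hello, World!']
```

Because the collaborator is injected, the test never needs a DOM, a network, or a real logger to verify the module's behavior.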


Before talking about testing, I should state that a developer should strive to automate testing tasks. Do not fool yourself into believing that you or your co-developers will do anything manually. It is better to assume that, as humans, we do not perform manual tasks consistently. Consistency is found in some level of automation.

Before you can get serious about testing, you have to get serious about automating development tasks. Personally, I use grunt.js for automating tasks and highly recommend it. If you do not have a favorite task runner, I strongly suggest you pick one and get intimately familiar with it.

In this article, I am not going to show how to automate the testing tasks (linting, unit testing, integration testing, etc.) discussed; I am assuming it is obvious that no human should be manually running testing tasks when developing code. Automate that junk!
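As a taste of what that automation can look like, below is a minimal Gruntfile sketch. It assumes the `grunt-contrib-jshint` and `grunt-contrib-watch` plugins have been installed via npm, and the `src`/`test` paths are placeholders for your own project layout.

```javascript
// Gruntfile.js -- a minimal sketch, assuming grunt-contrib-jshint and
// grunt-contrib-watch are installed; paths are hypothetical.
module.exports = function (grunt) {
    grunt.initConfig({
        jshint: {
            all: ['src/**/*.js', 'test/**/*.js'] // lint app and test code
        },
        watch: {
            scripts: {
                files: ['src/**/*.js', 'test/**/*.js'],
                tasks: ['jshint'] // re-lint automatically on every save
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-jshint');
    grunt.loadNpmTasks('grunt-contrib-watch');

    // running `grunt` on the command line now lints, then watches for changes
    grunt.registerTask('default', ['jshint', 'watch']);
};
```

Once a lint task runs itself on every save, adding unit tests, builds, and deployments to the same pipeline is a natural next step.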

Once you have mastered the automation of testing tasks, the next step is to start thinking about automating builds and deployments (i.e. continuous integration) which make use of automated testing tasks. Imagine committing code that is automatically linted, unit tested, built, integration tested and then deployed to production on each commit.

I will stop my digression into automation now and we will spend the remainder of the article looking briefly at testing client-side code.

Quality, Convention, And Error Testing (aka Linting/Hinting)

When one thinks of testing code, a linter does not always come to mind. Linters can detect errors and potential problems in your code before you unit test it. A linter is basically a validator that can be subjectively configured.

If you are not currently linting your code, you should be. It is easy to implement and can save hours of debugging.

My personal choices are JSHint, CSSLint, and HTMLHint. I use JSHint in my code editor, having it check my JavaScript code as I type. When I do a production build, I lint everything (HTML, CSS, and JS) right before I run unit tests. Linting CSS and HTML might feel like overkill, but linting these files can enforce conventions and consistency, which can eliminate bugs that might occur when scripting the DOM (i.e. HTML) or CSS.
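Since JSHint is configured per project, here is a sample `.jshintrc` sketch (JSHint strips comments from its config file, so the annotations are legal). The particular options chosen are just one reasonable starting point, not a recommendation from the JSHint project itself.

```json
{
    "curly": true,        // require braces around all blocks
    "eqeqeq": true,       // require === and !== instead of == and !=
    "undef": true,        // flag use of undeclared variables
    "unused": true,       // flag variables that are declared but never used
    "browser": true,      // pre-define browser globals (window, document)
    "globals": {
        "describe": false,  // testing globals, so lint passes on test files
        "it": false,
        "chai": false
    }
}
```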

If you have no experience using a linting/hinting tool, below I demonstrate hinting favico.js (the highest trending JavaScript repository on GitHub today) using JSHint.

You can find web-based interfaces for linting/hinting HTML, CSS, and JavaScript at:

Unit Testing

Unit testing involves writing test code (i.e. the test) to test a unit (i.e. a module) of application code. Exactly when the test is written, how it is written, and what is tested is a rather subjective topic and many opinions on the matter exist.

As a baseline, if a unit/module of application code provides a class, singleton, or constructor, then its instantiation/options, properties, and methods/arguments should be tested. When I say tested, I mean the module should be run in isolation to verify that all the code in the module runs as expected.

Unit testing sounds simple, and in some cases it is. However, when unit testing applications you'll need to find a way to test complicated dependencies. This involves testing events (both DOM and network), externally dependent information (i.e. data), and proper collaboration with the DOM. This is where test dummies (i.e. spies, stubs, and mocks) can step in and aid testing efforts. I'll say more about test dummies in a moment; first, let's look at a simple unit test.

The first step in unit testing, assuming you have testable code, is selecting a test runner, a testing interface style (i.e. BDD or TDD), and an assertion solution.

A test runner, obviously, runs the test code and the code that is being tested. Common solutions are:

Each of these test runners has differing opinions on testing methodology; my preference is Mocha, given its flexibility regarding assertion solutions and testing interfaces. My preference with Mocha is to use the chai.js assertion library (BDD style) and the Mocha BDD interface (i.e. describe(), it(), before(), after(), beforeEach(), and afterEach()). However, Mocha permits the use of almost any assertion solution and offers interface styles other than BDD for writing tests.

The easiest way to get your head around unit testing is to just start. The fictitious code below showcases the unit testing of a foo.js module using Mocha's BDD interface and chai.js BDD style assertions. Read carefully the following code and comments.

mocha.setup('bdd'); //tell Mocha which interface you are using

describe('foo.js', function () { //describe the unit of code being tested

    //set up fixture
    var fixtureFoo = new Foo('Foo Foo');

    describe('instantiate Foo', function () { //describe what you are testing
        it('should return a Foo() instance', function () { //it should...
            chai.expect(fixtureFoo).instanceof(Foo); //assert the test
        });
    });

    describe('Foo getName() method', function () { //describe what you are testing
        it('should return a string', function () { //it should...
            chai.expect(fixtureFoo.getName()).to.be.a('string'); //assert the test
        });
    });

    //tear down fixture (delete does not work on variables, so null it out)
    after(function () {
        fixtureFoo = null;
    });
});

Below is the HTML output from the Mocha test runner, when running the foo.js test in a browser.

As you can see above, both our tests are passing. This example is rather trivial and assertions alone are not enough when it comes time to unit test more complex modules. This is where test dummies step in and help keep testing modular. Common test dummies include spies, stubs, and mocks. Sinon.js is the current go-to solution for test dummies.

I am not going to go in-depth here on using test dummies, but I will provide a short definition (from sinon.js) for each so you can begin to see the purpose of test dummies.

spies: A test spy is a function that records arguments, return value, the value of this and exception thrown (if any) for all its calls. A test spy can be an anonymous function or it can wrap an existing function.

stubs: Test stubs are functions (spies) with pre-programmed behavior. They support the full test spy API in addition to methods which can be used to alter the stub's behavior. As spies, stubs can be either anonymous, or wrap existing functions. When wrapping an existing function with a stub, the original function is not called.

mocks: Mocks (and mock expectations) are fake methods (like spies) with pre-programmed behavior (like stubs) as well as pre-programmed expectations. A mock will fail your test if it is not used as expected.
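To see why these fakes matter, here is a stripped-down, hand-rolled illustration; it is not sinon.js (which provides a far richer version via `sinon.spy()` and `sinon.stub()`), and the `makeSpy`/`fetchName` names are invented for the example.

```javascript
// A stripped-down illustration of what a test spy/stub does. This is NOT
// sinon.js -- sinon's spies record much more (return values, `this`, thrown
// exceptions) -- but it shows the core idea.
function makeSpy(returnValue) {
    var spy = function () {
        spy.calls.push(Array.prototype.slice.call(arguments)); // record arguments
        return returnValue; // pre-programmed behavior is what makes a stub a stub
    };
    spy.calls = [];
    return spy;
}

// Imagine a module that notifies a callback when data "arrives".
function fetchName(onDone) {
    onDone('Foo Foo'); // in real code this might be an async network call
}

// The spy stands in for the real callback, so we can assert it was called
// correctly without touching the network or the DOM.
var spy = makeSpy();
fetchName(spy);
// spy.calls is now [['Foo Foo']]: called once, with the expected argument
```

The test then asserts against `spy.calls` rather than against any real network or DOM side effect, which is exactly what keeps the unit test isolated.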

To get going on this rather complicated practice I suggest reading, "Unit Test like a Secret Agent with Sinon.js" and the sinon.js documentation itself. It is almost certain that if you unit test non-trivial code you will need to be familiar with test dummies to get the job done.

Code/Test Coverage Reporting

Code/test coverage measures (typically as a percentage) how much of a module (i.e. foo.js) was exercised by its tests, by keeping track of what was executed in the file and what was not.

Let's consider again the foo.js unit test. If the getName() method is not tested, a code coverage report can reveal this fact. In the jsFiddle below, I augment the Mocha HTML report for foo.js with blanket.js, so the page contains coverage results as well as unit testing results.

Make sure you click on the file (i.e. foo.js) in the Blanket.js table, because it provides a handy visual showing which lines of code (highlighted in red) were not executed during the unit test.


A code coverage report gives insight into the fact that my unit test does not fully cover the functionality found in foo.js (i.e. my getName() method is not being tested).

It is not uncommon for the data provided by a code coverage report to be used during a production build to enforce a certain percentage of coverage, failing the build otherwise.
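The shape of such a gate can be sketched in a few lines. The `checkCoverage` function, the per-file numbers, and the 80% threshold below are all made up for illustration; a real build would read this data from blanket.js or similar tooling rather than a hard-coded object.

```javascript
// A minimal sketch of a coverage gate a build script might apply.
// The report format, file names, and threshold are hypothetical.
function checkCoverage(report, threshold) {
    var failures = [];
    Object.keys(report).forEach(function (file) {
        var r = report[file];
        var percent = (r.covered / r.total) * 100; // lines executed vs. total
        if (percent < threshold) {
            failures.push(file + ': ' + percent.toFixed(1) + '% < ' + threshold + '%');
        }
    });
    return failures; // an empty array means the build may proceed
}

// Hypothetical per-file data: lines executed vs. total executable lines.
var report = {
    'foo.js': { covered: 9, total: 10 },  // 90% -- passes the gate
    'bar.js': { covered: 3, total: 10 }   // 30% -- fails the gate
};
var failures = checkCoverage(report, 80);
// failures lists only bar.js; a build script would now exit non-zero
```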

Code Complexity Analysis

Once you have unit tests, and have made sure you have enough of them, the next step is measuring and analyzing code complexity, then reducing areas of high complexity.

A common solution for creating complexity data is complexityReport.js, which produces the data used by the visualization tool Plato to output an HTML complexity report. Below is an example of a Plato overview report for each module that makes up the jQuery source code.

[Image: Plato complexity overview report for the jQuery source modules]

The jQuery complexity report contains the following measurements (definitions provided by http://jscomplexity.org/complexity):

Plato provides not just an overview visualization; it also provides a visual report on individual files. Below is a complexity report/visualization for the jQuery dimensions.js module.

[Image: Plato complexity report for jQuery's dimensions.js module]

Not unlike the code coverage report, the data created by a complexity report can be used during a build process to control the complexity of code by enforcing thresholds. In other words, if a file has too many lines of code, or is too complex or grossly unmaintainable, a build process can detect this and fail before the code moves to production.

Integration Testing

After pre-production testing and building your application code into its production-ready state, it is time for integration tests. Integration testing means testing the result of all of the code units working together (i.e. the real application) in its production environment.

Typically, integration testing equates to automating the environment and execution (i.e. a web browser loading your code) of an application from a set of tests/instructions.

In the old days we did this manually: we would have 7 or 8 browsers open, and whenever we saved a file we would reload each browser, then view and interact with the results, hoping nothing was broken. Today this process can, and should, be automated where it makes sense.

Using one of the tools below a developer can write and run integration tests for multiple web browsers or headless browsers (i.e. no GUI) as part of a post build process.

Each of these tools makes it possible to automate the loading of a web page and then provides the appropriate hooks to interact with that page (i.e. filling and submitting forms, clicking and following links, capturing screenshots, etc.) in a testable fashion.


We have briefly looked at why we test (i.e. to keep code maintainable and understandable) and how we test (i.e. linting, unit testing, code coverage, complexity reporting, integration testing). I hope this has provided enough information for any client-side developer to become a champion of testing, and at the very least a champion of unit testing.

With this broad overview of testable and tested client-side code complete, the next step would be to dive deeper into unit testing. For that, I suggest the following training videos.

Additional areas that should be explored after having testable and tested code would be: