JavaScript Testing with Mocha, Sinon, and Chai

Recently, on April 24th, I presented a short talk on some of the testing tools we use here at Fluencia. The discussion focused primarily on Mocha, Chai, and Sinon, all npm modules that can be easily incorporated into a Node project. In this post I'll give a somewhat abridged discussion of the topics covered in the talk.

Here at Fluencia, we use Mocha, Sinon, and Chai to test our code. The testing setup built from these tools, and the synergy between them, has been easy to build and maintain, and we wanted to share that with anyone who isn't testing yet or who would like to improve their testing setup. The audience for the talk was the awesome DC jQuery meetup group, but as I note in the slides, these tools can be used for testing both back-end and front-end JavaScript. I'll start with a brief intro to testing in general; if you are already familiar with the concepts, you can skip the intro and dive into learning the tools.


The first comment I want to make is about the importance of testing. I believe that in the back of every developer's mind is the idea that testing is important. But very often code still goes untested. This likely stems from the belief that writing tests costs development time, or that code in a small project doesn't need to be tested, or simply from not knowing how to set up a testing framework. To debunk the first argument: here at Fluencia, we have found that testing actually saves us time. Finding and fixing a particularly pesky bug can take hours, as many of us have experienced. In a few hours I could have written hundreds of tests, which quite probably would have prevented the bug in the first place. A stitch in time saves nine. The argument that a small project doesn't need to be tested seems to have some validity, but if you ever want to collaborate, having tests is the only way to reliably allow people to review your code and contribute. Not to mention that small code bases are no less likely than large ones to have difficult-to-track-down bugs. Finally, if you don't quite know how to put together tests for your project, well, keep reading.

So far, we've discussed testing as an abstract notion. Now it's time to get a bit more practical (and also provide a few more arguments for the importance of testing code). Testing is often described in terms of unit testing and functional testing. A Lewis Carroll approved definition of the former is: unit testing tests the smallest testable portion of code. To me, what that primarily describes is functions, since your code should be decomposed and organized into small functions. A unit test often follows the pattern: function 'func' will always spit out 'bar' when passed 'foo'. Unit testing gives you a guarantee that each bit of your code is working the way you expect it to. When tracking down bugs, we can assume with confidence that a well-tested function is working as expected, and we can start investigating more nebulous (or less well tested) sections of the code.

Functional testing has a broader scope for a project. Functional tests often cover a 'slice of functionality' within the application, and can touch multiple layers of a project and often multiple functions.  You can think of them in terms of the feature on the site that they test: the sign-in form works correctly. Whether a test is a unit test or a functional test is not excessively important, and the gray area in between is a rabbit hole we could spend forever quibbling over.  In either case though, testing gives you a basic guarantee that your code is doing what you expect it to and helps in debugging.  If you are working with a team, tests can be a great way to prove that new code you have written works, and they ensure that any future code written by collaborators doesn't break existing code.

In the presentation I have a slide titled 'when to write tests', and I figured I would answer that question here too.  You can write tests before you implement a feature, which can help you plan, and sets up a finish line for the feature (test driven design, you aren't done until the tests pass).  You can write tests during your coding. If something isn't working quite right, you can write tests for each bit of new code, and you'll find your issue when one of the tests won't pass. You can write tests after you are done with a feature, to check edge cases and error conditions.  Essentially, the 'when' is up to you. Just make sure you do actually write the tests so you have proof the feature is working and to ensure future changes don't break that functionality.  If you put off writing the tests until later, there is a good chance later never comes.

Getting Started


Now let's get started with the actual tools. First, I want to introduce Mocha. Mocha is the framework for writing tests.  It provides the structure for writing test functions that exercise your code and fail if anything goes wrong. With that cursory introduction, we can now talk briefly about Chai.


An assertion library is the best way to check that values within a given test are what you expect them to be, and Chai is an assertion library that works perfectly with Mocha.  Chai sets up the keywords 'expect' and 'should', as well as the less common 'assert'.  One of the benefits to Chai is that expect and should can be chained with helper functions that allow the code to read similarly to plain English. If one of these Chai statements is wrong, it errors out and the test fails.

Example statements using Chai:

foo = "bar"
foo.should.be.a "string"
foo.should.have.length 3
expect(foo).to.be.a "string"
expect(foo).to.have.length 3
assert.isString foo, "foo is a string"

Now that we know how to test values, we can return to Mocha and how to create tests.  Mocha provides the keywords 'describe' and 'it': 'describe' is used for organization, and 'it' defines an individual test.  This is more easily shown than described.

describe "Array", ->
  describe "#indexOf()", ->
    it "should return -1 when the value is not present", ->
      arr = [1, 2, 3]
      arr.indexOf(5).should.eql -1

describe "Search", ->
  describe "queryURL", ->
    it "should generate translate url", ->
      url = search.queryURL "/translate", "toy"
      expect(url).to.eql "/translate/toy"


The output shows each test that is run, organized by describe blocks.  You may nest describe blocks to organize your tests however you want.

For failing tests, a diff of the actual and expected values is displayed along with a stack trace.


In some cases, you may be testing an asynchronous function.  Mocha handles this by letting your test accept a 'done' callback, which you call once the asynchronous work has finished.

describe "Accents", ->
  it "should insert accents", (done) ->
    $(".accents-letter").click ->
      expect($("#query").val()).to.eql "á"
      done()

Test Independence

When writing tests, it is important to keep test independence. Basically, each test should be able to be run on its own and pass.  Some arbitrary application state shouldn't determine whether or not some code is working properly.  To maintain test independence in Mocha, there are 'before', 'after', 'beforeEach', and 'afterEach' hooks provided so you can control the state of each test.

describe "History", ->

  COOKIE_NAME = "history"

  describe "New Cookie", ->

    beforeEach ->
      cookies.removeItem COOKIE_NAME, "/"

    it "should store a history item", ->
      history = new History
        key:        COOKIE_NAME
        historyLen: 5
      history.add "item"
      expect(cookies.getItem(COOKIE_NAME)).to.eql JSON.stringify ["item"]

    it "should store multiple history items", ->
      history = new History
        key:        COOKIE_NAME
        historyLen: 10
      history.add "item1"
      history.add "item2"
      expect(cookies.getItem(COOKIE_NAME)).to.eql JSON.stringify ["item2", "item1"]

  describe "Existing Cookie", ->

    before ->
      cookies.add COOKIE_NAME, "/", "existing"

    after ->
      cookies.removeItem COOKIE_NAME, "/"

    it "should deal with existing history cookie", ->
      history = new History
        key:        COOKIE_NAME
        historyLen: 2
      history.add "new"
      expect(cookies.getItem(COOKIE_NAME)).to.eql JSON.stringify ["new", "existing"]

Skip and Only

In addition to these hooks to set up your tests, Mocha uses the keywords 'skip' and 'only' to allow you to control which tests actually are run.  As you may expect, these keywords allow you to skip a test so that it is not run, or run just one test.

describe.only "tests currently working on", ->

it.only "should just run this test", ->
  console.log "Only this test is run"

it.skip "should skip this test", ->
  console.log "Nothing will be printed, test is skipped"

Note: both skip and only can be applied to describe blocks, and only one 'only' flag will be recognized by Mocha (if you have multiple, Mocha will still just run one test).  Also, be careful about leaving 'only' flags in your source code.  If you do, you may come back and realize your tests were only passing because you were running just one test!  We have a specific grunt task dedicated to searching for these flags (you can see the code in the presentation slides).


Now that we have established a format for creating tests, it's time to introduce the Sinon library.  Sinon is used for inspecting function calls and spoofing what those function calls return.  It allows you to test only what you mean to test.

A Sinon 'spy' allows you to wiretap a function.  You can get information about the number of times a function was called, what arguments it was called with, and numerous other details.  When using a spy, the original function will behave just as normal.

spy = sinon.spy()
spy = sinon.spy myFunc
spy = sinon.spy object, "method"

it "should display success message on save", ->
  saveSpy = sinon.spy User, "save"
  $("#save-button").click() # hypothetical button that triggers User.save
  expect(saveSpy.calledOnce).to.be.true
  expect($(".save-success-message").text()).to.eql \
    "Your profile has been updated"
  saveSpy.restore()

A Sinon 'stub' provides the same inspection functionality of a spy, but it allows you to replace the stubbed function with one of your own (often simply returning some fake data).

it "should show autosuggest", ->
  stub = sinon.stub Search, "getSuggestions", ->
    return ["act", "apple", "arrow"]

  $("#main-input").text "a"
  expect(stub.firstCall.args[0]).to.eql "a"
  expect($(".suggestion")).to.have.length 3
  expect($("#top-suggestion").text()).to.eql "act"

The combination of stubs and spies allows you to test one particular bit of code without having to bring in and set up any additional code that it depends on.  It is also great for testing edge cases and errors, since you can stub a function to return whatever garbage you would like.

Synergy: Mocha and Sinon

Sinon and Mocha can be used to best effect by using the before and after hooks in concert with spies and stubs.

describe "Search", ->

  beforeEach ->
    stub = sinon.stub Search, "getSuggestions", ->
      return ["act", "apple", "arrow"]

  it "should show autosuggest", -> ...
  it "should show autosuggest for other language", -> ...
  it "should generate link for suggestion", -> ...

In fact, Sinon allows you to take this one step further.  You may frequently be setting up stubs/spies and then removing or modifying them for other tests.  Recognizing this, Sinon provides the concept of a 'sandbox', which allows you to keep track of all spies and stubs you set up and remove them all easily.  This works great with Mocha too.

describe "Search", ->

  beforeEach ->
    @sandbox = sinon.sandbox.create()
    @sandbox.stub $, "ajax", -> ...
    @sandbox.stub Search, "getSuggestions", ->
      return ["act", "apple", "arrow"]

  afterEach ->
    @sandbox.restore()

  it "should correctly get suggestions", -> ...

Sinon allows you to do some exceptional things; it can stub AJAX calls and even act as a fake server that responds with the appropriate data (see the slides for code examples).  Combined with Mocha (and Chai) it's possible to accurately, independently test almost any code configuration.

Running Tests

Finally, once you have some tests written, you probably want to actually run them. It's very easy: Mocha will automatically look in the test folder unless you tell it otherwise.

$ mocha

For front-end work, you make a basic HTML page (with any elements needed), include Mocha and your test files, and call mocha.run(). More details can be seen in the slides.

Video, Slides, Audio

You can see a short clip of the presentation here.  The slides can be found here and the audio can be found here.

Good luck testing!