Having good test coverage of your code is a noble and worthwhile goal. However, sometimes resolving test failures can be an enormous time-sink. In this post we take a detailed look at one order-dependent test failure which I recently encountered. In the process we uncover an important lesson to bear in mind for your own controller specs.

Introduction

Having good test coverage of your Rails app is a great thing. But unexpected failures in your test suite can often leave you scratching your head. Often the problem stems from the code under test, but sometimes the problem is a quirk of the test suite itself. One particularly nasty class of problem is when the outcome of your test suite depends on the order in which the tests are run.

In this post we look at a specific case of an order-dependent test failure that recently caught me out. Getting to the bottom of this mysterious test failure took me deeper than expected and taught me a valuable lesson of which I was, hitherto, blissfully unaware. If you aren't interested in the details, skip straight to the summary to uncover this potential foot-gun.

Rails has always been a strong proponent of unit testing, and has long offered baked-in support for it. One of the most popular testing frameworks for Rails applications (although not actually bundled with Rails) is RSpec, which is straightforward to add to your Rails application via the rspec-rails gem. This is the testing framework I will use in this blog post, and for the remainder of this post I will use the terms test and spec interchangeably.
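For completeness, a typical way to wire RSpec into a Rails app looks something like this (a sketch of the usual setup, with no version constraint shown):

```ruby
# Gemfile — add rspec-rails to both the development and test groups
group :development, :test do
  gem 'rspec-rails'
end
```

After a `bundle install`, running `bin/rails generate rspec:install` creates the `spec/spec_helper.rb` and `spec/rails_helper.rb` boilerplate that the spec files below require.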

Getting to basecamp

The case we are going to explore is an example of an order-dependent test failure. Failures of this type should not happen in a perfectly-constructed test suite. In an ideal world every spec should be completely isolated and idempotent: a failure (or side effect) in one spec should not have any impact on any of the other specs in your test suite. In the real world, however, you may make some compromises as you compose your test suite. Perhaps you initially seed some static data in tables at the start of your test run, allowing all specs to assume the existence of this data. Perhaps you do some costly setup at the start of a group of specs, to avoid repeating it in each individual spec. In both of these cases, the shared state between your specs means that there will exist conditions where order-dependent failures can creep in.

The reason these order-dependent failures are so nasty is that the failure surfaces in a test that may be far removed from the offending test responsible for corrupting the shared state. We don't know which spec is the culprit; we just know it must have run before the failing spec. You may have come across this before, but here is a typical narrative and the process you might use to zero in on the problem spec:

  1. New code is merged to master
  2. Completely unrelated spec (or set of specs) starts to fail
  3. We run these specs in isolation and they all pass; so it looks like we have an order-dependent failure
  4. If the recent merge was small* then the offending spec might be obvious
  5. Otherwise, use some type of bisect approach to zero-in on the offending test, for example:
    • Identify all tests that have run before the failing spec
    • Run half of these specs again, followed by the failing spec
    • If the failing spec still fails, the culprit must be in this half; otherwise it is in the other half
    • Repeat the process
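That loop can be sketched in plain Ruby. Here `spec_fails?` is a hypothetical helper standing in for an actual test run; in practice RSpec can automate the whole procedure for you via its built-in `rspec --bisect` flag.

```ruby
# Binary search for the spec that corrupts shared state.
# CULPRIT and spec_fails? are stand-ins for a real test run.
CULPRIT = "spec/test_1_spec.rb"

# Stand-in for "run these specs and check whether the failing spec
# fails": here the failure occurs iff the culprit is in the run.
def spec_fails?(specs)
  specs.include?(CULPRIT)
end

def find_culprit(candidates, failing_spec)
  return candidates.first if candidates.size == 1
  half = candidates.take(candidates.size / 2)
  if spec_fails?(half + [failing_spec])
    find_culprit(half, failing_spec)              # culprit is in this half
  else
    find_culprit(candidates - half, failing_spec) # culprit is in the other half
  end
end

find_culprit(%w[spec/a_spec.rb spec/test_1_spec.rb spec/b_spec.rb],
             "spec/test_2_spec.rb")
# => "spec/test_1_spec.rb"
```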
Using this process we should be able to converge upon the test that is causing the problem. We will pick up our story after this point, where we have identified the offending spec that is causing the problem, and we also know the subsequent spec that is failing as a result. In the next section we look at the contents of both of these test files.

* There is a general coding best-practice which is to aim to make small, frequent and atomic commits. This practice, along with running your test suite frequently, would avoid a lot of the headache in trying to identify the offending spec.

The Setup

We want to set up a couple of tests to demonstrate the issue that was encountered. We can boil it down to two spec files spec/test_1_spec.rb and spec/test_2_spec.rb, which are listed below:

    
# spec/test_1_spec.rb

require 'rails_helper'

describe ApplicationController, type: :controller do

  class MockApplicationController < ApplicationController
    def some_action
      "Blah"
    end
  end

  describe '#some_action' do
    before :each do
      @controller = MockApplicationController.new
    end

    it "should return expected String value" do
      expect(@controller.some_action).to eq "Blah"
    end
  end
end
    
  
This is a pretty simple controller spec; it creates a mock subclass of ApplicationController, and asserts the return value when some_action is invoked on an instance of this mock class. The test is pretty self-contained and does not rely on custom application code, so hopefully you can appreciate why it passes.

The next spec file, spec/test_2_spec.rb is a feature spec and is defined as follows:

    
# spec/test_2_spec.rb

require 'rails_helper'

describe "Widgets", type: :feature do
  describe "index" do
    it "should return 200 status" do
      visit "/widgets"
      expect(page.status_code).to eq 200
    end

    it "should display the widgets index page" do
      visit "/widgets"
      within("h1") { expect(page.body).to have_content("Widgets") }
    end
  end
end
    
As a feature spec it is supposed to exercise the full stack, and it makes use of the high-level visit method to visit a particular URL in our application. In this case we have used the Rails scaffold to set up some boilerplate for a Widget resource. The feature spec visits the Widget index page and asserts that some expected text is indeed present on the page. You can grab the repo here if you want to see the basic app skeleton that this spec is testing.

The Problem

When we run the tests above in isolation they will pass. For test_1_spec.rb:

    
> bundle exec rspec --format=progress spec/test_1_spec.rb                    
.
Finished in 0.01868 seconds (files took 1.76 seconds to load)
1 example, 0 failures
    
  
For test_2_spec.rb:
    
> bundle exec rspec --format=progress spec/test_2_spec.rb
..
Finished in 0.11597 seconds (files took 1.76 seconds to load)
2 examples, 0 failures
    
  
However, running the tests consecutively leads to a failure in the last test:
    
> bundle exec rspec --format=progress spec/test_1_spec.rb spec/test_2_spec.rb
..F

Failures:

  1) Widgets index should display the widgets index page
     Failure/Error: within("h1") { expect(page.body).to have_content("Widgets") }
     
     Capybara::ElementNotFound:
       Unable to find css "h1"
    
  
But if we run the tests in the opposite order, then there are no problems:
    
> bundle exec rspec --format=progress spec/test_2_spec.rb spec/test_1_spec.rb
...
Finished in 0.11879 seconds (files took 1.83 seconds to load)
3 examples, 0 failures
    
  
So this prompts the question: What the spec is going on?

The Explanation

Once the spec causing the issue had been identified, i.e. test_1_spec.rb in this case, I immediately went in and started tweaking things to try and understand the problem: deleting tests from the file, deleting before blocks, and so on. In the process I uncovered two unexpected things. First, if we rename the @controller variable to something else, all specs pass, e.g. this works:

    
# spec/test_1_spec.rb

require 'rails_helper'

describe ApplicationController, type: :controller do

  class MockApplicationController < ApplicationController
    def some_action
      "Blah"
    end
  end

  describe '#some_action' do
    before :each do
      @a_controller = MockApplicationController.new
    end

    it "should return expected String value" do
      expect(@a_controller.some_action).to eq "Blah"
    end
  end
end
    
  
Perhaps even more bizarrely, if we change the outer describe argument to a string description, rather than the class name ApplicationController, then this also fixes things, e.g. this works:
    
# spec/test_1_spec.rb

require 'rails_helper'

describe "ApplicationController", type: :controller do

  class MockApplicationController < ApplicationController
    def some_action
      "Blah"
    end
  end

  describe '#some_action' do
    before :each do
      @controller = MockApplicationController.new
    end

    it "should return expected String value" do
      expect(@controller.some_action).to eq "Blah"
    end
  end
end
    
  
With these observations banked, we switch focus to the failing spec in test_2_spec.rb:
    
    …
    it "should display the widgets index page" do
      visit "/widgets"
      within("h1") { expect(page.body).to have_content("Widgets") }
    end
    …
    
  
And the failure message we are presented with is:
    
     Failure/Error: within("h1") { expect(page.body).to have_content("Widgets") }
     
     Capybara::ElementNotFound:
       Unable to find css "h1"
    
  
If we break into this test with a debugger we learn that page.body is an empty string, when it should contain our rendered template. The line of code being tested at this point is the render :index call at the end of the WidgetsController#index action:
    
class WidgetsController < ApplicationController
  def index
    @widgets = Widget.all
    render :index
  end
end
    
  
So why is this returning an empty string in the case of test failure? Let's put a debug statement within this action as follows:
    
  def index
    @widgets = Widget.all
    puts lookup_context.view_paths[0].class
    render :index
  end
    
  
The lookup_context is an instance of ActionView::LookupContext and it encapsulates the logic used to resolve templates in Rails. It is exposed within our controller via the ActionView::Rendering module. The view_paths method on the ActionView::LookupContext returns an ordered array of strategies that are iterated through when resolving a template. We can equate this to the different paths that Rails will search for a template within our project structure. If we run our tests with this additional debugging we see the following:
    
> bundle exec rspec spec/test_1_spec.rb spec/test_2_spec.rb

ApplicationController
  #some_action
    should return expected String value

Widgets
  index
RSpec::Rails::ViewRendering::EmptyTemplateResolver::ResolverDecorator  <== Hmm
    should return 200 status
RSpec::Rails::ViewRendering::EmptyTemplateResolver::ResolverDecorator  <== Hmm
    should display the widgets index page (FAILED - 1)
    
  
If we flip the order of the tests we instead see this:
    
bundle exec rspec spec/test_2_spec.rb spec/test_1_spec.rb

Widgets
  index
ActionView::OptimizedFileSystemResolver  <== What we expect
    should return 200 status
ActionView::OptimizedFileSystemResolver  <== What we expect
    should display the widgets index page

ApplicationController
  #some_action
    should return expected String value
    
  
The RSpec::Rails::ViewRendering::EmptyTemplateResolver::ResolverDecorator is injected at the top of the view_paths array for controller specs. rspec-rails does this to avoid the potentially time-consuming task of rendering view templates when you are running controller specs. But why has our lookup_context been mutated in test_2_spec.rb, which is a feature spec? This is not expected. To continue the investigation, let us take a look at how rspec-rails tinkers with our lookup_context.
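Before digging into that code, the effect of an empty-template resolver sitting at the front of view_paths can be modelled in a few lines of plain Ruby (a toy model, not the real Rails resolver API):

```ruby
# Toy resolver: maps template names to template bodies.
class StubResolver
  def initialize(templates)
    @templates = templates
  end

  def find(name)
    @templates[name]
  end
end

# Resolution walks view_paths in order and stops at the first hit.
def resolve(view_paths, name)
  view_paths.each do |resolver|
    body = resolver.find(name)
    return body if body
  end
  nil
end

empty_templates = StubResolver.new(Hash.new(""))  # returns "" for every name
real_templates  = StubResolver.new("index" => "<h1>Widgets</h1>")

resolve([real_templates], "index")                  # => "<h1>Widgets</h1>"
resolve([empty_templates, real_templates], "index") # => "" (an empty page.body)
```

Because the empty-template resolver answers every lookup with an empty string, the real file-system resolver further down the list is never consulted.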

If we look at RSpec::Rails::ControllerExampleGroup, we see that it includes a module named RSpec::Rails::ViewRendering, which is responsible for the changes to our lookup_context:

    
    module ControllerExampleGroup
      extend ActiveSupport::Concern
      include RSpec::Rails::RailsExampleGroup
      include ActionController::TestCase::Behavior  <== This guy has a role in the story
      include RSpec::Rails::ViewRendering  <== This guy plays with the lookup_context
      include RSpec::Rails::Matchers::RedirectTo
      include RSpec::Rails::Matchers::RenderTemplate
      include RSpec::Rails::Matchers::RoutingMatchers
      include ControllerAssertionDelegator
    
  
This module, RSpec::Rails::ViewRendering, achieves the effect by means of before and after hooks, as shown:
    
      included do
        before do
          unless render_views?
            @_original_path_set = controller.class.view_paths
            path_set = @_original_path_set.map { |resolver| RESOLVER_CACHE[resolver] }

            puts "Controller class about to be hacked: #{controller.class}"
            controller.class.view_paths = path_set
            controller.extend(EmptyTemplates)
          end
        end

        after do
          puts "Controller class about to be reset: #{controller.class}"
          controller.class.view_paths = @_original_path_set unless render_views?
        end
      end
    
  
You can see that I have added a couple of debug statements to these before and after hooks. Running the specs again with these new debug statements makes the cause of our problems a little clearer:
    
> bundle exec rspec spec/test_1_spec.rb spec/test_2_spec.rb 

ApplicationController
  #some_action
Controller class about to be hacked: ApplicationController
Controller class about to be reset: MockApplicationController
    should return expected String value

Widgets
  index
    should return 200 status
    should display the widgets index page (FAILED - 1)
    
  
So when rspec-rails hacks the lookup_context with empty templates, it does so for ApplicationController. However, when it goes to un-hack the lookup_context after the spec has run, it applies the change to MockApplicationController, the dummy class defined for the purposes of this specific test.

This explains why test_2_spec.rb fails if it runs after test_1_spec.rb. The after hook in the controller spec (test_1_spec.rb) is not correctly tidying up after itself. It has stubbed the view rendering for ApplicationController to always return empty templates, but it is not correctly unwinding this change. As a result, when our feature spec tries to render a template, an empty string is returned. So how have we managed to break the after hook in the controller test?

Well, if we look in the same RSpec::Rails::ViewRendering file, we can see that the controller method is just an attribute accessor for the @controller instance variable. And that instance variable is exposed via the module ActionController::TestCase::Behavior. In fact if we look at the documentation in this file, here, we see the following:

  
  # == Special instance variables
  #
  # ActionController::TestCase will also automatically provide the following instance
  # variables for use in the tests:
  #
  # @controller::
  #      The controller instance that will be tested.
  # @request::
  #      An ActionController::TestRequest, representing the current HTTP
  #      request. You can modify this object before sending the HTTP request. For example,
  #      you might want to set some session properties before sending a GET request.
  # @response::
  #      An ActionDispatch::TestResponse object, representing the response
  #      of the last HTTP response. In the above example, @response becomes valid
  #      after calling +post+. If the various assert methods are not sufficient, then you
  #      may use this object to inspect the HTTP response in detail.
  

So this explains it. We have used an @controller instance variable within our own test setup:
    
require 'rails_helper'

describe ApplicationController, type: :controller do

  class MockApplicationController < ApplicationController
    def some_action
      "Blah"
    end
  end

  describe '#some_action' do
    before :each do
      @controller = MockApplicationController.new
    end

    it "should return expected String value" do
      expect(@controller.some_action).to eq "Blah"
    end
  end
end
    
  
This has unintentionally clobbered the @controller instance variable exposed by ActionController::TestCase::Behavior, so rspec-rails ends up unable to reset the lookup_context at the end of the controller specs, meaning we will continue to render empty templates thereafter. After all of that investigation, the solution is simple: use a different name for the instance variable in our test setup.
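The whole mechanism can be reproduced in plain Ruby, without Rails. The class names below are hypothetical stand-ins for ApplicationController and MockApplicationController, and `view_paths` is modelled as a class-level attribute that (as in Rails) is written on whichever class receives the assignment:

```ruby
# Stand-in for ApplicationController, with a class-level view_paths.
class BaseController
  class << self
    attr_accessor :view_paths
  end
  self.view_paths = [:real_resolver]
end

# Stand-in for the spec's MockApplicationController subclass.
class MockController < BaseController; end

controller = BaseController.new

# rspec-rails before hook: stub the paths on the current controller's class.
original = controller.class.view_paths
controller.class.view_paths = [:empty_template_resolver]

# ... the spec's own before block then clobbers @controller ...
controller = MockController.new

# rspec-rails after hook: the "reset" now hits the subclass instead.
controller.class.view_paths = original

BaseController.view_paths # => [:empty_template_resolver] -- still stubbed!
MockController.view_paths # => [:real_resolver]
```

Because the after hook resolves `controller.class` to the subclass, the stub on the parent class is never removed, which is exactly the leak that empties page.body in the subsequent feature spec.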

Summary

If you value your time and your sanity, don't use the instance variable @controller inside your controller specs; it will ruin your life.
