Delayed_job is a great Ruby solution for executing jobs asynchronously. It is intended to run in the background, dispatching jobs that are persisted in a table. If you are using Cucumber, you have to consider how the dispatching process is launched when your features are executed.

My first attempt after googling for this question was to create a custom Cucumber step that launched the execution of the jobs.

Given /^Jobs are being dispatched$/ do
  Delayed::Worker.new.work_off
end

In this approach, we have a specific Cucumber step that indicates when we want to dispatch jobs. This step is executed synchronously in the same thread as Cucumber, so you have to invoke it after a step has introduced a new job in the queue and before the verification steps:

When I perform some action (that makes the server create a new job)
    And Jobs are being dispatched
    Then I should see the expected results

I find this approach inconvenient for two reasons:

  • Cucumber is intended for writing integration tests: tests that describe your application from the point of view of its users. Ideally, they should only manipulate the application's inputs and verify its outputs through the UI. A user of your application will never need to know you are using a job dispatcher on your server.

  • While controlling the exact (and synchronous) execution of jobs makes writing tests easier, it doesn't reproduce the temporal randomness that is in the very nature of an asynchronous job dispatcher. In my opinion, it is good that Cucumber features verify that this randomness is correctly handled (within some controlled limits).
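
One practical consequence of running jobs asynchronously is that verification steps cannot assume the work is already done; they may need to poll for the expected outcome. A minimal sketch of such a polling helper (the `eventually` name and its parameters are my own invention, not part of Cucumber or delayed_job):

```ruby
# Polls a block until it returns a truthy value, failing after a timeout.
# Useful inside "Then" steps while a background worker is processing jobs.
def eventually(timeout: 5, interval: 0.1)
  deadline = Time.now + timeout
  loop do
    return if yield
    raise "condition not met within #{timeout}s" if Time.now > deadline
    sleep interval
  end
end

# Example usage inside a step definition (hypothetical page matcher):
#   eventually { page.has_content?('expected results') }
```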

I think a better approach is launching the jobs in the background, simulating the normal execution environment of your application. The idea is very simple: the job worker is started before each Cucumber scenario and is stopped after it. Cucumber tags represent a good choice for implementing these hooks. In this way, you can easily activate delayed_job only for the scenarios that need it.

When implementing this approach, I ran into a lot of problems providing a proper RAILS_ENV=cucumber to the delayed_job command. In fact, I wasn't able to make it work by launching script/delayed_job start from a Cucumber hook: RAILS_ENV was simply ignored. What I finally did was execute the rake task directly.

Before('@background-jobs') do
    system "/usr/bin/env RAILS_ENV=cucumber rake jobs:work &"
end

For stopping the jobs, I had the same RAILS_ENV issue with script/delayed_job stop. I ended up killing the worker processes with a piped kill command.

After('@background-jobs') do
    system "ps -ef | grep 'rake jobs:work' | grep -v grep | awk '{print $2}' | xargs kill -9"
end

Using this approach you can get rid of the delayed_job-specific steps. Instead, you just tag the features/scenarios that need background jobs with @background-jobs.
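
For example, a tagged scenario would look like this (the scenario wording is illustrative, not from a real application):

```gherkin
@background-jobs
Scenario: The user sees the result of a background job
  When I perform some action (that makes the server create a new job)
  Then I should see the expected results
```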

In conclusion, I think that running background jobs in Cucumber is the better approach in general terms. I would reserve the synchronous work_off approach for special cases.