January 2010 Archives

RVM + Bundler + RubyMine

On my current project, we needed to use RVM with Bundler, but we still wanted access to our bundled gems from RubyMine.

You’re going to want to write an alias in your .bashrc/.zshrc/.whatevershrc for this one. You can use most of the below:

export PROJECT_PATH=the_path_to_your_Rails_project_using_bundler_goes_here
alias rubymine="rvm ruby-1.8.6-tv1_8_6_287 ;\
export GEM_HOME=${PROJECT_PATH}/vendor/bundler_gems/ruby/1.8;\
export GEM_PATH=${GEM_HOME};/Applications/RubyMine\ 2.0.1.app/Contents/MacOS/rubymine"

What we’re doing here:

  1. First, we switch to the project's Ruby via RVM
  2. We then override the gem repo to point to our bundled gems (which are conveniently formatted as a gem repo)
  3. Then we fire up RubyMine with the above environment (thanks to Brennan Dunn here).

Now close and reopen your shell.

From there, you go into RubyMine’s settings, click on “Add SDK”…

… unhide the hidden directories like so…

… then navigate to your Ruby VM and hit OK:


Posted by evan on Jan 29, 2010

Mocking mocking (or "Why I am learning to hate isolating the unit under test")

Mocking basically sucks.

There. I said it.

Using mocks in your tests almost always results in fragile tests.


A case against mocking

Let’s say that you’re working on a hypothetical (simplified) financial application:

class AccountTest < ActiveSupport::TestCase
  setup do
    @balance = 42.00
    @account = factory_to_create_a_test_account :with_balance => @balance
    @mock_bank_manager = mock(BankManager)
  end

  def test_overdrawing_account_sends_notification
    @mock_bank_manager.expects(:notify_of).with(:overdrawn_account, :amount => 1)
    @account.withdraw(@balance + 1)
  end
end

The above example is attempting to illustrate how mocks can be useful for specifying a causal relationship. “Overdrawing the account” results in “notifying the bank manager of the amount over balance”.

Mocking seems like such a natural and even expressive way to design an API. It’s, quite literally, behavior driven development: your tests/specifications, where you are mocking the API, are helping you design the API itself! That’s terrific.

So let’s pretend that, like a good TDDer, I’ve gone and implemented the BankManager class.

class BankManager
  def notify_of(event_type, options = {})
    case event_type
    when :overdrawn_account
      # send the notification
    when ...

Perhaps I even wrote a test/spec for BankManager. But, then, in a fit of refactoring rage, I decide that I must have a second argument to BankManager#notify_of:

class BankManager
  def notify_of(event_type, account, options = {})
    case event_type
    when :overdrawn_account
      # send the notification
    when ...

Our old friend AccountTest above will still pass because it’s using a mock. However, the API to BankManager has changed; we want AccountTest#test_overdrawing_account_sends_notification to fail!
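To see why the mock lets the signature change slip through, here's a minimal hand-rolled mock in plain Ruby. `NaiveMock` is a stand-in I'm inventing for this sketch (it is not mocha's implementation, though mocha behaves similarly for this purpose): it records expected and received calls but never consults the real `BankManager`, so a change in the real method's arity goes unnoticed.

```ruby
# A deliberately naive mock: it only compares what was expected against
# what was received on the mock itself. The real class's signature is
# never consulted.
class NaiveMock
  def initialize
    @expected = []
    @received = []
  end

  def expects(method_name, *args)
    @expected << [method_name, args]
  end

  # Swallow any call and record it, regardless of the real API's arity.
  def method_missing(method_name, *args)
    @received << [method_name, args]
  end

  def verify
    @expected.all? { |exp| @received.include?(exp) }
  end
end

mock_manager = NaiveMock.new
mock_manager.expects(:notify_of, :overdrawn_account, :amount => 1)
# The code under test still calls the OLD two-argument API on the mock...
mock_manager.notify_of(:overdrawn_account, :amount => 1)
mock_manager.verify  # => true, even though the real API now takes three arguments
```

The mock happily verifies because it compares calls against its own recorded expectations, not against `BankManager`'s actual interface.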

So, in short, mocking an internal API is a recipe for pain and (Ni!) woe.

  1. After you implement the API, you still have mocks lying around in the tests that helped you design the API in the first place. Those tests are now fragile: if the API changes, the tests containing the mocks will still pass!
  2. You can double back and replace those mocks with actual calls to the API, but then you've just made more work for yourself.

Where mocking makes sense

In my experience, the only place that mocks have served me at all well is when I’m interfacing with an external service from a unit test. I certainly don’t want my unit test invoking services beyond my own system. I usually write an interface layer between my business logic and the external service. In my unit tests, I then mock the interface layer only. Testing integration with the external service, predictably, becomes a chore solely for the integration test.
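A sketch of that interface-layer approach, with hypothetical names throughout (`PaymentGateway`, `FakeGateway`, and `Account#bill` are my inventions, not from any real project): the unit test swaps a fake in for the thin wrapper, and only the integration test ever exercises the real service.

```ruby
# The interface layer: a thin wrapper around an external billing service.
class PaymentGateway
  def charge(account_id, amount)
    # In production, the real HTTP call to the external service lives here.
    raise NotImplementedError, "real external call not wired up in this sketch"
  end
end

# Business logic depends on the interface layer, not the external service.
class Account
  def initialize(id, gateway)
    @id = id
    @gateway = gateway
  end

  def bill(amount)
    @gateway.charge(@id, amount)
  end
end

# In a unit test, substitute a fake for the gateway and assert against it.
class FakeGateway
  attr_reader :charges

  def initialize
    @charges = []
  end

  def charge(account_id, amount)
    @charges << [account_id, amount]
  end
end

gateway = FakeGateway.new
Account.new(7, gateway).bill(9.99)
gateway.charges  # => [[7, 9.99]]
```

Mocking here is safe in a way that mocking `BankManager` was not: the fake stands in for a boundary you don't control, and the integration test covers the real wiring.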

We have the technology. We can rebuild it.

I believe that there is a better path: one that will let us have our mocking cake but not force us to eat brittle tests. I’ve had some ideas about this on the back burner for about a year now. I hope to have something concrete to discuss in a few weeks.

Posted by evan on Jan 25, 2010

On Craftsmanship and Practice

Reading a passage from the “E-Myth Contractor” got me to thinking about how we practice (when we practice) our skills that we apply on a regular basis.

When we practice our craft, performing “katas” as they have come to be called, why do we perform them on arcane problems such as Langton’s Ant or Conway’s Game of Life?

You don’t encounter these problems in your day-to-day work.

I agree that solving these problems a few times over may improve your overall problem solving skills. But that’s only true until you settle on an optimal implementation.

Given the above then there seems to be greater value in routinely exercising what we consider routine.

If I can build a signup, login, and forgotten-password capability, a text-based search across multiple model objects, or a recurring-payment ecommerce system rapidly and reliably, isn’t that more valuable to most customers than finding clever ways to move a hypothetical ant around a grid? These are the sorts of tasks that we routinely encounter in our work. Or perhaps not; maybe you typically employ a CMS to expedite these chores. Craftsmen, after all, use tools to work in their craft.

So you should be practicing with those same tools.

If you have a good toolbox, full of tools ideally suited to solving your typical problems, then these tools are your weapons. Each tool probably does certain things better than others. You should then practice “weapon katas.”

You should master your tools.

Let’s assume for a moment that your current project/product/service du jour is not a unique and special snowflake. If what we do is a craft, then repetition and understanding of the routine tasks should enable us to deliver faster, more reliably, and more consistently.

Perhaps studying Langton’s and Conway’s, ultimately, is a study of basic forms, i.e. this is how I BDD something different. Once we grasp these basic forms, it is then time to move on to how we employ our tools, i.e., our favorite plugins and gems, until we’ve mastered those as well.

Doesn’t this make us better craftsmen?

… that is, until someone introduces a better weapon.

I freely admit that this is not how I currently practice. I feel that my basic form is solid. However, I admit, I do need to better acquaint myself with my weapons. As of now, this is what I intend to practice. I will try to report on how it goes.

Posted by evan on Jan 11, 2010

Shouldn't developer tools be for developers first?

As (I believe) Jonas Nicklas pointed out in response to ThoughtBot’s post on integration testing, Cucumber is largely ineffectual in a project until you build up a library of domain-specific matchers and their respective code blocks.

What this represents is external domain-specific language design via regular expression.

The reasonable use of this is to facilitate the expression of desired behaviors by non-developers. For example, I have heard tell of organizations where QA people write automated acceptance tests using Cucumber.

I share the Cucumber team’s belief that customer communication is essential. That’s facilitated, in Cucumber, by writing features in plain text (Gherkin). Cost is saved when issues are addressed in specs versus code – likely why the waterfall model was so popular at first. It’s just common sense.

It’s a form of risk management.

Every feature goes hand in hand with the risk that it will fail to be implemented to meet customer expectation. A risk becomes an issue, in this case, when a feature is implemented in a fashion differing from the customer’s expectation. The earlier that risk is mitigated, the less money/effort/time is wasted on the project handling risks that manifest into issues.

One way of mitigating that risk is to communicate to the customer how that feature will behave in the language of the customer’s domain.

Coulda solves the same problem. It has a rake task for marshaling the Feature, contained Scenarios, and all of the statements (Givens/Whens/Thens) into plain text.

However, Coulda shares many of the considerations Dan cites above:

  • “Features” are just Test::Unit::TestCases
  • “Scenarios” are just tests
  • “Statements” (Given/When/Then) are just steps taken within a particular test

When I use Coulda on a project (I’ve used it on the job at this point), I don’t start with the intent of building a language. I know that I probably will, because I have, as my work progresses. I start by writing pending Features, pending Scenarios, pending Statements, and then I begin putting flesh on the bones. I refactor my tests. When I encounter duplication within a Feature, I Extract Method. When I encounter duplication across Features, I Extract Module (I’m guessing that’s in Ruby Refactoring but you likely get the intent: I create a module, move my method into the module, and include the module in my Coulda Features), and so on.
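The cross-Feature “Extract Module” step can be sketched in plain Ruby. I'm using bare classes here rather than Coulda's actual DSL, and `SignInSteps`, `CheckoutFeature`, and `ProfileFeature` are hypothetical names: the shared step simply moves into a module that each Feature includes.

```ruby
# The duplicated step, extracted into a module shared across Features.
module SignInSteps
  def sign_in_as(name)
    @current_user = name  # stand-in for the real sign-in steps
  end
end

# Since Coulda Features are ordinary classes underneath, plain old
# `include` is all the machinery you need.
class CheckoutFeature
  include SignInSteps
end

class ProfileFeature
  include SignInSteps
end

feature = CheckoutFeature.new
feature.sign_in_as("evan")
```

Because Features are just `Test::Unit::TestCase`s, every ordinary Ruby refactoring applies; no step-definition registry or regular expressions required.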

What bothers me so much about so many libraries, in general, is that they try to solve all of my problems.

That’s great, but I don’t want that, just like I don’t want to buy a car with a 500hp engine: I simply won’t need it. I just need a car.

So I kept Coulda simple. If you need more, great, then write it! If you find that you need the same thing repeatedly, great, make a new gem from it!

Automate what you need to do often. If you have an edge case, keep it out of your libraries, thank you! My brain has a hard enough time absorbing the ever-increasing size of the Rails API (and this from a guy who worked in J2EE hell for years).

Remember, these are the kinds of practices that gave us Rails. Don’t try to solve everyone’s problems. Focus on the frequently recurring problems. Scratch your own itch (I did!).

Posted by evan on Jan 04, 2010