C++ Unit Testing - FAQ


Some questions I asked myself and the answers from the knowledgeable folks who helped me:

What does LL_ADD_PROJECT_UNIT_TESTS do really?

This is described in the "CMake" section of the C++ Unit Testing - How It Works page, and of course in the code itself, if you're really curious.

What gets executed exactly?

The test executable itself, PROJECT_project_TEST_foo.exe, is stored alongside the project executable(s). When building the RelWithDebInfo viewer, for instance, your test executable will be in \indra\build-vc80\newview\RelWithDebInfo.

The test executable is run pretty much "as is", with a log file as its output argument.

How can I execute one unit test individually?

Several methods:

  • Build and run the PROJECT_project_TEST_foo target
  • Build the project_tests target. Building that target builds all the tests of that project as dependencies and actually executes them.
  • Build the PROJECT_project_TEST_foo target and manually run the resulting PROJECT_project_TEST_foo binary. This enables you to use debuggers, profiling tools, etc.

What's happening with precompiled headers?

If you're having trouble with precompiled headers, you need to add the precompiled header source in question to your test build.

For instance, here is how that issue is solved in newview:

# Add tests
include(LLAddBuildTest)
set(viewer_TEST_SOURCE_FILES
  llworldmipmap.cpp
  )
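# Compile llviewerprecompiledheaders.cpp into each test as an additional source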
set_source_files_properties(
  ${viewer_TEST_SOURCE_FILES}
  PROPERTIES
    LL_TEST_ADDITIONAL_SOURCE_FILES llviewerprecompiledheaders.cpp
  )
LL_ADD_PROJECT_UNIT_TESTS(viewer "${viewer_TEST_SOURCE_FILES}")

Any project that uses precompiled headers (hint: almost all of them) will need to add the PCH source file to the test's additional source files, as above, for all sources in the test project.

What does the "project" argument in the macro mean?

Looking into LLAddBuildTest.cmake, you may wonder what the "project" parameter (the first argument to the LL_ADD_PROJECT_UNIT_TESTS macro) stands for. It is used to add build dependencies between the tests and a project target. For instance, when adding tests to newview, you want to add that dependency to the viewer project so that your tests get executed when that project is built. You should use the project that the source you are testing primarily compiles into.

How do I test a private method?

The short answer is: don't.

The primary value of your unit tests isn't for you, the original coder -- it's for other coders who will much later touch some parts of your class. Your unit tests ensure that the public behavior of your class -- the behavior on which other classes depend -- doesn't change in unexpected ways.

We explicitly want a maintenance coder to be able to change, refactor or remove a private method without having to tweak your tests.

Can I unit test several classes defined in the same file?

Yes, you can. Simply repeat the tut pattern as required for each class. Don't forget to restart the test numbering at 1 for each class.
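
For instance, here is a minimal sketch of two tut groups living in one test file. LLAlpha, LLBeta and their methods are invented for illustration, and the consolidated TUT header <tut/tut.hpp> is assumed; in the viewer tree you would include the real header under test, and the harness linked in by LL_ADD_PROJECT_UNIT_TESTS supplies the main() that runs the groups:

#include <tut/tut.hpp>

// Stand-ins for the two classes under test (hypothetical)
class LLAlpha { public: bool isClean() const { return true; } };
class LLBeta  { public: int  count()   const { return 0; } };

namespace tut
{
    // First group: tests for LLAlpha
    struct alpha_data { LLAlpha mAlpha; };
    typedef test_group<alpha_data> alpha_group;
    typedef alpha_group::object alpha_object;
    alpha_group alpha_testgroup("LLAlpha");

    template<> template<>
    void alpha_object::test<1>()   // numbering starts at 1
    {
        ensure("alpha is clean", mAlpha.isClean());
    }

    // Second group: tests for LLBeta
    struct beta_data { LLBeta mBeta; };
    typedef test_group<beta_data> beta_group;
    typedef beta_group::object beta_object;
    beta_group beta_testgroup("LLBeta");

    template<> template<>
    void beta_object::test<1>()    // numbering restarts at 1 for this class
    {
        ensure_equals("beta starts empty", mBeta.count(), 0);
    }
}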

I'm not 100% sure it's kosher, but I find it very, very helpful. In one case, I have a class that's rather atomic (a little more complex than a scalar type, say) and those objects are then handled by a more complex object in std::vector and std::map containers. The real test for the complex class is to know whether I screwed up in my handling of the containers and the objects in them (i.e. does the interface of the principal class create, search and clean up those containers correctly?). Having all of them under the same roof allows non trivial tests to be written.

That being said, don't abuse this to test a mixture of apples and oranges (though it's quite good in a salad with cinnamon, but that's another subject...). That's a good moment to think about the value of having those classes in the same file in the first place.

How can I make sure my tests are correct?

Write your tests so that they fail while you're coding them, then clean up and fix them.

I got to scratch my head and wonder whether a "pass" result actually means anything. What if the test always passes? I found it best to write the test and, initially, write an ensure that fails. Some patterns I used (both are sketched after the list):

  • methods returning a success value (boolean or other): initially test against the inverse of the value you're expecting in the ensure. Also write at least 2 calls in the same test, one that returns true and one that returns false, so that you cover the fact that the method can and does return both.
  • setters and accessors: call the set method and immediately check the get method in the ensure. First fail the test by comparing the returned value with something obviously wrong, then fix it. You also get the satisfaction of exercising 2 calls in one test.
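
A minimal sketch of both patterns (LLCounter and its methods are invented for illustration):

#include <tut/tut.hpp>

// Hypothetical class under test
class LLCounter
{
public:
    LLCounter() : mValue(0) {}
    bool setValue(int v)             // returns false on invalid input
    {
        if (v < 0) return false;
        mValue = v;
        return true;
    }
    int getValue() const { return mValue; }
private:
    int mValue;
};

namespace tut
{
    struct counter_data { LLCounter mCounter; };
    typedef test_group<counter_data> counter_group;
    typedef counter_group::object counter_object;
    counter_group counter_testgroup("LLCounter");

    template<> template<>
    void counter_object::test<1>()
    {
        // Cover both the true and the false return paths
        ensure("valid value accepted", mCounter.setValue(10));
        ensure("negative value refused", !mCounter.setValue(-1));

        // Setter immediately followed by accessor. While writing this,
        // temporarily compare against an obviously wrong value to watch
        // the test fail, then fix it back.
        mCounter.setValue(10);
        ensure_equals("set then get", mCounter.getValue(), 10);
    }
}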

Can I put several sub tests in one "::test" method?

Yes, but don't abuse it, because there's a cost to this.

The reason you want to do this is that tests create a lot of boilerplate code. If you test something like building a map, for instance, you'll need to populate and clear that map for each individual test. My nose bleeds after too many push-ups like that... so I usually group related tests into one single "::test" method.

The catch you need to know about, though, is that execution stops at the first failed test. So if you put 30 ensure calls in one single test function and the first ensure fails, you'll get only 1 error, with the message for that first failed ensure. Then you might be in for a long debug session, fixing each failure through an annoying "fix, build, run" cycle. So think about the time someone will ping you on IRC 3 months from now to say that your unit test fails and they need your help. If you have only 1 error message to help you, that might be too little to evaluate the problem.

Think about grouping your tests into logical units. For instance, I usually write one test for all the relatively simple and safe setters and accessors (the things that change the state of the object). Then I write a different test for each group of containers (if any, of course), making sure that clearing, inserting and iterating work as expected. I create other groups of tests as appropriate. For a class with 20 public methods, I end up with 4 or 5 tests, each having 10 or so sub tests (ensure or fail) in it. That seems to be a good balance.
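
For example, a minimal sketch of one "::test" method holding several related sub tests (the map fixture is invented; note the descriptive messages, since the first failing ensure is the only one you'll hear about):

#include <tut/tut.hpp>
#include <map>
#include <string>

namespace tut
{
    struct map_data { std::map<std::string, int> mMap; };
    typedef test_group<map_data> map_group;
    typedef map_group::object map_object;
    map_group map_testgroup("container handling");

    template<> template<>
    void map_object::test<1>()    // one test, several related sub tests
    {
        // Sub test 1: insertion
        mMap["first"] = 1;
        mMap["second"] = 2;
        ensure_equals("insert: size is 2", mMap.size(), 2u);

        // Sub test 2: lookup
        ensure_equals("lookup: 'second' maps to 2", mMap["second"], 2);

        // Sub test 3: clearing
        mMap.clear();
        ensure("clear: map is empty", mMap.empty());
    }
}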

Should I stub everything?

Yes, well, everything that shows up as a link error at least. If you end up having to stub too many things, it could be a sign that you need to cut your code into several modules. What all this stubbing shows is that your code's dependencies on the rest of the codebase are really, really hairy...

The question is, what code are you unit-testing? Usually the answer is: the specific methods that you just added or changed. In that case, you want to stub external classes. Put differently: if you're working on class A, which depends on a large class B, and you drag in all of B in order to test A, then you're really testing A+B rather than A alone. If your unit test later breaks -- quick, in which class do you look? Better to test A alone -- and presumably later add unit tests for B alone.
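
As a minimal sketch of the mechanics (LLNetworkLayer and its send method are invented names): the test compiles against the dependency's declaration but supplies its own trivial body for it, so the linker never drags in the real implementation:

#include <string>

// Declaration as it would appear in the dependency's header (hypothetical)
class LLNetworkLayer
{
public:
    bool send(const std::string& msg);
};

// Stub defined in the test build instead of linking the real
// implementation file: it resolves the link error and nothing more.
bool LLNetworkLayer::send(const std::string&)
{
    return true;    // pretend every send succeeds
}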

Shouldn't I use simulators?

Yes, for some classes that are so basic they can be considered scalar types (this is what we do in newview with the precompiled classes, for instance). For more complex categories, that might be justifiable, and it is done or under way for things like fake messaging.

Please coordinate fake classes of general utility on the relevant mailing lists.

How can I test a method that returns "void"?

I chose a try/catch trick but that works only if the code raises exceptions, so it's not likely to do much good if the code simply crashes. Other choices are to always return something (a status) or to use a "check consistency" method on the class. Testing for state changes on the object is also a valid way of testing this kind of method.

Michael Feathers talks about how to sense the behavior of a void method (see Useful Links on the main Unit tests page).
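
A minimal sketch of sensing a void method through state changes (LLToggle is invented for illustration):

#include <tut/tut.hpp>

// Hypothetical class with a void method
class LLToggle
{
public:
    LLToggle() : mOn(false) {}
    void flip() { mOn = !mOn; }          // returns void
    bool isOn() const { return mOn; }
private:
    bool mOn;
};

namespace tut
{
    struct toggle_data { LLToggle mToggle; };
    typedef test_group<toggle_data> toggle_group;
    typedef toggle_group::object toggle_object;
    toggle_group toggle_testgroup("LLToggle");

    template<> template<>
    void toggle_object::test<1>()
    {
        ensure("starts off", !mToggle.isOn());
        mToggle.flip();                               // the void call...
        ensure("flip turned it on", mToggle.isOn()); // ...sensed via state
    }
}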

How can I write "non trivial" tests?

That's the frustrating part at the beginning because, except for methods that perform a true computation on the parameters passed, it's hard to exercise any meaningful code in a completely context free way. The answer may be "simulators" or "fake classes" (see above). There's quite a lot, however, that you ought to be doing. Think about the following:

  • Can you test the state of the object? If you can, check it after calls that return void, for instance, or that seem to do little.
  • Can you create a fail test? Standing on its own, the object should react with some consistency and report that things don't work when called inappropriately. That's a valid test.
  • Can you sequence some calls and test? Often a call does little of use by itself, but sequencing a bunch of them gets the object into an interesting state. Build those cases and test that the object does reach the intended state. (A sketch combining these ideas follows the list.)
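
A minimal sketch combining a fail test with a sequence of calls (LLQueue is invented for illustration):

#include <tut/tut.hpp>
#include <cstddef>
#include <deque>

// Hypothetical class under test
class LLQueue
{
public:
    bool pop()                          // reports failure when empty
    {
        if (mItems.empty()) return false;
        mItems.pop_front();
        return true;
    }
    void push(int v) { mItems.push_back(v); }
    std::size_t size() const { return mItems.size(); }
private:
    std::deque<int> mItems;
};

namespace tut
{
    struct queue_data { LLQueue mQueue; };
    typedef test_group<queue_data> queue_group;
    typedef queue_group::object queue_object;
    queue_group queue_testgroup("LLQueue");

    template<> template<>
    void queue_object::test<1>()
    {
        // Fail test: the empty object must report failure consistently
        ensure("pop on empty queue fails", !mQueue.pop());

        // Sequence: individual calls do little, the sequence is the test
        mQueue.push(1);
        mQueue.push(2);
        mQueue.push(3);
        ensure("pop succeeds after pushes", mQueue.pop());
        ensure_equals("two items remain", mQueue.size(), 2u);
    }
}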