Features of CUTE

Many enhancements have been added to CUTE to make working with the Eclipse plug-in more convenient, for example support for C++11 and more modern compilers, which allows easier and more comfortable test specification. The following features have been added to CUTE:

  • Test with relational operators and plug-in diff view ability on failure
  • Support for data-driven tests
  • Filtering to run specific tests or suites by their name (this is used by the plug-in now)
  • XML output in JUnit compatible format (suite names as given to the runner and test names as registered)
  • Attempt to support running CUTE on devices where iostream would be too much overhead (std::string is needed, however)

Tests with Relational Inequality Operators

There were some complaints that CUTE did not allow comparisons with other relational operators while still providing the nice value output of ASSERT_EQUAL. We heard that complaint and now provide macros for relational comparisons that populate the nice diff view in the plug-in with the compared values.

For example, look at the following test:

ASSERT(x > y);

If it fails, you have no idea which values of x and y caused the failure. You can either resort to a debugger to determine them, or add your own mechanism that always puts the values into a message string first:

std::ostringstream out;
out << "x was " << x << " and y was " << y;
ASSERTM(out.str(), x > y);

This tends to be clumsy, and you don't get the nice diff viewer, which helps greatly if x and y are longer strings. With CUTE you now get all relational operators supported as macros, and if any of those assertions fails, it generates output compatible with the CUTE plug-in's test viewer, which shows the compared values in a nice diff view when clicked. The spelling of the macros follows the names of the C++ standard library functors, but in upper case and prefixed with ASSERT_, for example:

void test_cute_assert_greater_equal_success(){
    int const x=4;
    int const y=4;
    ASSERT_GREATER_EQUAL(x,y);
}
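
Following that naming rule, std::greater maps to ASSERT_GREATER, so the ostringstream workaround above can presumably be reduced to a single assertion; a minimal sketch (the test name is made up):

void test_x_greater_than_y(){
    int const x=2;
    int const y=3;
    ASSERT_GREATER(x,y); // fails here, since 2 > 3 does not hold, and the plug-in's diff view shows both values
}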

Data-Driven Tests

Often you find yourself writing the same test code again and again if you want to test a set of values and expected results. Good style tells you to refactor your code, put the input and corresponding expected values into a table, and have a single test function interpret that table. This comes at a price: when a test fails, you don't know which table entry actually made it fail. Adding output of the values helps, but you still won't be able to quickly locate the culprit in the table, for example, if it was a false positive and you need to fix the expected value. Now you can have the best of both worlds: table-driven tests that report the location of the table entry in addition to the test function's name. Clicking on a failed test puts you onto the table entry that caused the failure. If everything works fine, you end up at the table-interpreting function. It is best shown with an example:

struct test_eq_data {
    double input,expected;
    cute::test_failure failure; // add a failure element to your table data
} eq_table [] = { // define the test table
    { 4,16,DDT() }, // use the macro DDT() to mark your table entry
    { 2.5,6.25,DDTM("compare well?")} // DDTM() takes an additional message for the failure
};
double square(double x){return x*x;}

void test_cute_data_driven_equality_demo(){
    test_eq_data const*const end=eq_table+sizeof(eq_table)/sizeof(*eq_table);
    for(test_eq_data *it=eq_table; it != end; ++it){
        ASSERT_EQUAL_DDT(it->expected,square(it->input),it->failure);
    } // use macros with suffix _DDT for table entries; the last argument is the test failure remembered by DDT()
}
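
Such a data-driven test is registered like any plain CUTE test; a minimal sketch, using the suite registration shown in the next section:

cute::suite s { };
s.push_back(CUTE(test_cute_data_driven_equality_demo)); // a single suite entry interprets the whole table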

Test Filtering

Both the default main program created by the plug-in and the test runner are now parameterized with the arguments argc and argv. If you give test or suite names on the command line, only the named tests are executed. A test within a suite is named using the suite name, a hash/pound symbol ('#'), and the test name. If you have existing CUTE main programs, you need to adapt them accordingly.

#include "cute.h"
#include "ide_listener.h"
#include "xml_listener.h"
#include "cute_runner.h"
#include <cstdlib> // EXIT_SUCCESS, EXIT_FAILURE

void thisIsATest() { // the test registered below
    ASSERT_EQUAL(42, 6 * 7);
}

bool runAllTests(int argc, char const *argv[]) {
    cute::suite s { };
    s.push_back(CUTE(thisIsATest));
    cute::xml_file_opener xmlfile(argc, argv);
    cute::xml_listener<cute::ide_listener<>> lis(xmlfile.out);
    auto runner = cute::makeRunner(lis, argc, argv);
    bool success = runner(s, "AllTests");
    return success;
}

int main(int argc, char const *argv[]) {
    return runAllTests(argc, argv) ? EXIT_SUCCESS : EXIT_FAILURE;
}

Running the above program as...

$ cutest 'AllTests#thisIsATest'

...will select and run the single test. You can pass any number of test names on the command line. But again, the Eclipse plug-in provides a more convenient method to re-run failing tests.
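
If you do work on the command line, you should also be able to select a whole suite by its name, or pass several test names at once (the second test name below is made up for illustration):

$ cutest AllTests
$ cutest 'AllTests#thisIsATest' 'AllTests#thisIsAnotherTest'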

XML Output

The above runAllTests function will also create an XML file named after the program, with an .xml extension. If you pass argc as zero, the file name will be testresult.xml. This makes it more convenient to integrate your test executable into the build process of a build server that expects JUnit-compatible XML, such as Jenkins.
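
As a rough sketch of the argc-zero case mentioned above (an empty argv array is passed defensively, since the text does not say whether a null argv is acceptable):

char const *noargv[] = { nullptr };
cute::xml_file_opener xmlfile(0, noargv); // with argc == 0 the output file is named testresult.xml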

CUTE Tests on Small Embedded Devices

This is an experimental feature that allows CUTE tests using ASSERT_EQUAL to run on limited hardware, where using iostream for numeric conversions is too expensive. If you compile your CUTE tests using...

#define DONT_USE_IOSTREAM 1

...or by setting the macro on the compiler command line with "-DDONT_USE_IOSTREAM=1", CUTE shouldn't include any iostream headers. However, you might need to adjust your main function or runner as well, for example to pass the output of the tests over a serial line instead. Feedback on the feasibility of that feature is highly welcome.
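
For illustration, the command-line variant might look roughly like the following g++ invocation (the include path and file names are placeholders):

$ g++ -std=c++11 -DDONT_USE_IOSTREAM=1 -I /path/to/cute Test.cpp -o cutest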