Testing different implementations of an interface

Added by Gerd Hirsch 11 months ago

Hello Cute Developers,

Problem:
Testing different implementations (systems under test, SUT) of an interface

Solution (as it is possible with JUnit):
Define the tests as "Template Methods" (GoF) in an abstract TestBaseClass and create the SUT in the tests via a "Factory Method" (GoF): InterfaceName& createSUT().
For each implementation, provide a subclass of the TestBaseClass with the factory method InterfaceName& createSUT() { return implementation; }
and use createSUT() in the tests.
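
A minimal sketch of this pattern in plain C++ (the Stack interface and its two implementations are illustrative examples, not from the original post):

```cpp
#include <cassert>
#include <list>
#include <memory>
#include <vector>

// Illustrative interface with two implementations under test.
struct Stack {
    virtual ~Stack() = default;
    virtual void push(int v) = 0;
    virtual int pop() = 0;
};

struct VectorStack : Stack {
    std::vector<int> data;
    void push(int v) override { data.push_back(v); }
    int pop() override { int v = data.back(); data.pop_back(); return v; }
};

struct ListStack : Stack {
    std::list<int> data;
    void push(int v) override { data.push_back(v); }
    int pop() override { int v = data.back(); data.pop_back(); return v; }
};

// The tests live once in the base class (Template Methods);
// createSUT() is the Factory Method each subclass overrides.
struct StackTestBase {
    virtual ~StackTestBase() = default;
    virtual std::unique_ptr<Stack> createSUT() = 0;
    void testPushPop() {
        auto sut = createSUT();
        sut->push(42);
        assert(sut->pop() == 42);
    }
};

// One subclass per implementation.
struct VectorStackTest : StackTestBase {
    std::unique_ptr<Stack> createSUT() override { return std::make_unique<VectorStack>(); }
};

struct ListStackTest : StackTestBase {
    std::unique_ptr<Stack> createSUT() override { return std::make_unique<ListStack>(); }
};
```

Each concrete test class then reuses every test defined in StackTestBase against its own implementation.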

With CUTE, however, you have to repeat the registration of all tests of the TestBaseClass in make_suite() of each derived test. My solution with CUTE is a macro, defined in the TestBaseClass and used in the subclass:

class DemoBaseTest {
public:
    virtual InterfaceName& createSUT() = 0;
    void test1();
    void test2();
};

#define BaseTests(DerivedClass) \
    s.push_back(CUTE_SMEMFUN(DerivedClass, test1)); \
    s.push_back(CUTE_SMEMFUN(DerivedClass, test2));

In the derived class DefaultBaseTest:

inline
cute::suite DefaultBaseTest::make_suite() {
    cute::suite s;
    BaseTests(DefaultBaseTest)
    return s;
}

But I don't like macros!

Question:
How can this be done better with CUTE?

Thanks for your support,
and for developing this nice testing framework and the Eclipse integration.


Replies (4)

RE: Testing different implementations of an interface - Added by Thomas Corbat 11 months ago

Hi Gerd

If I understand you correctly you'd like to get rid of the TestBaseClass macro, right?

I suggest you use something like the following:

template<typename DERIVED>
cute::suite create_suite() {
  cute::suite suite{};
  suite.push_back(CUTE_SMEMFUN(DERIVED, test1));
  suite.push_back(CUTE_SMEMFUN(DERIVED, test2));
  //...
  return suite;
}

You can create suites for your derived types as follows:

  cute::suite s = create_suite<DerivedFromDemoBaseTest>();
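
For readers without CUTE at hand, the effect of CUTE_SMEMFUN can be sketched in plain C++ (the suite type here is a stand-in for cute::suite, not CUTE's actual implementation):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Stand-in for cute::suite: a list of runnable tests.
using suite = std::vector<std::function<void()>>;

struct DemoBaseTest {
    virtual ~DemoBaseTest() = default;
    virtual int answer() = 0;  // stands in for the createSUT() factory method
    void test1() { assert(answer() > 0); }
    void test2() { assert(answer() == 42); }
};

struct DerivedDemoTest : DemoBaseTest {
    int answer() override { return 42; }
};

// Analogue of the create_suite<DERIVED>() template above: register the
// base class tests once, for any derived test class.
template <typename DERIVED>
suite create_suite() {
    suite s;
    s.push_back([] { DERIVED t; t.test1(); });  // like CUTE_SMEMFUN(DERIVED, test1)
    s.push_back([] { DERIVED t; t.test2(); });  // like CUTE_SMEMFUN(DERIVED, test2)
    return s;
}
```

The function template replaces the macro: each derived test class gets its own suite without repeating the registration by hand.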

Does this help?

Regards
Thomas

RE: Testing different implementations of an interface - Added by Gerd Hirsch 10 months ago

Hi Thomas,
Thanks, that helps. I'm annoyed that I didn't hit on it myself. My solution can be found at https://github.com/GerdHirsch
Another problem: my tests are themselves templates. I suppose that's the reason why it's not possible to navigate from successful tests in the "Test Result" view to the corresponding source code,
which works with non-template (normal) classes.

RE: Testing different implementations of an interface - Added by Thomas Corbat 10 months ago

Gerd Hirsch wrote:

Hi Thomas,
Thanks, that helps. I'm annoyed that I didn't hit on it myself. My solution can be found at https://github.com/GerdHirsch
Another problem: my tests are themselves templates. I suppose that's the reason why it's not possible to navigate from successful tests in the "Test Result" view to the corresponding source code,
which works with non-template (normal) classes.

Yes, that is an inconvenience inherited from the macros (__FILE__ and __LINE__) used to determine the ASSERT location. I expect the navigation ends up in the template, right? While it is actually the source code that causes the test to fail, it originates from a template instance, and the source code is the same for all instances.

It is a bit tricky to change this behavior. You would need to parameterize the file and line somehow. The problem here is the origin of this information: currently, we use the ASSERT macro for that. If I got it right, you would like to jump to the location of the CUTE macro call that registers the test in the suite. Adapting CUTE to this behavior would require passing that information from the CUTE call through the test functor to the ASSERT. That would break open the whole implementation and require your tests to pass that information to the ASSERT themselves.

You could also extend your make_suite function to take __FILE__ and __LINE__ and pass them to the created test functors. This would require a change in the construction of the test functors: currently, CUTE instantiates them (with the default constructor). You would need to create the instances in make_suite yourself and store that information. Later you could pass it to custom ASSERT macros that don't figure out the line and file themselves, but take this information as parameters.

I don't think either solution is great. So I suggest sticking to some other means of identifying the failing component, e.g. naming. If you want to get the failing component in the expected-actual comparison output, you might wrap the ASSERT macro with another ASSERT that prepends the name of the component (if you can identify it somehow, or use typeid otherwise) to the message of the ASSERT.
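
One way to sketch that wrapping idea in plain C++ (assert_m here stands in for CUTE's ASSERTM(msg, cond); the wrapper macro name is made up for illustration):

```cpp
#include <stdexcept>
#include <string>
#include <typeinfo>

// Stand-in for CUTE's ASSERTM(msg, cond): fail with the given message.
inline void assert_m(const std::string& msg, bool cond) {
    if (!cond) throw std::runtime_error(msg);
}

// Hypothetical wrapper: prepend the concrete SUT type (via typeid) to the
// message, so a failure inside a shared template test still names the
// implementation that failed.
#define ASSERT_FOR_SUT(sut, cond) \
    assert_m(std::string(typeid(sut).name()) + ": " + #cond, (cond))

// Illustrative implementation type.
struct SomeImpl {};
```

The failure message then carries the (mangled) type name of the implementation, which is usually enough to tell the template instances apart.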

We actually encountered a similar problem while testing refactoring plug-ins for Cevelop. The actions performed (executing an automated refactoring) are usually the same, but we run them on many different inputs. We moved the target source code into refactoring test suite files (just plain C++ code). With this approach we have the same issue in JUnit as we have in CUTE with templates containing ASSERTs. We eventually ended up writing an extension for the Eclipse JUnit component to provide the possibility to jump into our refactoring test suites. But that might not be the way you want to go either. :)
