## Extensions

The doctest header doesn't include any external or stdlib headers in its interface part in order to provide the most optimal build times, but that means it is limited in what it can provide as functionality - that's when extensions come into play. They are located as header files in [`doctest/extensions`](../../doctest/extensions) and each of them is documented in a section here.

# [Utils](../../doctest/extensions/doctest_util.h)

nothing here yet...

# [Distributed tests with MPI](../../doctest/extensions/doctest_mpi.h)

[Bruno Maugars and Bérenger Berthoul, ONERA]

Testing code over distributed processes requires support from the testing framework. **Doctest** support for MPI parallel communication is provided in the ```"doctest/extensions/doctest_mpi.h"``` header.

## Example

See [**the complete test**](../../examples/mpi/mpi.cpp) and [**the configuration of main()**](../../examples/mpi/main.cpp).

### MPI_TEST_CASE

```c++
#include "doctest/extensions/doctest_mpi.h"

int my_function_to_test(MPI_Comm comm) {
  int rank;
  MPI_Comm_rank(comm,&rank);
  if (rank == 0) {
    return 10;
  }
  return 11;
}


MPI_TEST_CASE("test over two processes",2) { // Parallel test on 2 processes
  int x = my_function_to_test(test_comm);

  MPI_CHECK( 0, x==10 ); // CHECK for rank 0, that x==10
  MPI_CHECK( 1, x==11 ); // CHECK for rank 1, that x==11
}
```

An ```MPI_TEST_CASE``` is like a regular ```TEST_CASE```, except that it takes a second argument, which is the number of processes needed to run the test. If the number of processes is less than 2, the test will fail. If the number of processes is greater than or equal to 2, it will create a sub-communicator over 2 processes, called ```test_comm```, and execute the test over these processes.
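The sub-communicator behaviour can be sketched as follows. This is an illustrative test (not taken from the examples); the test name and body are hypothetical, but they only rely on the guarantee stated above, namely that ```test_comm``` spans exactly as many processes as the test's second argument:

```c++
#include "doctest/extensions/doctest_mpi.h"

MPI_TEST_CASE("test_comm spans exactly 2 processes",2) {
  // Even if the executable was launched with more than 2 processes,
  // test_comm is a sub-communicator of size 2, and the test body only
  // runs on the processes belonging to it.
  int size;
  MPI_Comm_size(test_comm,&size);
  CHECK( size == 2 );
}
```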
Three objects are provided by ```MPI_TEST_CASE```:
* ```test_comm```, of type ```MPI_Comm```: the MPI communicator on which the test is running,
* ```test_rank``` and ```test_nb_procs```, two ```int```s giving respectively the rank of the current process in ```test_comm``` and the size of ```test_comm```. These last two are just here for convenience and could be retrieved from ```test_comm```.

We always have:

```c++
MPI_TEST_CASE("my_test",N) {
  CHECK( test_nb_procs == N );
  MPI_CHECK( i, test_rank==i ); // for any i<N
}
```

### Assertions

It is possible to use regular assertions in an ```MPI_TEST_CASE```. MPI-specific assertions are also provided; they are all prefixed with ```MPI_``` (```MPI_CHECK```, ```MPI_ASSERT```...). Their first argument is the rank on which they are checked, and the second is the usual expression to check.

## The main entry points and MPI reporters

You need to launch the unit tests with an ```mpirun``` or ```mpiexec``` command:

```
mpirun -np 2 unit_test_executable.exe
```

```doctest::mpi_init_thread()``` must be called before running the unit tests, and ```doctest::mpi_finalize()``` at the end of the program. Also, using the default console reporter would result in each process writing everything to the same place, which is not what we want. Two reporters are provided and can be enabled.
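Since the MPI reporters are selected by name through doctest's regular reporter mechanism, they can presumably also be chosen on the command line via doctest's usual ```--reporters``` option instead of in code; the executable name here is illustrative:

```
mpirun -np 2 unit_test_executable.exe --reporters=MpiConsoleReporter
```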
A complete ```main()``` would be:

```c++
#define DOCTEST_CONFIG_IMPLEMENT

#include "doctest/extensions/doctest_mpi.h"

int main(int argc, char** argv) {
  doctest::mpi_init_thread(argc,argv,MPI_THREAD_MULTIPLE); // Or any MPI thread level

  doctest::Context ctx;
  ctx.setOption("reporters", "MpiConsoleReporter");
  ctx.setOption("reporters", "MpiFileReporter");
  ctx.setOption("force-colors", true);
  ctx.applyCommandLine(argc, argv);

  int test_result = ctx.run();

  doctest::mpi_finalize();

  return test_result;
}
```

### MpiConsoleReporter

The ```MpiConsoleReporter``` should be substituted for the default reporter. It does the same as the default console reporter for regular assertions, but only outputs on process 0. For MPI test cases, if there is a failure it reports which process failed:

```
[doctest] doctest version is "2.4.0"
[doctest] run with "--help" for options
===============================================================================
[doctest] test cases: 171 | 171 passed | 0 failed | 0 skipped
[doctest] assertions: 864 | 864 passed | 0 failed |
[doctest] Status: SUCCESS!
std_e_mpi_unit_tests
[doctest] doctest version is "2.4.0"
[doctest] run with "--help" for options
===============================================================================
path/to/test.cpp:30:
TEST CASE: my test case

On rank [2] : path/to/test.cpp:35: CHECK( x==-1 ) is NOT correct!
  values: CHECK( 0 == -1 )

===============================================================================
[doctest] test cases: 2 | 2 passed | 0 failed | 0 skipped
[doctest] assertions: 2 | 2 passed | 0 failed |
[doctest] Status: SUCCESS!
===============================================================================
[doctest] assertions on all processes: 5 | 4 passed | 1 failed |
===============================================================================
[doctest] fail on rank:
  -> On rank [2] with 1 test failed
[doctest] Status: FAILURE!
```

If the test executable is launched with fewer processes than the number of processes required by a test, the test is skipped and marked as such by the MPI console reporter:

```c++
MPI_TEST_CASE("my_test",3) {
  // ...
}
```

```
mpirun -np 2 unit_test_executable.exe
```

```
===============================================================================
[doctest] test cases: 1 | 1 passed | 0 failed | 1 skipped
[doctest] assertions: 1 | 1 passed | 0 failed |
[doctest] Status: SUCCESS!
===============================================================================
[doctest] assertions on all processes: 1 | 1 passed | 0 failed |
[doctest] WARNING: Skipped 1 test requiring more than 2 MPI processes to run
===============================================================================
```

### MpiFileReporter

The ```MpiFileReporter``` just prints the result of each process to its own file, named ```doctest_[rank].log```. Only use this reporter as a debugging facility, when you want to know exactly what is going on when a parallel test case is failing.

### Other reporters

Other reporters (JUnit, XML, ...) are not supported directly, which means that while you can always print the result of each process to its own file, there is (currently) no equivalent of the ```MpiConsoleReporter``` that aggregates the results of all processes.

## Note

This feature is provided to unit-test mpi-distributed code.
It is **not** a way to parallelize many unit tests over several processes (for that, see [**the example python script**](../../examples/range_based_execution.py)).

## TODO

* Pass the ```s``` member variable of ```ConsoleReporter``` as an argument to its member functions so that they can be used with another object (would help to factorize ```MPIConsoleReporter```)
* Only ```MPI_CHECK``` is tested; ```MPI_REQUIRE``` and exception handling are not tested at all
* More testing, automatic testing
* Packaging: create a new target ```mpi_doctest```? (probably cleaner to depend explicitly on MPI for mpi/doctest.h)
* Later, maybe: have a general mechanism to represent assertions so we can separate the report format (console, xml, junit...) from the reporting strategy (sequential vs. MPI)

---------------

[Home](readme.md#reference)

<p align="center"><img src="../../scripts/data/logo/icon_2.svg"></p>