We're in the process of converting the existing testsuite machinery to
use the new-style DejaGnu framework. Eventually, we'll abandon
../mkcheck.in in favor of the new framework.

// 1: Thoughts on naming test cases, and structuring them.
The testsuite directory has been divided into 11 directories, directly
correlated to the relevant chapters in the standard. For example, the
directory testsuite/21_strings contains tests related to "Chapter 21,
Strings library" in the C++ standard.

So, the first step in making a new test case is to choose the correct
directory. The second step is to see whether a test file already exists
for the item in question. Generally, within a chapter, test files are
named after the section headings in ISO 14882, the C++ standard. For
instance,

21.3.7.9 Inserters and Extractors

has a related test case:
21_strings/inserters_extractors.cc

Not so hard. Sometimes the words "ctor" and "dtor" are used instead
of "construct", "constructor", "cons", "destructor", etc. Other than
that, the naming is mostly consistent. If the file exists, add a
test to it. If it does not, then create a new file. All files are
copyrighted by the FSF and GPL'd: this is very important.

In addition, some of the locale and io tests exercise different
instantiating types: thus, 'char' or 'wchar_t' is appended to the name
constructed above.

Also, some test files are negative tests. That is, they are supposed
to fail (usually this involves making sure some kind of construct gets
an error when it's compiled). These test files have 'neg' appended to
the name constructed above.

Inside a test file, the plan is to test the relevant parts of the
standard, and then add specific regressions as additional test
functions; i.e., test04() can represent a specific regression noted in
GNATS. Once test files get unwieldy or too big, they should be
broken up into multiple sub-categories, hopefully intelligently named
after the relevant (and more specific) part of the standard.
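
To illustrate this layout, here is a rough sketch of how a test file
along the lines of 21_strings/inserters_extractors.cc might be
structured. The function names, headers, and checks below are purely
illustrative and are not copied from the actual file, and the FSF
copyright and GPL notice that every real test file carries is omitted
here for brevity:

// 21.3.7.9 inserters and extractors

#include <string>
#include <sstream>
#include <cassert>

// Exercise behaviour required by the standard.
void test01()
{
  std::string str("twinkies");
  std::ostringstream oss;
  oss << str;                   // operator<<(ostream&, const string&)
  assert( oss.str() == str );
}

// A hypothetical specific regression, e.g. one noted in GNATS.
void test02()
{
  std::istringstream iss("bright copper kettles");
  std::string word;
  iss >> word;                  // operator>>(istream&, string&)
  assert( word == "bright" );
}

int main()
{
  test01();
  test02();
  return 0;                     // zero return indicates success
}
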

// 2: How to write a test case, from a dejagnu perspective
As per the dejagnu instructions, always return 0 from main to indicate
success.

Basically, a test case contains dg-keywords (see dg.exp) indicating
what to do and what kinds of behaviour are to be expected. New
testcases should be written with the new-style DejaGnu framework in
mind.

To ease the transition, here is the dg-keyword documentation lifted
from dg.exp -- eventually we should improve the DejaGnu documentation,
but getting a checkin account currently demands a Pyrrhic effort.

# The currently supported options are:
#
# dg-prms-id N
#    set prms_id to N
#
# dg-options "options ..." [{ target selector }]
#    specify special options to pass to the tool (eg: compiler)
#
# dg-do do-what-keyword [{ target/xfail selector }]
#    `do-what-keyword' is tool specific and is passed unchanged to
#    ${tool}-dg-test.  An example is gcc where `keyword' can be any of:
#    preprocess|compile|assemble|link|run
#    and will do one of: produce a .i, produce a .s, produce a .o,
#    produce an a.out, or produce an a.out and run it (the default is
#    compile).
#
# dg-error regexp comment [{ target/xfail selector } [{.|0|linenum}]]
#    indicate an error message <regexp> is expected on this line
#    (the test fails if it doesn't occur)
#    Linenum=0 for general tool messages (eg: -V arg missing).
#    "." means the current line.
#
# dg-warning regexp comment [{ target/xfail selector } [{.|0|linenum}]]
#    indicate a warning message <regexp> is expected on this line
#    (the test fails if it doesn't occur)
#
# dg-bogus regexp comment [{ target/xfail selector } [{.|0|linenum}]]
#    indicate a bogus error message <regexp> used to occur here
#    (the test fails if it does occur)
#
# dg-build regexp comment [{ target/xfail selector }]
#    indicate the build used to fail for some reason
#    (errors covered here include bad assembler generated, tool crashes,
#    and link failures)
#    (the test fails if it does occur)
#
# dg-excess-errors comment [{ target/xfail selector }]
#    indicate excess errors are expected (any line)
#    (this should only be used sparingly and temporarily)
#
# dg-output regexp [{ target selector }]
#    indicate the expected output of the program is <regexp>
#    (there may be multiple occurrences of this; they are concatenated)
#
# dg-final { tcl code }
#    add some tcl code to be run at the end
#    (there may be multiple occurrences of this; they are concatenated)
#    (unbalanced braces must be \-escaped)
#
# "{ target selector }" is a list of expressions that determine whether the
# test succeeds or fails for a particular target, or in some cases whether the
# option applies for a particular target. In the case of `dg-do' it specifies
# whether the testcase is even attempted on the specified target.
#
# The target selector is always optional. The format is one of:
#
# { xfail *-*-* ... } - the test is expected to fail for the given targets
# { target *-*-* ... } - the option only applies to the given targets
#
# At least one target must be specified; use *-*-* for "all targets".
# At present it is not possible to specify both `xfail' and `target'.
# "native" may be used in place of "*-*-*".

Example 1: Testing compilation only
(to have a testcase do only compile testing, without linking and executing)
// { dg-do compile }

Example 2: Testing for expected warnings on line 36
// { dg-warning "string literals" "" { xfail *-*-* } 36 }

Example 3: Testing for compilation errors on line 41
// { dg-do compile }
// { dg-error "no match for" "" { xfail *-*-* } 41 }
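
Putting these pieces together, a complete negative test might look
something like the following sketch. It is purely hypothetical: the
struct name and the line number 12 are invented for illustration, the
"no match for" regexp simply follows Example 3, and in a real test the
regexp has to match the compiler's actual diagnostic while the number
has to point at the offending source line. Per section 1, such a file
would also carry a 'neg' suffix in its name.

// { dg-do compile }
// { dg-error "no match for" "" { xfail *-*-* } 12 }

#include <ostream>

// A type with no inserter defined.
struct gnu { };

void test01(std::ostream& out)
{
  gnu obj;
  out << obj;   // line 12: ill-formed, no operator<< for gnu
}
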
More examples can be found in the libstdc++-v3/testsuite/*/*.cc files.


// 3: Test harness notes, invocation, and debugging.
Configuring the dejagnu harness to work with libstdc++-v3 in a cross
compilation environment has been maddening. However, it does work now,
on a variety of platforms, including Solaris, Linux, and Cygwin.

To debug the test harness during runs, try invoking with

make check-target-libstdc++-v3 RUNTESTFLAGS="-v"
or
make check-target-libstdc++-v3 RUNTESTFLAGS="-v -v"

There are two ways to run on a simulator: set up DEJAGNU to point to a
specially crafted site.exp, or pass down --target_board flags.

Example flags to pass down for various embedded builds are as follows:

--target=powerpc-eabism (libgloss/sim)
make check-target-libstdc++-v3 RUNTESTFLAGS="--target_board=powerpc-sim"

--target=calmrisc32 (libgloss/sid)
make check-target-libstdc++-v3 RUNTESTFLAGS="--target_board=calmrisc32-sid"

--target=xscale-elf (newlib/sim)
make check-target-libstdc++-v3 RUNTESTFLAGS="--target_board=arm-sim"


// 4: Future plans, to be done
Shared runs need to be implemented for targets that support shared
libraries.

Diffing of expected output to standard streams needs to be finished off.

The V3 testing framework supports, or will eventually support,
additional keywords for the purpose of easing the job of writing
testcases. All V3 keywords are of the form @xxx@. Current plans for
supported keywords include:

 @require@ <files>
   The existence of <files> is essential for the test to complete
   successfully. For example, a testcase foo.C using bar.baz as an
   input file could say
     // @require@ bar.baz
   The special variable % stands for the rootname, i.e. the
   file name without its `.C' extension. Example of use (taken
   verbatim from 27_io/filebuf.cc):
     // @require@ %-*.tst %-*.txt

 @diff@ <first-list> <second-list>
   After the testcase compiles and runs successfully, diff
   <first-list> against <second-list>; these lists should have the
   same length. The test fails if diff returns non-zero for a pair of
   files.

Current testing problems with cygwin-hosted tools:

There are two known problems which I have not addressed. The first is
that when testing cygwin-hosted tools from the unix build directory,
the harness does the wrong thing when building the wrapper program
(testglue.c), because host and target are the same in site.exp (host
and target are the same from the perspective of the target libraries).

Problem number two is a little more annoying. In order to make v3
testing work on Windows, I had to tell dejagnu to copy the
debug_assert.h file over to the remote host and then set the includes
to -I./. This is how all the other tests like this are done, so I
didn't think much of it. However, this had some unfortunate results,
because gcc has a testcase called "limits" and C++ has an include file
called "limits". The gcc "limits" binary was in the temporary
directory when the v3 tests were being built, and as a result the gcc
"limits" binary was being #included rather than the intended header.
The only way to fix this is to go through the testsuites and make sure
binaries are deleted on the remote host when testing is done with
them. That is a lot more work than I want to do, so I worked around it
by cleaning out D:\kermit on compsognathus and rerunning the tests.