Things to do                                    Automated Testing Framework
===========================================================================


Last revised: November 30th, 2010


This document includes the list of things that need to be done in ATF that
are most requested by the users. This information used to be available in
an ad-hoc bug tracker but that proved to be a bad idea. I have collected
all the worthy comments here.

Please note that most work these days is going into Kyua (see
http://code.google.com/p/kyua/). The ideas listed here apply to the
components of ATF that have *not* been migrated to the new codebase yet.
For bug reports or ideas that apply to the components that have already
been migrated, please use the bug tracker at the URL above. Similarly,
whenever a component is migrated, the ideas in this file should be
revised and migrated to the new bug tracker where appropriate.


---------------------------------------------------------------------------
Add build-time checks to atf-sh

The 0.7 release introduced build-time tests to atf-c and atf-c++, but not
to atf-sh. Expose the functionality to the shell interface.

This will probably require writing an atf-build utility that exposes the C
code and can be called from the shell.

---------------------------------------------------------------------------
Revisit what to do when an Atffile lists a non-existent file

---------------------------------------------------------------------------
Add ATF_CHECK* versions to atf-c++ to support non-fatal tests

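The distinction this refers to, sketched with the atf-c macros as the
assumed model (the exact atf-c++ spellings are not decided by this item):

    #include <string.h>

    #include <atf-c.h>

    ATF_TC(nonfatal_example);
    ATF_TC_HEAD(nonfatal_example, tc)
    {
        atf_tc_set_md_var(tc, "descr", "Non-fatal vs fatal checks");
    }
    ATF_TC_BODY(nonfatal_example, tc)
    {
        ATF_CHECK(strlen("abc") == 3);   /* Recorded on failure; body goes on */
        ATF_REQUIRE(strlen("abc") == 3); /* Aborts the test case on failure */
    }

    ATF_TP_ADD_TCS(tp)
    {
        ATF_TP_ADD_TC(tp, nonfatal_example);
        return atf_no_error();
    }
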
---------------------------------------------------------------------------
Implement race-condition tests

gcooper:

I would think that stress/negative tests would be of more value than race
condition tests (they're similar, but not exactly the same in my mind).

In particular,

1. Feed through as much data as possible to determine where reporting
   breaks down.
2. Feed through data quickly and terminate ASAP. The data should be
   captured. Terminate child applications with unexpected exit codes and
   signals (in particular, SIGCHLD, SIGPIPE, exit codes that terminate,
   etc).
3. Open up a file descriptor in the test application; don't close the file
   descriptor.
4. fork(2) a process; don't wait(2) for the application to complete.

There are other scenarios that could be exercised, but these are the ones
I could think of off the top of my head.
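
A minimal atf-c sketch of scenarios 3 and 4 above (leak a file descriptor,
fork(2) without ever wait(2)ing); the test case name and descriptions are
made up for illustration:

    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>

    #include <atf-c.h>

    ATF_TC(leak_fd_and_child);
    ATF_TC_HEAD(leak_fd_and_child, tc)
    {
        atf_tc_set_md_var(tc, "descr", "Leaks an fd and a child process");
    }
    ATF_TC_BODY(leak_fd_and_child, tc)
    {
        /* Scenario 3: open a descriptor and never close it. */
        int fd = open("/dev/null", O_RDONLY);
        ATF_REQUIRE(fd != -1);

        /* Scenario 4: fork and never wait for the child. */
        pid_t pid = fork();
        ATF_REQUIRE(pid != -1);
        if (pid == 0)
            _exit(0);

        /* Both resources are intentionally left dangling at body exit. */
    }

    ATF_TP_ADD_TCS(tp)
    {
        ATF_TP_ADD_TC(tp, leak_fd_and_child);
        return atf_no_error();
    }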

--

jmmv:

1. The thing is: how do you express any of this in a portable/abstract
   interface? How do you express that a test case "receives data"? What
   does that exactly mean? I don't think the framework should care about
   this: each test should be free to decide where its data is and how to
   deal with it.

2. Ditto.

3. Not sure I understand your request, but testing for "unexpected exit
   codes" is already supported. See wiki:DesignXFail for the feature
   design details.

4. What's the problem with this case? The test case exits right away after
   terminating the execution of its body; any open file descriptors,
   leaked memory, etc. die with it.

5. forking and not waiting for a subprocess was a problem already
   addressed.

I kinda have an idea of what Antti means with "race condition tests", but
every time I have tried to describe my understanding of matters I seem to
be wrong. It would be nice to have a clear description of what this
involves; in particular, what are the expectations from the framework and
how should the feature be exposed.

As of now, what I understand by "race condition test" is: a test case that
exercises a race condition. The test case may finish without triggering
the race, in which case it just exits with a successful status.
Otherwise, if the race triggers, the test case gets stuck and times out.
The result should be reported as an "expected failure" distinct from a
plain timeout.
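
A rough sketch of how something close to this could be written with the
expectations API mentioned in wiki:DesignXFail. This assumes the
atf_tc_expect_timeout/atf_tc_expect_pass calls; trigger_the_race() is a
made-up placeholder, and whether this matches the exact reporting wanted
above is an open question:

    #include <atf-c.h>

    static void
    trigger_the_race(void)
    {
        /* Placeholder: the real body would provoke the race here and
         * would hang (and thus time out) if the race fires. */
    }

    ATF_TC(race_example);
    ATF_TC_HEAD(race_example, tc)
    {
        atf_tc_set_md_var(tc, "descr", "Exercises a known race condition");
    }
    ATF_TC_BODY(race_example, tc)
    {
        /* If the race triggers, the test hangs and the resulting timeout
         * is reported as an expected failure rather than a plain one. */
        atf_tc_expect_timeout("Known race in the code under test");

        trigger_the_race();

        /* The race did not trigger this time; report a normal pass. */
        atf_tc_expect_pass();
    }

    ATF_TP_ADD_TCS(tp)
    {
        ATF_TP_ADD_TC(tp, race_example);
        return atf_no_error();
    }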

--

pooka:

Yup. Plus some atf-wide mechanism for the operator to supply some kind of
guideline for whether the test should try to trigger the race for a
second or for an hour.

--

jmmv:

Alright. While mocking up some code for this, I think that your two
requests are complementary.

On the one hand, when you are talking about a "race condition" test you
really mean an "expected race condition" test. Correct? If so, we need to
extend the xfail mechanism to add one more case, which is to report any
failures as a race condition error and, if there is no failure, report the
test as successful.

On the other hand, the atf-wide mechanism to control how long the test
should run for can be thought of as a "stress test" mechanism. I.e. run
this test for X time / iterations and report its results regularly
without involving xfail at all.

So, with this in mind:

* For a test that triggers an unfixed race condition, you set xfail to
  race mode and define the test as a stress test. Any failures are
  reported as expected failures.

* For a test that verifies a supposedly-fixed race condition, you do *not*
  set xfail to race mode, and only set the test to stress test. Any
  failures are reported as real failures.

These stress test cases implement a single iteration of the test and
atf-run is in charge of running the test several times, stopping on the
first failure.

Does that make sense?
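
Purely as a strawman for the proposal above, the interface might end up
looking something like the sketch below. Note that neither
atf_tc_expect_race() nor the "stress.iterations" property exist in ATF;
both names are hypothetical and are used only to illustrate the idea:

    #include <atf-c.h>

    ATF_TC(unfixed_race);
    ATF_TC_HEAD(unfixed_race, tc)
    {
        atf_tc_set_md_var(tc, "descr", "One iteration of an unfixed race");
        /* Hypothetical property: ask atf-run to repeat the body up to N
         * times, stopping on the first failure. */
        atf_tc_set_md_var(tc, "stress.iterations", "10000");
    }
    ATF_TC_BODY(unfixed_race, tc)
    {
        /* Hypothetical xfail extension: any failure below would be
         * reported as an expected race condition, not a plain failure. */
        atf_tc_expect_race("Race not fixed yet");

        /* ... one iteration of the race-provoking code would go here ... */
    }

    ATF_TP_ADD_TCS(tp)
    {
        ATF_TP_ADD_TC(tp, unfixed_race);
        return atf_no_error();
    }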

---------------------------------------------------------------------------
Implement ATF_REQUIRE_ERRNO

pooka:

Most of the lines in tests against system functionality are:

    if (syscall(args) == -1)
        atf_tc_fail_errno("flop");

Some shorthand would be helpful, like ATF_REQUIRE_ERRNO(syscall(args)).
Also, a variant which allows arbitrary return value checks (e.g. "!= 0" or
"< 124" or "!= size") would be nice.
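
One possible shape for the requested shorthand, sketched as a local macro
(MY_REQUIRE_ERRNO and the test case are made up for illustration; the
actual name and arguments were never settled here). It treats a return
value of -1 as failure and reports errno via atf_tc_fail_errno():

    #include <fcntl.h>
    #include <unistd.h>

    #include <atf-c.h>

    /* Illustrative only; not an existing atf-c macro. */
    #define MY_REQUIRE_ERRNO(expr) \
        do { \
            if ((expr) == -1) \
                atf_tc_fail_errno("%s failed", #expr); \
        } while (0)

    ATF_TC(open_devnull);
    ATF_TC_HEAD(open_devnull, tc)
    {
        atf_tc_set_md_var(tc, "descr", "Example use of the sketched macro");
    }
    ATF_TC_BODY(open_devnull, tc)
    {
        int fd;

        MY_REQUIRE_ERRNO(fd = open("/dev/null", O_RDONLY));
        MY_REQUIRE_ERRNO(close(fd));
    }

    ATF_TP_ADD_TCS(tp)
    {
        ATF_TP_ADD_TC(tp, open_devnull);
        return atf_no_error();
    }

The variant with arbitrary return value checks could take the failure
condition as an extra argument, which is where the point below about
different failure conventions comes in.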

--

gcooper:

There's a problem with this request; not all functions fail in the same
way ... in particular, compare the pthread family of functions (which
return the error code directly) with many native syscalls. Furthermore,
compare some fcntl-like syscalls with other syscalls. A one-size-fits-all
solution may not be wise in this case, so I think that the problem
statement needs to be better defined, because the above request is too
loose.

FWIW, there's also a TEST macro in LTP, which tests for non-zero status
and sets an appropriate set of global variables for errnos and return
codes, respectively. It was a good idea, but has been mostly abandoned
because it's too difficult to define success and failure in a universal
manner, so I think that we need to be careful with what's implemented in
ATF so as not to repeat the mistakes that others have made.

--

jmmv:

I think you've got a good point.

This was mostly intended to simplify the handling of the stupid errno
global variable. I think this is valuable to have, but maybe the
macro/function name should be different because _ERRNO can be confusing.
Probably something like an ATF_CHECK_LIBC / ATF_CHECK_PTHREAD approach
would be more flexible and simple.
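
A rough sketch of how the two could differ, purely to illustrate the idea
(neither macro exists in atf-c; the spellings are made up, and a real
ATF_CHECK_* variant would presumably record a non-fatal failure rather
than fail outright as these sketches do):

    #include <string.h>

    #include <atf-c.h>

    /* Illustrative only: libc-style calls fail with -1 and set errno. */
    #define MY_CHECK_LIBC(call) \
        do { \
            if ((call) == -1) \
                atf_tc_fail_errno("%s failed", #call); \
        } while (0)

    /* Illustrative only: pthread-style calls return the error code. */
    #define MY_CHECK_PTHREAD(call) \
        do { \
            int _err = (call); \
            if (_err != 0) \
                atf_tc_fail("%s failed: %s", #call, strerror(_err)); \
        } while (0)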


===========================================================================
vim: filetype=text:textwidth=75:expandtab:shiftwidth=2:softtabstop=2