Testing
=======

Test Suite Structure
--------------------

The LLDB test suite consists of three different kinds of test:

* **Unit tests**: Written in C++ using the googletest unit testing library.
* **Shell tests**: Integration tests that test the debugger through the command
  line. These tests interact with the debugger either through the command line
  driver or through ``lldb-test``, a tool that exposes the internal
  data structures in an easy-to-parse way for testing. Most people will know
  these as *lit tests* in LLVM, although lit is the test driver and ShellTest
  is the test format that uses ``RUN:`` lines. `FileCheck
  <https://llvm.org/docs/CommandGuide/FileCheck.html>`_ is used to verify
  the output.
* **API tests**: Integration tests that interact with the debugger through the
  SB API. These are written in Python and use LLDB's ``dotest.py`` testing
  framework on top of Python's `unittest
  <https://docs.python.org/3/library/unittest.html>`_.

All three test suites use ``lit`` (the `LLVM Integrated Tester
<https://llvm.org/docs/CommandGuide/lit.html>`_) as the test driver. The test
suites can be run as a whole or separately.


Unit Tests
``````````

Unit tests are located under ``lldb/unittests``. If it's possible to test
something in isolation or as a single unit, you should make it a unit test.

Often you need instances of the core objects such as a debugger, target or
process in order to test something meaningful. We already have a handful of
tests with the necessary boilerplate, but this is something we could
abstract away and make more user friendly.

Shell Tests
```````````

Shell tests are located under ``lldb/test/Shell``. These tests are generally
built around checking the output of ``lldb`` (the command line driver) or
``lldb-test`` using ``FileCheck``. Shell tests are generally small and fast to
write because they require little boilerplate.

``lldb-test`` is a relatively new addition to the test suite. It was the first
tool added specifically for testing. Since then it has been continuously
extended with new subcommands, improving our test coverage. Among other
things, you can use it to query lldb about symbol files, object files and
breakpoints.

Obviously shell tests are great for testing the command line driver itself or
the subcomponents already exposed by ``lldb-test``. But when it comes to LLDB's
vast functionality, most things can be tested both through the driver as well
as the Python API. For example, to test setting a breakpoint, you could do it
from the command line driver with ``b main`` or you could use the SB API and do
something like ``target.BreakpointCreateByName`` [#]_.

A good rule of thumb is to prefer shell tests when what is being tested is
relatively simple. Expressivity is limited compared to the API tests, which
means that you have to have a well-defined test scenario that you can easily
match with ``FileCheck``. Though shell tests can be run remotely, behavior
specific to remote debugging must be tested with API tests instead.

Another thing to consider are the binaries being debugged, which we call
inferiors. For shell tests, they have to be relatively simple. The
``dotest.py`` test framework has extensive support for complex build scenarios
and different variants, which is described in more detail below, while shell
tests are limited to single lines of shell commands with compiler and linker
invocations.

On the same topic, another interesting aspect of the shell tests is that you
can often get away with a broken or incomplete binary, whereas the API tests
almost always require a fully functional executable. This enables testing of
(some) aspects of handling binaries with non-native architectures or
operating systems.

Finally, the shell tests always run in batch mode. You start with some input
and the test verifies the output. The debugger can be sensitive to its
environment, such as the platform it runs on, and it can be hard to express
that the same test might behave slightly differently on macOS and Linux.
Additionally, the debugger is an interactive tool, and shell tests provide
no good way of testing interactive aspects such as tab completion.

API Tests
`````````

API tests are located under ``lldb/test/API``. They are run with
``dotest.py``. Tests are written in Python and test binaries (inferiors) are
compiled with Make. The majority of API tests are end-to-end tests that compile
programs from source, run them, and debug the processes.

As mentioned before, ``dotest.py`` is LLDB's testing framework. The
implementation is located under ``lldb/packages/Python/lldbsuite``. We have
several extensions and custom test primitives on top of what's offered by
`unittest <https://docs.python.org/3/library/unittest.html>`_. Those can be
found in
`lldbtest.py <https://github.com/llvm/llvm-project/blob/main/lldb/packages/Python/lldbsuite/test/lldbtest.py>`_.

Below is the directory layout of the `example API test
<https://github.com/llvm/llvm-project/tree/main/lldb/test/API/sample_test>`_.
The test directory will always contain a Python file, starting with ``Test``.
Most of the tests are structured as a binary being debugged, so there will be
one or more source files and a ``Makefile``.

::

  sample_test
  ├── Makefile
  ├── TestSampleTest.py
  └── main.c

Let's start with the Python test file. Every test is its own class and can have
one or more test methods that start with ``test_``. Many tests define
multiple test methods and share a bunch of common code. For example, for a
hypothetical test that makes sure we can set breakpoints, we might have one
test method that ensures we can set a breakpoint by address, another that sets
a breakpoint by name, and another that sets the same breakpoint by file and
line number. The setup, teardown and everything else other than setting the
breakpoint could be shared.
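
This structure can be sketched with plain ``unittest``. This is a simplified
illustration only: real API tests derive from ``lldbsuite``'s ``TestBase`` and
call the SB API, and the class and method names below are hypothetical.

.. code-block:: python

    import unittest

    # Hypothetical sketch: a real API test would derive from lldbsuite's
    # TestBase and use the SB API. Plain unittest stands in here to show the
    # shape: shared setup plus several test_ methods.
    class TestBreakpointVariants(unittest.TestCase):
        def setUp(self):
            # Shared setup, e.g. building the inferior and creating a target.
            self.binary = "a.out"

        def test_break_by_name(self):
            # A real test would call target.BreakpointCreateByName("main").
            self.assertEqual(self.binary, "a.out")

        def test_break_by_file_and_line(self):
            # A real test would call
            # target.BreakpointCreateByLocation("main.c", line).
            self.assertEqual(self.binary, "a.out")

Each ``test_`` method is reported as its own test, while ``setUp`` runs once
before every method, which is where the shared code naturally lives.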

Our testing framework also has a bunch of utilities that abstract common
operations, such as creating targets, setting breakpoints etc. When code is
shared across tests, we extract it into a utility in ``lldbutil``. It's always
worth taking a look at `lldbutil
<https://github.com/llvm/llvm-project/blob/main/lldb/packages/Python/lldbsuite/test/lldbutil.py>`_
to see if there's a utility to simplify some of the testing boilerplate. Not
every existing test uses these utilities, so keep this in mind when looking at
an existing test for inspiration.

It's possible to skip or `XFAIL
<https://ftp.gnu.org/old-gnu/Manuals/dejagnu-1.3/html_node/dejagnu_6.html>`_
tests using decorators. You'll see them a lot. The debugger can be sensitive to
things like the architecture, the host and target platform, the compiler
version etc. LLDB comes with a range of predefined decorators for these
configurations.

::

  @expectedFailureAll(archs=["aarch64"], oslist=["linux"])

Another great thing about these decorators is that they're very easy to
extend; it's even possible to define a function in a test case that determines
whether the test should be run or not.

::

  @skipTestIfFn(checking_function_name)
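
To illustrate the mechanism, a predicate-based skip decorator can be written
in a few lines of plain Python. This is not LLDB's actual implementation
(which lives in the test framework's ``decorators.py``); the names below are
made up.

.. code-block:: python

    import platform
    import unittest

    # Simplified stand-in for LLDB's @skipTestIfFn: the predicate returns a
    # reason string when the test should be skipped, or None to run it.
    def skip_test_if_fn(predicate):
        def decorator(test_method):
            def wrapper(self, *args, **kwargs):
                reason = predicate()
                if reason is not None:
                    raise unittest.SkipTest(reason)
                return test_method(self, *args, **kwargs)
            return wrapper
        return decorator

    def not_on_linux():
        if platform.system() != "Linux":
            return "test requires Linux"
        return None

    class ExampleTest(unittest.TestCase):
        @skip_test_if_fn(not_on_linux)
        def test_linux_only_behavior(self):
            self.assertTrue(True)

Because the predicate runs at test time, it can inspect anything about the
environment, which is what makes these decorators so easy to extend.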

In addition to providing a lot more flexibility when it comes to writing the
test, the API tests also allow for much more complex scenarios when it comes
to building inferiors. Every test has its own ``Makefile``, most of them only
a few lines long. A shared ``Makefile`` (``Makefile.rules``) with about a
thousand lines of rules takes care of most if not all of the boilerplate,
while individual Makefiles can be used to build more advanced tests.

Here's an example of a simple ``Makefile`` used by the example test.

::

  C_SOURCES := main.c
  CFLAGS_EXTRAS := -std=c99

  include Makefile.rules

Finding the right variables to set can be tricky. You can always take a look at
`Makefile.rules <https://github.com/llvm/llvm-project/blob/main/lldb/packages/Python/lldbsuite/test/make/Makefile.rules>`_
but often it's easier to find an existing ``Makefile`` that does something
similar to what you want to do.
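
For instance, a hypothetical test whose inferior is a C++ program that needs
an extra preprocessor define might use a ``Makefile`` like the following
(``CXX_SOURCES`` and ``CXXFLAGS_EXTRAS`` are picked up by ``Makefile.rules``;
the define itself is made up for illustration):

::

  CXX_SOURCES := main.cpp
  CXXFLAGS_EXTRAS := -std=c++17 -DHYPOTHETICAL_TEST_FLAG

  include Makefile.rules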

Another thing this enables is having different variants for the same test
case. By default, we run every test for two debug info formats, once with
DWARF from the object files and once with a dSYM on macOS or split
DWARF (DWO) on Linux. But there are many more things we can test
that are orthogonal to the test itself. On GreenDragon we have a matrix bot
that runs the test suite under different configurations, with older host
compilers and different DWARF versions.

As you can imagine, this quickly leads to a combinatorial explosion in the
number of variants. It's very tempting to add more variants because it's an
easy way to increase test coverage, but it doesn't scale. It's easy to set up,
but it increases the runtime of the tests and has a large ongoing cost.

The test variants are most useful when developing a larger feature (e.g. support
for a new DWARF version). The test suite contains a large number of fairly
generic tests, so running the test suite with the feature enabled is a good way
to gain confidence that you haven't missed an important aspect. However, this
generic nature makes them poor regression tests. Because it's not clear what a
specific test covers, a random modification to the test case can make it start
(or stop) testing a completely different part of your feature. And since these
tests tend to look very similar, it's easy for a simple bug to cause hundreds of
tests to fail in the same way.

For this reason, we recommend using test variants only while developing a new
feature. This can often be done by running the test suite with different
arguments -- without any modifications to the code. You can create a focused
test for any bug found that way. Often, there will be many tests failing, but a
lot of them will have the same root cause. These tests will be easier to debug
and will not put undue burden on all other bots and developers.

In conclusion, you'll want to opt for an API test to test the API itself or
when you need the expressivity, either for the test case itself or for the
program being debugged. The fact that the API tests work with different
variants means that more general tests should be API tests, so that they can
be run against the different variants.

Guidelines for API tests
^^^^^^^^^^^^^^^^^^^^^^^^

API tests are expected to be fast, reliable and maintainable. To achieve this
goal, API tests should conform to the following guidelines in addition to
normal good testing practices.

**Don't unnecessarily launch the test executable.**
    Launching a process and running to a breakpoint can often be the most
    expensive part of a test and should be avoided if possible. A large part
    of LLDB's functionality is available directly after creating an `SBTarget`
    of the test executable.

    The part of the SB API that can be tested with just a target includes
    everything that represents information about the executable and its
    debug information (e.g., `SBTarget`, `SBModule`, `SBSymbolContext`,
    `SBFunction`, `SBInstruction`, `SBCompileUnit`, etc.). For test executables
    written in languages with a type system that is mostly defined at compile
    time (e.g., C and C++) there is also usually no process necessary to test
    the `SBType`-related parts of the API. With those languages it's also
    possible to test `SBValue` by running expressions with
    `SBTarget.EvaluateExpression` or the ``expect_expr`` testing utility.

    Functionality that always requires a running process is everything that
    tests the `SBProcess`, `SBThread`, and `SBFrame` classes. The same is true
    for tests that exercise breakpoints, watchpoints and sanitizers.
    Languages such as Objective-C that have a dependency on a runtime
    environment also always require a running process.

**Don't unnecessarily include system headers in test sources.**
    Including external headers slows down the compilation of the test executable
    and it makes reproducing test failures on other operating systems or
    configurations harder.

**Avoid specifying test-specific compiler flags when including system headers.**
    If a test requires including a system header (e.g., a test for a libc++
    formatter includes a libc++ header), try to avoid specifying custom compiler
    flags if possible. Certain debug information formats such as ``gmodules``
    use a cache that is shared between all API tests and that contains
    precompiled system headers. If you add or remove a specific compiler flag
    in your test (e.g., adding ``-DFOO`` to the ``Makefile`` or ``self.build``
    arguments), then the test will not use the shared precompiled header cache
    and expensively recompile all system headers from scratch. If you depend on
    a specific compiler flag for the test, you can avoid this issue by either
    removing all system header includes or decorating the test function with
    ``@no_debug_info_test`` (which will avoid running all debug information
    variants including ``gmodules``).

**Test programs should be kept simple.**
    Test executables should do the minimum amount of work to bring the process
    into the state that is required for the test. Simulating a 'real' program
    that actually tries to do some useful task rarely helps with catching bugs
    and makes the test much harder to debug and maintain. The test programs
    should always be deterministic (i.e., do not generate and check against
    random test values).

**Identifiers in tests should be simple and descriptive.**
    Often test programs need to declare functions and classes which require
    choosing some form of identifier for them. These identifiers should always
    either be kept simple for small tests (e.g., ``A``, ``B``, ...) or have some
    descriptive name (e.g., ``ClassWithTailPadding``, ``inlined_func``, ...).
    Never choose identifiers that are already used anywhere else in LLVM or
    other programs (e.g., don't name a class ``VirtualFileSystem``, a function
    ``llvm_unreachable``, or a namespace ``rapidxml``) as this will mislead
    people ``grep``'ing the LLVM repository for those strings.

**Prefer LLDB testing utilities over directly working with the SB API.**
    The ``lldbutil`` module and the ``TestBase`` class come with a large amount
    of utility functions that can do common test setup tasks (e.g., starting a
    test executable and running the process to a breakpoint). Using these
    functions not only keeps the test shorter and free of duplicated code, but
    they also follow best test suite practices and usually give much clearer
    error messages if something goes wrong. The test utilities also contain
    custom asserts and checks that should preferably be used (e.g.
    ``self.assertSuccess``).

**Prefer calling the SB API over checking command output.**
    Avoid writing your tests on top of ``self.expect(...)`` calls that check
    the output of LLDB commands and instead try calling into the SB API. Relying
    on LLDB commands makes changing (and improving) the output/syntax of
    commands harder and the resulting tests are often prone to accepting
    incorrect test results. Especially improved error messages that contain
    more information might cause these ``self.expect`` calls to unintentionally
    find the required ``substrs``. For example, the following ``self.expect``
    check will unexpectedly pass if it's run as the first expression in a test:

::

    self.expect("expr 2 + 2", substrs=["0"])

When running the same command in LLDB, you can see the reason for the
unexpected success: the '0' is found in the name of the implicitly created
result variable:

::

    (lldb) expr 2 + 2
    (int) $0 = 4
           ^ The '0' substring is found here.

A better way to write the test above would be to use LLDB's testing function
``expect_expr``, which will only pass if the expression produces a value of 0:

::

    self.expect_expr("2 + 2", result_value="0")

**Prefer using specific asserts over the generic assertTrue/assertFalse.**
    The ``self.assertTrue``/``self.assertFalse`` functions should always be your
    last option as they give non-descriptive error messages. The test class has
    several expressive asserts such as ``self.assertIn`` that automatically
    generate an explanation of how the received values differ from the expected
    ones. Check the documentation of Python's ``unittest`` module to see what
    asserts are available. LLDB also has a few custom asserts that are tailored
    to our own data types.

+-----------------------------------------------+-----------------------------------------------------------------+
| **Assert**                                    | **Description**                                                 |
+-----------------------------------------------+-----------------------------------------------------------------+
| ``assertSuccess``                             | Assert that an ``lldb.SBError`` is in the "success" state.      |
+-----------------------------------------------+-----------------------------------------------------------------+
| ``assertState``                               | Assert that two states (``lldb.eState*``) are equal.            |
+-----------------------------------------------+-----------------------------------------------------------------+
| ``assertStopReason``                          | Assert that two stop reasons (``lldb.eStopReason*``) are equal. |
+-----------------------------------------------+-----------------------------------------------------------------+

    If you can't find a specific assert that fits your needs and you fall back
    to a generic assert, make sure you put useful information into the assert's
    ``msg`` argument that helps explain the failure.

::

    # Bad. Will print a generic error such as 'False is not True'.
    self.assertTrue(expected_string in list_of_results)
    # Good. Will print expected_string and the contents of list_of_results.
    self.assertIn(expected_string, list_of_results)

**Do not use hard-coded line numbers in your test case.**

Instead, try to tag the line with some distinguishing pattern and use the
function ``line_number()`` defined in ``lldbtest.py``, which takes a filename
and a string to match as arguments and returns the line number.

As an example, take a look at
``test/API/functionalities/breakpoint/breakpoint_conditions/main.c`` which has
these two lines:

.. code-block:: c

        return c(val); // Find the line number of c's parent call here.

and

.. code-block:: c

    return val + 3; // Find the line number of function "c" here.

The Python test case ``TestBreakpointConditions.py`` uses the comment strings
to find the line numbers during ``setUp(self)`` and uses them later on to
verify that the correct breakpoint is being stopped on and that its parent
frame also has the correct line number, as intended through the breakpoint
condition.
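
The idea behind ``line_number()`` is simple enough to sketch. The following is
a simplified reimplementation for illustration, not the actual helper from
``lldbtest.py``:

.. code-block:: python

    def line_number(filename, string_to_match):
        """Return the 1-based number of the first line in filename that
        contains string_to_match (simplified illustration)."""
        with open(filename) as f:
            for number, line in enumerate(f, start=1):
                if string_to_match in line:
                    return number
        raise Exception("%r not found in %s" % (string_to_match, filename))

A test can then look up the breakpoint location with something like
``line_number("main.c", '// Find the line number of function "c" here.')``
instead of hard-coding the number, so the test keeps working when the source
file is edited.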

**Take advantage of the unittest framework's decorator features.**

These features can be used to mark your test class or method as
platform-specific, compiler-specific, or version-specific.

As an example, take a look at ``test/API/lang/c/forward/TestForwardDeclaration.py``
which has these lines:

.. code-block:: python

    @no_debug_info_test
    @skipIfDarwin
    @skipIf(compiler=no_match("clang"))
    @skipIf(compiler_version=["<", "8.0"])
    @expectedFailureAll(oslist=["windows"])
    def test_debug_names(self):
        """Test that we are able to find complete types when using DWARF v5
        accelerator tables"""
        self.do_test(dict(CFLAGS_EXTRAS="-gdwarf-5 -gpubnames"))

This tells the test harness to skip the test on Darwin, to run it only with
clang version 8.0 or above, and to expect it to fail on Windows.

**Class-wise cleanup after yourself.**

``TestBase.tearDownClass(cls)`` provides a mechanism to invoke the
platform-specific cleanup after finishing with a test class. A test class can
have more than one test method, so the ``tearDownClass(cls)`` method gets run
after all the test methods have been executed by the test harness.

The default cleanup action performed by the
``packages/Python/lldbsuite/test/lldbtest.py`` module invokes the
``make clean`` shell command.

If this default cleanup is not enough, an individual class can provide an
extra cleanup hook with a class method named ``classCleanup``, for example, in
``test/API/terminal/TestSTTYBeforeAndAfter.py``:

.. code-block:: python

    @classmethod
    def classCleanup(cls):
        """Cleanup the test byproducts."""
        cls.RemoveTempFile("child_send1.txt")


The ``child_send1.txt`` file gets generated during the test run, so it makes
sense to spell out the cleanup action explicitly in the same
``TestSTTYBeforeAndAfter.py`` file, instead of adding it to the default
cleanup action, which is meant to remove the intermediate and ``a.out`` files.

CI
--

LLVM Buildbot is the place where volunteers provide machines for building and
testing. Anyone can `add a buildbot for LLDB <https://llvm.org/docs/HowToAddABuilder.html>`_.

An overview of all LLDB builders can be found here:

`https://lab.llvm.org/buildbot/#/builders?tags=lldb <https://lab.llvm.org/buildbot/#/builders?tags=lldb>`_

Building and testing for macOS uses a different platform called GreenDragon. It
has a dedicated tab for LLDB: `https://green.lab.llvm.org/job/llvm.org/view/LLDB/
<https://green.lab.llvm.org/job/llvm.org/view/LLDB/>`_


Running The Tests
-----------------

.. note::

   On Windows any invocations of python should be replaced with python_d, the
   debug interpreter, when running the test suite against a debug version of
   LLDB.

.. note::

   On NetBSD you must export ``LD_LIBRARY_PATH=$PWD/lib`` in your environment.
   This is due to lack of the ``$ORIGIN`` linker feature.

Running the Full Test Suite
```````````````````````````

The easiest way to run the LLDB test suite is to use the ``check-lldb`` build
target.

::

   $ ninja check-lldb

Changing Test Suite Options
```````````````````````````

By default, the ``check-lldb`` target builds the test programs with the same
compiler that was used to build LLDB. To build the tests with a different
compiler, you can set the ``LLDB_TEST_COMPILER`` CMake variable.

You can also add to the test runner options by setting the
``LLDB_TEST_USER_ARGS`` CMake variable. This variable uses ``;`` to separate
items which must be separate parts of the runner's command line.

It is possible to customize the architecture of the test binaries and the
compiler used by appending ``-A`` and ``-C`` options respectively. For
example, to test LLDB against 32-bit binaries built with a custom version of
clang, do:

::

   $ cmake -DLLDB_TEST_USER_ARGS="-A;i386;-C;/path/to/custom/clang" -G Ninja
   $ ninja check-lldb

Note that multiple ``-A`` and ``-C`` flags can be specified to
``LLDB_TEST_USER_ARGS``.

If you want to change the LLDB settings that tests run with, you can set the
``--setting`` option of the test runner via this same variable. For example,
``--setting;target.disable-aslr=true``.

For a full list of test runner options, see
``<build-dir>/bin/lldb-dotest --help``.

Running a Single Test Suite
```````````````````````````

Each test suite can be run separately, similar to running the whole test suite
with ``check-lldb``.

* Use ``check-lldb-unit`` to run just the unit tests.
* Use ``check-lldb-api`` to run just the SB API tests.
* Use ``check-lldb-shell`` to run just the shell tests.

You can run specific subdirectories by appending the directory name to the
target. For example, to run all the tests in ``ObjectFile``, you can use the
target ``check-lldb-shell-objectfile``. However, because the unit tests and API
tests don't actually live under ``lldb/test``, this convenience is only
available for the shell tests.

Running a Single Test
`````````````````````

The recommended way to run a single test is by invoking the lit driver with a
filter. This ensures that the test is run with the same configuration as when
run as part of a test suite.

::

   $ ./bin/llvm-lit -sv <llvm-project-root>/lldb/test --filter <test>


Because lit automatically scans a directory for tests, it's also possible to
pass a subdirectory to run a specific subset of the tests.

::

   $ ./bin/llvm-lit -sv <llvm-project-root>/lldb/test/Shell/Commands/CommandScriptImmediateOutput


For the SB API tests it is possible to forward arguments to ``dotest.py`` by
passing ``--param`` to lit and setting a value for ``dotest-args``.

::

   $ ./bin/llvm-lit -sv <llvm-project-root>/lldb/test --param dotest-args='-C gcc'


Below is an overview of running individual tests in the unit and API test
suites without going through the lit driver.

Running a Specific Test or Set of Tests: API Tests
``````````````````````````````````````````````````

In addition to running all the LLDB test suites with the ``check-lldb`` CMake
target above, it is possible to run individual LLDB tests. If you have a CMake
build you can use the ``lldb-dotest`` binary, which is a wrapper around
``dotest.py`` that passes all the arguments configured by CMake.

Alternatively, you can use ``dotest.py`` directly, if you want to run a test
one-off with a different configuration.

For example, to run the test cases defined in TestInferiorCrashing.py, run:

::

   $ ./bin/lldb-dotest -p TestInferiorCrashing.py

::

   $ cd $lldb/test
   $ python dotest.py --executable <path-to-lldb> -p TestInferiorCrashing.py ../packages/Python/lldbsuite/test

If the test is not specified by name (e.g. if you leave the ``-p`` argument
off), all tests in that directory will be executed:


::

   $ ./bin/lldb-dotest functionalities/data-formatter

::

   $ python dotest.py --executable <path-to-lldb> functionalities/data-formatter

Many more options are available. To see a list of all of them, run:

::

   $ python dotest.py -h


Running a Specific Test or Set of Tests: Unit Tests
```````````````````````````````````````````````````

The unit tests are simple executables, located in the build directory under
``tools/lldb/unittests``.

To run them, just run the test binary. For example, to run all the Host tests:

::

   $ ./tools/lldb/unittests/Host/HostTests


To run a specific test, pass a filter, for example:

::

   $ ./tools/lldb/unittests/Host/HostTests --gtest_filter=SocketTest.DomainListenConnectAccept


Running the Test Suite Remotely
```````````````````````````````

Running the test suite remotely is similar to the process of running a local
test suite, but there are a few things to keep in mind:

1. You must have the lldb-server running on the remote system, ready to accept
   multiple connections. For more information on how to set up remote
   debugging, see the Remote debugging page.
2. You must tell the test suite how to connect to the remote system. This is
   achieved using the ``LLDB_TEST_PLATFORM_URL`` and
   ``LLDB_TEST_PLATFORM_WORKING_DIR`` flags to cmake, and the
   ``--platform-name`` parameter to ``dotest.py``. These parameters correspond
   to the ``platform select`` and ``platform connect`` LLDB commands. You will
   usually also need to specify the compiler and architecture for the remote
   system.
3. Remote Shell test execution is currently supported only for the Linux
   target platform. It's triggered when ``LLDB_TEST_SYSROOT`` is provided for
   building test sources. It can be disabled by setting
   ``LLDB_TEST_SHELL_DISABLE_REMOTE=On``. Shell tests are not guaranteed to
   pass against a remote target if the compiler being used is other than
   Clang.


Running tests in QEMU System Emulation Environment
``````````````````````````````````````````````````

QEMU can be used to test LLDB in an emulation environment in the absence of
actual hardware. :doc:`/resources/qemu-testing` describes how to set up an
emulation environment using the QEMU helper scripts found in
``llvm-project/lldb/scripts/lldb-test-qemu``. These scripts currently work
with Arm or AArch64, but support for other architectures can be added easily.

Debugging Test Failures
-----------------------

On non-Windows platforms, you can use the ``-d`` option to ``dotest.py``, which
will cause the script to print out the pid of the test and wait for a while
until a debugger is attached. Then run ``lldb -p <pid>`` to attach.

To instead debug a test's Python source, edit the test and insert
``import pdb; pdb.set_trace()`` or ``breakpoint()`` (Python 3 only) at the
point where you want to start debugging. The ``breakpoint()`` command can be
used for any LLDB Python script, not just for API tests.

In addition to pdb's debugging facilities, lldb commands can be executed with
the help of a pdb alias, for example ``lldb bt`` and ``lldb v some_var``. Add
this line to your ``~/.pdbrc``:

::

   alias lldb self.dbg.HandleCommand("%*")

Debugging Test Failures on Windows
``````````````````````````````````

On Windows, it is strongly recommended to use Python Tools for Visual Studio
(PTVS) for debugging test failures. It can seamlessly step between native and
managed code, which is very helpful when you need to step through the test
itself, and then into the LLDB code that backs the operations the test is
performing.

A quick guide to getting started with PTVS is as follows:

#. Install PTVS.
#. Create a Visual Studio Project for the Python code.
    #. Go to File -> New -> Project -> Python -> From Existing Python Code.
    #. Choose llvm/tools/lldb as the directory containing the Python code.
    #. When asked where to save the .pyproj file, choose the folder ``llvm/tools/lldb/pyproj``. This is a special folder that is ignored by the ``.gitignore`` file, since it is not checked in.
#. Set test/dotest.py as the startup file.
#. Make sure there is a Python Environment installed for your distribution. For example, if you installed Python to ``C:\Python35``, PTVS needs to know that this is the interpreter you want to use for running the test suite.
    #. Go to Tools -> Options -> Python Tools -> Environment Options.
    #. Click Add Environment, and enter Python 3.5 Debug for the name. Fill out the values correctly.
#. Configure the project to use this debug interpreter.
    #. Right click the Project node in Solution Explorer.
    #. In the General tab, make sure Python 3.5 Debug is the selected Interpreter.
    #. In Debug/Search Paths, enter the path to your ninja/lib/site-packages directory.
    #. In Debug/Environment Variables, enter ``VCINSTALLDIR=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\``.
    #. If you want to enable mixed mode debugging, check Enable native code debugging (this slows down debugging, so enable it only on an as-needed basis).
#. Set the command line for the test suite to run.
    #. Right click the project in Solution Explorer and choose the Debug tab.
    #. Enter the arguments to dotest.py.
    #. Example command options:

::

   --arch=i686
   # Path to debug lldb.exe
   --executable D:/src/llvmbuild/ninja/bin/lldb.exe
   # Directory to store log files
   -s D:/src/llvmbuild/ninja/lldb-test-traces
   -u CXXFLAGS -u CFLAGS
   # If a test crashes, show JIT debugging dialog.
   --enable-crash-dialog
   # Path to release clang.exe
   -C d:\src\llvmbuild\ninja_release\bin\clang.exe
   # Path to the particular test you want to debug.
   -p TestPaths.py
   # Root of test tree
   D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test

::

   --arch=i686 --executable D:/src/llvmbuild/ninja/bin/lldb.exe -s D:/src/llvmbuild/ninja/lldb-test-traces -u CXXFLAGS -u CFLAGS --enable-crash-dialog -C d:\src\llvmbuild\ninja_release\bin\clang.exe -p TestPaths.py D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test --no-multiprocess

.. [#] `https://lldb.llvm.org/python_reference/lldb.SBTarget-class.html#BreakpointCreateByName <https://lldb.llvm.org/python_reference/lldb.SBTarget-class.html#BreakpointCreateByName>`_
686