Testing
=======

.. contents::
   :local:

Test Suite Structure
--------------------

The LLDB test suite consists of three different kinds of tests:

* **Unit tests**: written in C++ using the googletest unit testing library.
* **Shell tests**: Integration tests that test the debugger through the command
  line. These tests interact with the debugger either through the command line
  driver or through ``lldb-test``, which is a tool that exposes the internal
  data structures in an easy-to-parse way for testing. Most people will know
  these as *lit tests* in LLVM, although lit is the test driver and ShellTest
  is the test format that uses ``RUN:`` lines. `FileCheck
  <https://llvm.org/docs/CommandGuide/FileCheck.html>`_ is used to verify
  the output.
* **API tests**: Integration tests that interact with the debugger through the
  SB API. These are written in Python and use LLDB's ``dotest.py`` testing
  framework on top of Python's `unittest2
  <https://docs.python.org/2/library/unittest.html>`_.

All three test suites use ``lit`` (the `LLVM Integrated Tester
<https://llvm.org/docs/CommandGuide/lit.html>`_) as the test driver. The test
suites can be run as a whole or separately.

Unit Tests
``````````

Unit tests are located under ``lldb/unittests``. If it's possible to test
something in isolation or as a single unit, you should make it a unit test.

Often you need instances of the core objects, such as a debugger, target or
process, in order to test something meaningful. We already have a handful of
tests that contain the necessary boilerplate, but this is something we could
abstract away to make it more user friendly.

Shell Tests
```````````

Shell tests are located under ``lldb/test/Shell``. These tests are generally
built around checking the output of ``lldb`` (the command line driver) or
``lldb-test`` using ``FileCheck``. Shell tests are generally small and fast to
write because they require little boilerplate.

``lldb-test`` is a relatively new addition to the test suite. It was the first
tool added specifically for testing. Since then it has been continuously
extended with new subcommands, improving our test coverage. Among other
things, you can use it to query lldb about symbol files, object files and
breakpoints.

Obviously shell tests are great for testing the command line driver itself or
the subcomponents already exposed by ``lldb-test``. But when it comes to LLDB's
vast functionality, most things can be tested both through the driver as well
as the Python API. For example, to test setting a breakpoint, you could do it
from the command line driver with ``b main`` or you could use the SB API and do
something like ``target.BreakpointCreateByName`` [#]_.

A good rule of thumb is to prefer shell tests when what is being tested is
relatively simple. Expressivity is limited compared to the API tests, which
means that you need a well-defined test scenario that you can easily
match with ``FileCheck``.
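
Concretely, a shell test is a text file whose ``RUN:`` lines are executed by
lit, with the output piped through ``FileCheck``. The sketch below shows the
general shape only; the file name, flags and check pattern are illustrative
rather than copied from a real test in the repository:

```
# hypothetical.test -- an illustrative sketch, not an actual test file.
# Build a small inferior with the host compiler (%clang_host, %p and %t
# are lit substitutions for the compiler, the test's directory and a
# temporary output path).
# RUN: %clang_host -g %p/Inputs/main.c -o %t
# Drive lldb in batch mode and verify its output with FileCheck.
# RUN: %lldb %t -o 'breakpoint set -n main' -o exit | FileCheck %s
# CHECK: Breakpoint 1: where = {{.*}}main
```
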

Another thing to consider is the binaries being debugged, which we call
inferiors. For shell tests, they have to be relatively simple. The
``dotest.py`` test framework has extensive support for complex build scenarios
and different variants, which is described in more detail below, while shell
tests are limited to single lines of shell commands with compiler and linker
invocations.

On the same topic, another interesting aspect of the shell tests is that there
you can often get away with a broken or incomplete binary, whereas the API
tests almost always require a fully functional executable. This enables testing
of (some) aspects of handling binaries with non-native architectures or
operating systems.

Finally, the shell tests always run in batch mode. You start with some input
and the test verifies the output. The debugger can be sensitive to its
environment, such as the platform it runs on. It can be hard to express
that the same test might behave slightly differently on macOS and Linux.
Additionally, the debugger is an interactive tool, and shell tests provide
no good way of testing interactive aspects, such as tab completion.

API Tests
`````````

API tests are located under ``lldb/test/API``. They are run with
``dotest.py``. Tests are written in Python and test binaries (inferiors) are
compiled with Make. The majority of API tests are end-to-end tests that compile
programs from source, run them, and debug the processes.

As mentioned before, ``dotest.py`` is LLDB's testing framework. The
implementation is located under ``lldb/packages/Python/lldbsuite``. We have
several extensions and custom test primitives on top of what's offered by
`unittest2 <https://docs.python.org/2/library/unittest.html>`_. Those can be
found in
`lldbtest.py <https://github.com/llvm/llvm-project/blob/main/lldb/packages/Python/lldbsuite/test/lldbtest.py>`_.

Below is the directory layout of the `example API test
<https://github.com/llvm/llvm-project/tree/main/lldb/test/API/sample_test>`_.
The test directory will always contain a Python file, starting with ``Test``.
Most tests are structured as a binary being debugged, so there will be
one or more source files and a ``Makefile``.

::

  sample_test
  ├── Makefile
  ├── TestSampleTest.py
  └── main.c

Let's start with the Python test file. Every test is its own class and can have
one or more test methods that start with ``test_``. Many tests define
multiple test methods and share a bunch of common code. For example, for a
fictitious test that makes sure we can set breakpoints we might have one test
method that ensures we can set a breakpoint by address, one that sets a
breakpoint by name and another that sets the same breakpoint by file and line
number. The setup, teardown and everything else other than setting the
breakpoint could be shared.
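
The shape described above can be sketched with plain ``unittest``. A real LLDB
test would derive from ``lldbtest.TestBase`` and set actual breakpoints through
the SB API; the class and helper names here are hypothetical:

```python
import unittest

# Structural sketch only: one class, several test_ methods, shared logic
# factored into a helper instead of being duplicated per method.
class TestBreakpointVariants(unittest.TestCase):
    def _check_breakpoint(self, how):
        # In a real test, the shared setup (target creation, breakpoint
        # verification, teardown) would live here.
        return "breakpoint set by %s" % how

    def test_set_by_name(self):
        self.assertEqual(self._check_breakpoint("name"),
                         "breakpoint set by name")

    def test_set_by_address(self):
        self.assertEqual(self._check_breakpoint("address"),
                         "breakpoint set by address")
```
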

Our testing framework also has a bunch of utilities that abstract common
operations, such as creating targets, setting breakpoints etc. When code is
shared across tests, we extract it into a utility in ``lldbutil``. It's always
worth taking a look at `lldbutil
<https://github.com/llvm/llvm-project/blob/main/lldb/packages/Python/lldbsuite/test/lldbutil.py>`_
to see if there's a utility to simplify some of the testing boilerplate.
Because we can't always audit every existing test, this is doubly true when
looking at an existing test for inspiration.

It's possible to skip or `XFAIL
<https://ftp.gnu.org/old-gnu/Manuals/dejagnu-1.3/html_node/dejagnu_6.html>`_
tests using decorators. You'll see them a lot. The debugger can be sensitive to
things like the architecture, the host and target platform, the compiler
version etc. LLDB comes with a range of predefined decorators for these
configurations.

::

  @expectedFailureAll(archs=["aarch64"], oslist=["linux"])

Another great thing about these decorators is that they're very easy to extend;
it's even possible to define a function in a test case that determines whether
the test should be run or not.

::

  @expectedFailure(checking_function_name)

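As an illustration of the mechanism (not LLDB's actual implementation, which
lives in ``lldb/packages/Python/lldbsuite/test/decorators.py``), such a
checking-function decorator can be approximated with plain ``unittest``; the
names below are hypothetical:

```python
import unittest

def expectedFailureIf(checking_function):
    """Sketch: mark a test as an expected failure when the function says so.

    LLDB's real decorators additionally understand architectures,
    platforms, compiler versions, etc.
    """
    def decorate(test_method):
        if checking_function():
            return unittest.expectedFailure(test_method)
        return test_method
    return decorate

def known_bug_on_this_config():
    # A real checking function would inspect the platform, compiler, etc.
    return True

class Demo(unittest.TestCase):
    @expectedFailureIf(known_bug_on_this_config)
    def test_known_bug(self):
        self.assertEqual(1, 2)  # fails, but is reported as expected
```
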
In addition to providing a lot more flexibility when it comes to writing the
test, the API tests also allow for much more complex scenarios when it comes to
building inferiors. Every test has its own ``Makefile``, most of them only a
few lines long. A shared ``Makefile`` (``Makefile.rules``) with about a
thousand lines of rules takes care of most if not all of the boilerplate,
while individual Makefiles can be used to build more advanced tests.

Here's an example of a simple ``Makefile`` used by the example test.

::

  C_SOURCES := main.c
  CFLAGS_EXTRAS := -std=c99

  include Makefile.rules

Finding the right variables to set can be tricky. You can always take a look at
`Makefile.rules <https://github.com/llvm/llvm-project/blob/main/lldb/packages/Python/lldbsuite/test/make/Makefile.rules>`_,
but often it's easier to find an existing ``Makefile`` that does something
similar to what you want to do.

Another thing this enables is having different variants for the same test
case. By default, we run every test for two debug info formats: once with
DWARF from the object files and once with a dSYM on macOS or split
DWARF (DWO) on Linux. But there are many more things we can test
that are orthogonal to the test itself. On GreenDragon we have a matrix bot
that runs the test suite under different configurations, with older host
compilers and different DWARF versions.

As you can imagine, this quickly leads to a combinatorial explosion in the
number of variants. It's very tempting to add more variants because it's an
easy way to increase test coverage. It doesn't scale. It's easy to set up, but
it increases the runtime of the tests and has a large ongoing cost.

The test variants are most useful when developing a larger feature (e.g. support
for a new DWARF version). The test suite contains a large number of fairly
generic tests, so running the test suite with the feature enabled is a good way
to gain confidence that you haven't missed an important aspect. However, this
generality makes them poor regression tests. Because it's not clear what a
specific test covers, a random modification to the test case can make it start
(or stop) testing a completely different part of your feature. And since these
tests tend to look very similar, it's easy for a simple bug to cause hundreds of
tests to fail in the same way.

For this reason, we recommend using test variants only while developing a new
feature. This can often be done by running the test suite with different
arguments -- without any modifications to the code. You can create a focused
test for any bug found that way. Often, there will be many tests failing, but a
lot of them will have the same root cause. These tests will be easier to debug
and will not put undue burden on all other bots and developers.

In conclusion, you'll want to opt for an API test to test the API itself or
when you need the expressivity, either for the test case itself or for the
program being debugged. The fact that the API tests work with different
variants means that more general tests should be API tests, so that they can be
run against the different variants.

Guidelines for API tests
^^^^^^^^^^^^^^^^^^^^^^^^

API tests are expected to be fast, reliable and maintainable. To achieve this
goal, API tests should conform to the following guidelines in addition to normal
good testing practices.

**Don't unnecessarily launch the test executable.**
    Launching a process and running to a breakpoint can often be the most
    expensive part of a test and should be avoided if possible. A large part
    of LLDB's functionality is available directly after creating an `SBTarget`
    of the test executable.

    The part of the SB API that can be tested with just a target includes
    everything that represents information about the executable and its
    debug information (e.g., `SBTarget`, `SBModule`, `SBSymbolContext`,
    `SBFunction`, `SBInstruction`, `SBCompileUnit`, etc.). For test executables
    written in languages with a type system that is mostly defined at compile
    time (e.g., C and C++) there is also usually no process necessary to test
    the `SBType`-related parts of the API. With those languages it's also
    possible to test `SBValue` by running expressions with
    `SBTarget.EvaluateExpression` or the ``expect_expr`` testing utility.

    Functionality that always requires a running process is everything that
    tests the `SBProcess`, `SBThread`, and `SBFrame` classes. The same is true
    for tests that exercise breakpoints, watchpoints and sanitizers.
    Languages such as Objective-C that have a dependency on a runtime
    environment also always require a running process.

**Don't unnecessarily include system headers in test sources.**
    Including external headers slows down the compilation of the test executable
    and it makes reproducing test failures on other operating systems or
    configurations harder.

**Avoid specifying test-specific compiler flags when including system headers.**
    If a test requires including a system header (e.g., a test for a libc++
    formatter includes a libc++ header), try to avoid specifying custom compiler
    flags if possible. Certain debug information formats such as ``gmodules``
    use a cache that is shared between all API tests and that contains
    precompiled system headers. If you add or remove a specific compiler flag
    in your test (e.g., adding ``-DFOO`` to the ``Makefile`` or ``self.build``
    arguments), then the test will not use the shared precompiled header cache
    and expensively recompile all system headers from scratch. If you depend on
    a specific compiler flag for the test, you can avoid this issue by either
    removing all system header includes or decorating the test function with
    ``@no_debug_info_test`` (which will avoid running all debug information
    variants including ``gmodules``).

**Test programs should be kept simple.**
    Test executables should do the minimum amount of work to bring the process
    into the state that is required for the test. Simulating a 'real' program
    that actually tries to do some useful task rarely helps with catching bugs
    and makes the test much harder to debug and maintain. The test programs
    should always be deterministic (i.e., do not generate and check against
    random test values).

**Identifiers in tests should be simple and descriptive.**
    Often test programs need to declare functions and classes which require
    choosing some form of identifier for them. These identifiers should always
    either be kept simple for small tests (e.g., ``A``, ``B``, ...) or have some
    descriptive name (e.g., ``ClassWithTailPadding``, ``inlined_func``, ...).
    Never choose identifiers that are already used anywhere else in LLVM or
    other programs (e.g., don't name a class ``VirtualFileSystem``, a function
    ``llvm_unreachable``, or a namespace ``rapidxml``) as this will mislead
    people ``grep``'ing the LLVM repository for those strings.

**Prefer LLDB testing utilities over directly working with the SB API.**
    The ``lldbutil`` module and the ``TestBase`` class come with a large amount
    of utility functions that can do common test setup tasks (e.g., starting a
    test executable and running the process to a breakpoint). Using these
    functions not only keeps the test shorter and free of duplicated code, but
    they also follow best test suite practices and usually give much clearer
    error messages if something goes wrong. The test utilities also contain
    custom asserts and checks that should preferably be used (e.g.
    ``self.assertSuccess``).

**Prefer calling the SB API over checking command output.**
    Avoid writing your tests on top of ``self.expect(...)`` calls that check
    the output of LLDB commands and instead try calling into the SB API. Relying
    on LLDB commands makes changing (and improving) the output/syntax of
    commands harder and the resulting tests are often prone to accepting
    incorrect test results. Especially improved error messages that contain
    more information might cause these ``self.expect`` calls to unintentionally
    find the required ``substrs``. For example, the following ``self.expect``
    check will unexpectedly pass if it's run as the first expression in a test:

::

    self.expect("expr 2 + 2", substrs=["0"])

When running the same command in LLDB, the reason for the unexpected success
is that '0' is found in the name of the implicitly created result variable:

::

    (lldb) expr 2 + 2
    (int) $0 = 4
           ^ The '0' substring is found here.

A better way to write the test above is to use LLDB's testing utility
``expect_expr``, which will only pass if the expression produces a value of 0:

::

    self.expect_expr("2 + 2", result_value="0")

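The pitfall is plain substring matching. A minimal, self-contained sketch of
why the original check passes and why comparing the parsed value catches the
mistake:

```python
# The first expression evaluated in a session creates result variable $0,
# so even though 2 + 2 evaluates to 4, the output line contains a '0':
output = "(int) $0 = 4"

# self.expect(..., substrs=["0"]) does plain substring matching, which
# finds the '0' in '$0' and passes for the wrong reason.
assert "0" in output

# expect_expr(result_value=...) compares the parsed value instead,
# which catches the mistake: the result is '4', not '0'.
value = output.split("=")[-1].strip()
assert value == "4" and value != "0"
```
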
**Prefer using specific asserts over the generic assertTrue/assertFalse.**
    The ``self.assertTrue``/``self.assertFalse`` functions should always be your
    last option as they give non-descriptive error messages. The test class has
    several expressive asserts such as ``self.assertIn`` that automatically
    generate an explanation of how the received values differ from the expected
    ones. Check the documentation of Python's ``unittest`` module to see what
    asserts are available. LLDB also has a few custom asserts that are tailored
    to our own data types.

+-----------------------------------------------+-----------------------------------------------------------------+
| **Assert**                                    | **Description**                                                 |
+-----------------------------------------------+-----------------------------------------------------------------+
| ``assertSuccess``                             | Assert that an ``lldb.SBError`` is in the "success" state.      |
+-----------------------------------------------+-----------------------------------------------------------------+
| ``assertState``                               | Assert that two states (``lldb.eState*``) are equal.            |
+-----------------------------------------------+-----------------------------------------------------------------+
| ``assertStopReason``                          | Assert that two stop reasons (``lldb.eStopReason*``) are equal. |
+-----------------------------------------------+-----------------------------------------------------------------+

    If you can't find a specific assert that fits your needs and you fall back
    to a generic assert, make sure you put useful information into the assert's
    ``msg`` argument that helps explain the failure.

::

    # Bad. Will print a generic error such as 'False is not True'.
    self.assertTrue(expected_string in list_of_results)
    # Good. Will print expected_string and the contents of list_of_results.
    self.assertIn(expected_string, list_of_results)

**Do not use hard-coded line numbers in your test case.**

Instead, try to tag the line with some distinguishing pattern, and use the
function ``line_number()`` defined in ``lldbtest.py``, which takes a filename
and a string to match as arguments and returns the line number.

As an example, take a look at
``test/API/functionalities/breakpoint/breakpoint_conditions/main.c``, which
has these two lines:

.. code-block:: c

        return c(val); // Find the line number of c's parent call here.

and

.. code-block:: c

    return val + 3; // Find the line number of function "c" here.

The Python test case ``TestBreakpointConditions.py`` uses the comment strings
to find the line numbers during ``setUp(self)`` and uses them later on to
verify that the correct breakpoint is being stopped on, and that its parent
frame also has the correct line number, as intended through the breakpoint
condition.
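
The helper's behaviour can be sketched as a scan for the first line containing
the pattern. This is a standalone approximation of ``line_number()``, not the
exact implementation from ``lldbtest.py``:

```python
def line_number(filename, string_to_match):
    """Return the 1-based number of the first line containing the pattern.

    Standalone sketch; the real helper in lldbtest.py behaves similarly
    but integrates with the test framework's error reporting.
    """
    with open(filename) as f:
        for lineno, line in enumerate(f, start=1):
            if string_to_match in line:
                return lineno
    raise Exception('Unable to find "%s" in %s' % (string_to_match, filename))
```

A test would then pass the returned line number to the breakpoint-setting
utilities instead of hard-coding it, so the test keeps working when the source
file is edited.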

**Take advantage of the unittest framework's decorator features.**

These features can be used to properly mark your test class or method for
platform-specific, compiler-specific or version-specific tests.

As an example, take a look at
``test/API/lang/c/forward/TestForwardDeclaration.py``, which has these lines:

.. code-block:: python

    @no_debug_info_test
    @skipIfDarwin
    @skipIf(compiler=no_match("clang"))
    @skipIf(compiler_version=["<", "8.0"])
    @expectedFailureAll(oslist=["windows"])
    def test_debug_names(self):
        """Test that we are able to find complete types when using DWARF v5
        accelerator tables"""
        self.do_test(dict(CFLAGS_EXTRAS="-gdwarf-5 -gpubnames"))

This tells the test harness to skip the test on Darwin, to skip it unless the
compiler is clang of version 8.0 or above, and to expect it to fail on Windows.

**Class-wise cleanup after yourself.**

``TestBase.tearDownClass(cls)`` provides a mechanism to invoke the
platform-specific cleanup after finishing with a test class. A test class can
have more than one test method, so the ``tearDownClass(cls)`` method gets run
after all the test methods have been executed by the test harness.

The default cleanup action performed by the
``packages/Python/lldbsuite/test/lldbtest.py`` module invokes the
``make clean`` OS command.

If this default cleanup is not enough, an individual test class can provide an
extra cleanup hook with a class method named ``classCleanup``, for example, in
``test/API/terminal/TestSTTYBeforeAndAfter.py``:

.. code-block:: python

    @classmethod
    def classCleanup(cls):
        """Cleanup the test byproducts."""
        cls.RemoveTempFile("child_send1.txt")


The ``child_send1.txt`` file gets generated during the test run, so it makes
sense to explicitly spell out the cleanup action in the same
``TestSTTYBeforeAndAfter.py`` file, instead of artificially adding it to the
default cleanup action, which serves to clean up those intermediate and
``a.out`` files.

Running The Tests
-----------------

.. note::

   On Windows any invocations of python should be replaced with python_d, the
   debug interpreter, when running the test suite against a debug version of
   LLDB.

.. note::

   On NetBSD you must export ``LD_LIBRARY_PATH=$PWD/lib`` in your environment.
   This is due to lack of the ``$ORIGIN`` linker feature.

Running the Full Test Suite
```````````````````````````

The easiest way to run the LLDB test suite is to use the ``check-lldb`` build
target.

By default, the ``check-lldb`` target builds the test programs with the same
compiler that was used to build LLDB. To build the tests with a different
compiler, you can set the ``LLDB_TEST_COMPILER`` CMake variable.

It is possible to customize the architecture of the test binaries and the
compiler used by appending the ``-A`` and ``-C`` options respectively to the
CMake variable ``LLDB_TEST_USER_ARGS``. For example, to test LLDB against
32-bit binaries built with a custom version of clang, do:

::

   $ cmake -DLLDB_TEST_USER_ARGS="-A i386 -C /path/to/custom/clang" -G Ninja
   $ ninja check-lldb

Note that multiple ``-A`` and ``-C`` flags can be specified to
``LLDB_TEST_USER_ARGS``.

Running a Single Test Suite
```````````````````````````

Each test suite can be run separately, similar to running the whole test suite
with ``check-lldb``.

* Use ``check-lldb-unit`` to run just the unit tests.
* Use ``check-lldb-api`` to run just the SB API tests.
* Use ``check-lldb-shell`` to run just the shell tests.

You can run specific subdirectories by appending the directory name to the
target. For example, to run all the tests in ``ObjectFile``, you can use the
target ``check-lldb-shell-objectfile``. However, because the unit tests and API
tests don't actually live under ``lldb/test``, this convenience is only
available for the shell tests.

Running a Single Test
`````````````````````

The recommended way to run a single test is by invoking the lit driver with a
filter. This ensures that the test is run with the same configuration as when
run as part of a test suite.

::

   $ ./bin/llvm-lit -sv tools/lldb/test --filter <test>


Because lit automatically scans a directory for tests, it's also possible to
pass a subdirectory to run a specific subset of the tests.

::

   $ ./bin/llvm-lit -sv tools/lldb/test/Shell/Commands/CommandScriptImmediateOutput


For the SB API tests it is possible to forward arguments to ``dotest.py`` by
passing ``--param`` to lit and setting a value for ``dotest-args``.

::

   $ ./bin/llvm-lit -sv tools/lldb/test --param dotest-args='-C gcc'


Below is an overview of running individual tests in the unit and API test
suites without going through the lit driver.

Running a Specific Test or Set of Tests: API Tests
``````````````````````````````````````````````````

In addition to running all the LLDB test suites with the ``check-lldb`` CMake
target above, it is possible to run individual LLDB tests. If you have a CMake
build you can use the ``lldb-dotest`` binary, which is a wrapper around
``dotest.py`` that passes all the arguments configured by CMake.

Alternatively, you can use ``dotest.py`` directly, if you want to run a test
one-off with a different configuration.

For example, to run the test cases defined in ``TestInferiorCrashing.py``, run:

::

   $ ./bin/lldb-dotest -p TestInferiorCrashing.py

::

   $ cd $lldb/test
   $ python dotest.py --executable <path-to-lldb> -p TestInferiorCrashing.py ../packages/Python/lldbsuite/test

If the test is not specified by name (e.g. if you leave the ``-p`` argument
off), all tests in that directory will be executed:


::

   $ ./bin/lldb-dotest functionalities/data-formatter

::

   $ python dotest.py --executable <path-to-lldb> functionalities/data-formatter

Many more options are available. To see a list of all of them, run:

::

   $ python dotest.py -h


Running a Specific Test or Set of Tests: Unit Tests
```````````````````````````````````````````````````

The unit tests are simple executables, located in the build directory under
``tools/lldb/unittests``.

To run them, just run the test binary. For example, to run all the Host tests:

::

   $ ./tools/lldb/unittests/Host/HostTests


To run a specific test, pass a filter, for example:

::

   $ ./tools/lldb/unittests/Host/HostTests --gtest_filter=SocketTest.DomainListenConnectAccept


Running the Test Suite Remotely
```````````````````````````````

Running the test suite remotely is similar to the process of running a local
test suite, but there are two things to keep in mind:

1. You must have ``lldb-server`` running on the remote system, ready to accept
   multiple connections. For more information on how to set up remote debugging
   see the Remote debugging page.
2. You must tell the test suite how to connect to the remote system. This is
   achieved using the ``--platform-name``, ``--platform-url`` and
   ``--platform-working-dir`` parameters to ``dotest.py``. These parameters
   correspond to the ``platform select`` and ``platform connect`` LLDB
   commands. You will usually also need to specify the compiler and
   architecture for the remote system.

Currently, running the remote test suite is supported only with ``dotest.py``
(or ``dosep.py`` with a single thread), but we expect this issue to be
addressed in the near future.

Running tests in QEMU System Emulation Environment
``````````````````````````````````````````````````

QEMU can be used to test LLDB in an emulation environment in the absence of
actual hardware. The `QEMU based testing <https://lldb.llvm.org/use/qemu-testing.html>`_
page describes how to set up an emulation environment using the QEMU helper
scripts found under ``llvm-project/lldb/scripts/lldb-test-qemu``. These scripts
currently work with Arm or AArch64, but support for other architectures can be
added easily.

Debugging Test Failures
-----------------------

On non-Windows platforms, you can use the ``-d`` option to ``dotest.py`` which
will cause the script to print out the pid of the test and wait for a while
until a debugger is attached. Then run ``lldb -p <pid>`` to attach.

To instead debug a test's Python source, edit the test and insert
``import pdb; pdb.set_trace()`` at the point you want to start debugging. In
addition to pdb's debugging facilities, lldb commands can be executed with the
help of a pdb alias, for example ``lldb bt`` and ``lldb v some_var``. Add this
line to your ``~/.pdbrc``:

::

   alias lldb self.dbg.HandleCommand("%*")

Debugging Test Failures on Windows
``````````````````````````````````

On Windows, it is strongly recommended to use Python Tools for Visual Studio
for debugging test failures. It can seamlessly step between native and managed
code, which is very helpful when you need to step through the test itself, and
then into the LLDB code that backs the operations the test is performing.

A quick guide to getting started with PTVS is as follows:

#. Install PTVS.
#. Create a Visual Studio Project for the Python code.
    #. Go to File -> New -> Project -> Python -> From Existing Python Code.
    #. Choose llvm/tools/lldb as the directory containing the Python code.
    #. When asked where to save the .pyproj file, choose the folder ``llvm/tools/lldb/pyproj``. This is a special folder that is ignored by the ``.gitignore`` file, since it is not checked in.
#. Set test/dotest.py as the startup file.
#. Make sure there is a Python Environment installed for your distribution. For example, if you installed Python to ``C:\Python35``, PTVS needs to know that this is the interpreter you want to use for running the test suite.
    #. Go to Tools -> Options -> Python Tools -> Environment Options.
    #. Click Add Environment, and enter Python 3.5 Debug for the name. Fill out the values correctly.
#. Configure the project to use this debug interpreter.
    #. Right click the Project node in Solution Explorer.
    #. In the General tab, make sure Python 3.5 Debug is the selected Interpreter.
    #. In Debug/Search Paths, enter the path to your ninja/lib/site-packages directory.
    #. In Debug/Environment Variables, enter ``VCINSTALLDIR=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\``.
    #. If you want to enable mixed-mode debugging, check Enable native code debugging (this slows down debugging, so enable it only on an as-needed basis).
#. Set the command line for the test suite to run.
    #. Right click the project in Solution Explorer and choose the Debug tab.
    #. Enter the arguments to dotest.py.
    #. Example command options:

::

   --arch=i686
   # Path to debug lldb.exe
   --executable D:/src/llvmbuild/ninja/bin/lldb.exe
   # Directory to store log files
   -s D:/src/llvmbuild/ninja/lldb-test-traces
   -u CXXFLAGS -u CFLAGS
   # If a test crashes, show JIT debugging dialog.
   --enable-crash-dialog
   # Path to release clang.exe
   -C d:\src\llvmbuild\ninja_release\bin\clang.exe
   # Path to the particular test you want to debug.
   -p TestPaths.py
   # Root of test tree
   D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test

::

   --arch=i686 --executable D:/src/llvmbuild/ninja/bin/lldb.exe -s D:/src/llvmbuild/ninja/lldb-test-traces -u CXXFLAGS -u CFLAGS --enable-crash-dialog -C d:\src\llvmbuild\ninja_release\bin\clang.exe -p TestPaths.py D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test --no-multiprocess

.. [#] `https://lldb.llvm.org/python_reference/lldb.SBTarget-class.html#BreakpointCreateByName <https://lldb.llvm.org/python_reference/lldb.SBTarget-class.html#BreakpointCreateByName>`_