.. _openmp_runtimes:

LLVM/OpenMP Runtimes
====================

There are four distinct types of LLVM/OpenMP runtimes: the host runtime
:ref:`libomp`, the target offloading runtime :ref:`libomptarget`, the target
offloading plugin :ref:`libomptarget_plugin`, and finally the target device
runtime :ref:`libomptarget_device`.

For general information on debugging OpenMP target offloading applications, see
:ref:`libomptarget_info` and :ref:`libomptarget_device_debugging`.

.. _libomp:

LLVM/OpenMP Host Runtime (``libomp``)
-------------------------------------

An `early (2015) design document
<https://raw.githubusercontent.com/llvm/llvm-project/main/openmp/runtime/doc/Reference.pdf>`_
for the LLVM/OpenMP host runtime, a.k.a. ``libomp.so``, is available as a PDF.

.. _libomp_environment_vars:

Environment Variables
^^^^^^^^^^^^^^^^^^^^^

OMP_CANCELLATION
""""""""""""""""

Enables (``true``) or disables (``false``) cancellation of the innermost
enclosing region of the type specified. If set to ``true``, the effects of the
cancel construct and of cancellation points are enabled and cancellation is
activated. If set to ``false``, cancellation is disabled and the cancel
construct and cancellation points are effectively ignored.

.. note::
   Internal barrier code will work differently depending on whether cancellation
   is enabled. Barrier code should repeatedly check the global flag to figure
   out if cancellation has been triggered. If a thread observes cancellation, it
   should leave the barrier prematurely with the return value 1 (and may wake up
   other threads). Otherwise, it should leave the barrier with the return value 0.

**Default:** ``false``

OMP_DISPLAY_ENV
"""""""""""""""

Enables (``true``) or disables (``false``) the printing to ``stderr`` of
the OpenMP version number and the values associated with the OpenMP
environment variables.

Possible values are: ``true``, ``false``, or ``verbose``.

**Default:** ``false``

OMP_DEFAULT_DEVICE
""""""""""""""""""

Sets the device that will be used in a target region. The OpenMP routine
``omp_set_default_device`` or a ``device`` clause on a target construct can
override this variable. If no device with the specified device number exists,
the code is executed on the host. If this environment variable is not set,
device number 0 is used.

OMP_DYNAMIC
"""""""""""

Enables (``true``) or disables (``false``) the dynamic adjustment of the
number of threads.

| **Default:** ``false``

OMP_MAX_ACTIVE_LEVELS
"""""""""""""""""""""

The maximum number of levels of parallel nesting for the program.

| **Default:** ``1``

OMP_NESTED
""""""""""

.. warning::
   Deprecated. Please use ``OMP_MAX_ACTIVE_LEVELS`` to control nested parallelism.

Enables (``true``) or disables (``false``) nested parallelism.

| **Default:** ``false``

OMP_NUM_THREADS
"""""""""""""""

Sets the maximum number of threads to use for OpenMP parallel regions if no
other value is specified in the application.

The value can be a single integer, in which case it specifies the number of
threads for all parallel regions. The value can also be a comma-separated list
of integers, in which case each integer specifies the number of threads for a
parallel region at that particular nesting level.

The first position in the list represents the outer-most parallel nesting level,
the second position represents the next-inner parallel nesting level, and so on.
At any level, the integer can be left out of the list. If the first integer in a
list is left out, it implies the normal default value for threads is used at the
outer-most level. If the integer is left out of any other level, the number of
threads for that level is inherited from the previous level.

| **Default:** The number of processors visible to the operating system on which the program is executed.
| **Syntax:** ``OMP_NUM_THREADS=value[,value]*``
| **Example:** ``OMP_NUM_THREADS=4,3``
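
For illustration, a minimal sketch of how the per-level values take effect; it
assumes nested parallelism is enabled via ``omp_set_max_active_levels`` (or
``OMP_MAX_ACTIVE_LEVELS``), and the printed counts depend on the available
hardware:

.. code-block:: c++

   #include <cstdio>
   #include <omp.h>

   // Run with, e.g.: env OMP_NUM_THREADS=4,3 ./a.out
   int main() {
     omp_set_max_active_levels(2); // allow the nested level to be active
   #pragma omp parallel
     {
   #pragma omp single
       std::printf("outer level: %d threads\n", omp_get_num_threads());
   #pragma omp parallel
       {
   #pragma omp single
         std::printf("inner level: %d threads\n", omp_get_num_threads());
       }
     }
     return 0;
   }
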

OMP_PLACES
""""""""""

Specifies an explicit ordered list of places, either as an abstract name
describing a set of places or as an explicit list of places described by
non-negative numbers. An exclusion operator, ``!``, can also be used to exclude
the number or place immediately following the operator.

For **explicit lists**, an ordered list of places is specified with each place
represented as a set of non-negative numbers. The non-negative numbers represent
operating system logical processor numbers and can be thought of as an OS
affinity mask.

Individual places can be specified through two methods.
Both of the **examples** below represent the same place.

* An explicit list of comma-separated non-negative numbers. **Example:** ``{0,2,4,6}``
* An interval with the notation ``<lower-bound>:<length>[:<stride>]``. **Example:** ``{0:4:2}``.
  When ``<stride>`` is omitted, a unit stride is assumed.
  The interval notation represents this set of numbers:

::

    <lower-bound>, <lower-bound> + <stride>, ..., <lower-bound> + (<length> - 1) * <stride>

A place list can also be specified using the same interval
notation: ``{place}:<length>[:<stride>]``.
This represents the list of length ``<length>`` places determined by the following:

.. code-block:: c

    {place}, {place} + <stride>, ..., {place} + (<length>-1)*<stride>
    Where given {place} and integer N, {place} + N = {place with every number offset by N}
    Example: {0,3,6}:4:1 represents {0,3,6}, {1,4,7}, {2,5,8}, {3,6,9}

**Examples of explicit lists:**
These all represent the same set of places:

::

    OMP_PLACES="{0,1,2,3},{4,5,6,7},{8,9,10,11},{12,13,14,15}"
    OMP_PLACES="{0:4},{4:4},{8:4},{12:4}"
    OMP_PLACES="{0:4}:4:4"

.. note::
   When specifying a place using a set of numbers, if any number cannot be
   mapped to a processor on the target platform, then that number is
   ignored within the place, but the rest of the place is kept intact.
   If all numbers within a place are invalid, then the entire place is removed
   from the place list, but the rest of the place list is kept intact.

The **abstract names** listed below are understood by the run-time environment:

* ``threads``: Each place corresponds to a single hardware thread.
* ``cores``: Each place corresponds to a single core (having one or more hardware threads).
* ``sockets``: Each place corresponds to a single socket (consisting of one or more cores).
* ``numa_domains``: Each place corresponds to a single NUMA domain (consisting of one or more cores).
* ``ll_caches``: Each place corresponds to a last-level cache (consisting of one or more cores).

The abstract name may be appended with a positive number in parentheses to
denote the length of the place list to be created, that is,
``abstract_name(num-places)``. If the optional number isn't specified, then the
runtime will use all available resources of type ``abstract_name``. When
requesting fewer places than available on the system, the first available
resources as determined by ``abstract_name`` are used. When requesting more
places than available on the system, only the available resources are used.

**Examples of abstract names:**

::

    OMP_PLACES=threads
    OMP_PLACES=threads(4)
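
To check how a given ``OMP_PLACES`` setting was parsed, the standard
place-query routines can be used. The following is a minimal sketch; the
``OMP_PLACES`` value in the comment is only an example and the printed
processor numbers depend on the machine:

.. code-block:: c++

   #include <cstdio>
   #include <vector>
   #include <omp.h>

   // Run with, e.g.: env OMP_PLACES="{0:4},{4:4}" ./a.out
   int main() {
     int num_places = omp_get_num_places();
     for (int p = 0; p < num_places; ++p) {
       std::vector<int> ids(omp_get_place_num_procs(p));
       omp_get_place_proc_ids(p, ids.data());
       std::printf("place %d:", p);
       for (int id : ids)
         std::printf(" %d", id);
       std::printf("\n");
     }
     return 0;
   }
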

OMP_PROC_BIND (Windows, Linux)
""""""""""""""""""""""""""""""

Sets the thread affinity policy to be used for parallel regions at the
corresponding nested level. Enables (``true``) or disables (``false``)
the binding of threads to processor contexts. If enabled, this is the
same as specifying ``KMP_AFFINITY=scatter``. If disabled, this is the
same as specifying ``KMP_AFFINITY=none``.

**Acceptable values:** ``true``, ``false``, or a comma separated list, each
element of which is one of the following values: ``master``, ``close``,
``spread``, or ``primary``.

**Default:** ``false``

.. warning::
   ``master`` is deprecated. The semantics of ``master`` are the same as ``primary``.

If set to ``false``, the execution environment may move OpenMP threads between
OpenMP places, thread affinity is disabled, and ``proc_bind`` clauses on
parallel constructs are ignored. Otherwise, the execution environment should
not move OpenMP threads between OpenMP places, thread affinity is enabled, and
the initial thread is bound to the first place in the OpenMP place list.

If set to ``primary``, all threads are bound to the same place as the primary
thread.

If set to ``close``, threads are bound to successive places, near where the
primary thread is bound.

If set to ``spread``, the primary thread's partition is subdivided and threads
are bound to successive single-place sub-partitions.

| **Related environment variable:** ``KMP_AFFINITY`` (overrides ``OMP_PROC_BIND``).

OMP_SCHEDULE
""""""""""""

Sets the run-time schedule type and an optional chunk size.

| **Default:** ``static``, no chunk size specified
| **Syntax:** ``OMP_SCHEDULE="kind[,chunk_size]"``

OMP_STACKSIZE
"""""""""""""

Sets the number of bytes to allocate for each OpenMP thread to use as the
private stack for the thread. The recommended size is ``16M``.

Use the optional suffixes to specify byte units: ``B`` (bytes), ``K`` (Kilobytes),
``M`` (Megabytes), ``G`` (Gigabytes), or ``T`` (Terabytes). If you specify a
value without a suffix, the byte unit is assumed to be ``K`` (Kilobytes).

This variable does not affect the native operating system threads created by the
user program, or the thread executing the sequential part of an OpenMP program.

The ``kmp_{set,get}_stacksize_s()`` routines set/retrieve the value. The
``kmp_set_stacksize_s()`` routine must be called from the sequential part,
before the first parallel region is created. Otherwise, calling
``kmp_set_stacksize_s()`` has no effect.

| **Default:**

* 32-bit architecture: ``2M``
* 64-bit architecture: ``4M``

| **Related environment variable:** ``KMP_STACKSIZE`` (overrides ``OMP_STACKSIZE``).
| **Example:** ``OMP_STACKSIZE=8M``
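
A minimal sketch of setting the stack size programmatically instead of through
the environment, assuming the ``omp.h`` shipped with ``libomp`` (which declares
the ``kmp_*`` extension routines):

.. code-block:: c++

   #include <cstdio>
   #include <omp.h>

   int main() {
     // Must run before the first parallel region; later calls have no effect.
     kmp_set_stacksize_s(16 * 1024 * 1024); // 16M per OpenMP worker thread

   #pragma omp parallel
     {
   #pragma omp single
       std::printf("worker stack size: %zu bytes\n", kmp_get_stacksize_s());
     }
     return 0;
   }
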

OMP_THREAD_LIMIT
""""""""""""""""

Limits the number of simultaneously-executing threads in an OpenMP program.

If this limit is reached and another native operating system thread encounters
OpenMP API calls or constructs, the program can abort with an error message.
If this limit is reached when an OpenMP parallel region begins, a one-time
warning message might be generated indicating that the number of threads in
the team was reduced, but the program will continue.

The ``omp_get_thread_limit()`` routine returns the value of the limit.

| **Default:** No enforced limit
| **Related environment variable:** ``KMP_ALL_THREADS`` (overrides ``OMP_THREAD_LIMIT``).

OMP_WAIT_POLICY
"""""""""""""""

Decides whether threads spin (active) or yield (passive) while they are waiting.
``OMP_WAIT_POLICY=active`` is an alias for ``KMP_LIBRARY=turnaround``, and
``OMP_WAIT_POLICY=passive`` is an alias for ``KMP_LIBRARY=throughput``.

| **Default:** ``passive``

.. note::
   Although the default is ``passive``, unless the user has explicitly set
   ``OMP_WAIT_POLICY``, there is a small period of active spinning determined
   by ``KMP_BLOCKTIME``.

KMP_AFFINITY (Windows, Linux)
"""""""""""""""""""""""""""""

Enables the run-time library to bind threads to physical processing units.

You must set this environment variable before the first parallel region, or
before certain API calls including ``omp_get_max_threads()``,
``omp_get_num_procs()`` and any affinity API calls.

**Syntax:** ``KMP_AFFINITY=[<modifier>,...]<type>[,<permute>][,<offset>]``

``modifiers`` are optional strings consisting of a keyword and possibly a specifier:

* ``respect`` (default) and ``norespect`` - determine whether to respect the original process affinity mask.
* ``verbose`` and ``noverbose`` (default) - determine whether to display affinity information.
* ``warnings`` (default) and ``nowarnings`` - determine whether to display warnings during affinity detection.
* ``reset`` and ``noreset`` (default) - determine whether to reset the primary thread's affinity after the outermost parallel region(s).
* ``granularity=<specifier>`` - takes the following specifiers: ``thread``, ``core`` (default), ``tile``,
  ``socket``, ``die``, ``group`` (Windows only).
  The granularity describes the lowest topology levels that OpenMP threads are allowed to float within a topology map.
  For example, if ``granularity=core``, then the OpenMP threads will be allowed to move between logical processors within
  a single core. If ``granularity=thread``, then the OpenMP threads will be restricted to a single logical processor.
* ``proclist=[<proc_list>]`` - The ``proc_list`` is specified by:

+--------------------+----------------------------------------+
| Value              | Description                            |
+====================+========================================+
| <proc_list> :=     | <proc_id> | { <id_list> }              |
+--------------------+----------------------------------------+
| <id_list> :=       | <proc_id> | <proc_id>,<id_list>        |
+--------------------+----------------------------------------+

Where each ``proc_id`` represents an operating system logical processor ID.
For example, ``proclist=[3,0,{1,2},{0,3}]`` with ``OMP_NUM_THREADS=4`` would place thread 0 on
OS logical processor 3, thread 1 on OS logical processor 0, thread 2 on both OS logical
processors 1 & 2, and thread 3 on OS logical processors 0 & 3.

``type`` is the thread affinity policy to choose.
Valid choices are ``none``, ``balanced``, ``compact``, ``scatter``, ``explicit``, ``disabled``.

* type ``none`` (default) - Does not bind OpenMP threads to particular thread contexts;
  however, if the operating system supports affinity, the compiler still uses the
  OpenMP thread affinity interface to determine machine topology.
  Specify ``KMP_AFFINITY=verbose,none`` to list a machine topology map.
* type ``compact`` - Specifying compact assigns the OpenMP thread <n>+1 to a free thread
  context as close as possible to the thread context where the <n> OpenMP thread was
  placed. For example, in a topology map, the nearer a node is to the root, the more
  significance the node has when sorting the threads.
* type ``scatter`` - Specifying scatter distributes the threads as evenly as
  possible across the entire system. ``scatter`` is the opposite of ``compact``; so the
  leaves of the node are most significant when sorting through the machine topology map.
* type ``balanced`` - Places threads on separate cores until all cores have at least one thread,
  similar to the ``scatter`` type. However, when the runtime must use multiple hardware thread
  contexts on the same core, the balanced type ensures that the OpenMP thread numbers are close
  to each other, which scatter does not do. This affinity type is supported on the CPU only for
  single socket systems.
* type ``explicit`` - Specifying explicit assigns OpenMP threads to a list of OS proc IDs that
  have been explicitly specified by using the ``proclist`` modifier, which is required
  for this affinity type.
* type ``disabled`` - Specifying disabled completely disables the thread affinity interfaces.
  This forces the OpenMP run-time library to behave as if the affinity interface was not
  supported by the operating system. This includes the low-level API interfaces such
  as ``kmp_set_affinity`` and ``kmp_get_affinity``, which have no effect and will return
  a nonzero error code.

For both ``compact`` and ``scatter``, ``permute`` and ``offset`` are allowed;
however, if you specify only one integer, the runtime interprets the value as
a permute specifier. **Both permute and offset default to 0.**

The ``permute`` specifier controls which levels are most significant when sorting
the machine topology map. A value for ``permute`` forces the mappings to make the
specified number of most significant levels of the sort the least significant,
and it inverts the order of significance. The root node of the tree is not
considered a separate level for the sort operations.

The ``offset`` specifier indicates the starting position for thread assignment.

| **Default:** ``noverbose,warnings,respect,granularity=core,none``
| **Related environment variable:** ``OMP_PROC_BIND`` (``KMP_AFFINITY`` takes precedence)

.. note::
   On Windows with multiple processor groups, the norespect affinity modifier
   is assumed when the process affinity mask equals a single processor group
   (which is default on Windows). Otherwise, the respect affinity modifier is used.

.. note::
   On Windows with multiple processor groups, if the granularity is too coarse, it
   will be set to ``granularity=group``. For example, if two processor groups exist
   across one socket, and ``granularity=socket`` is specified, the runtime will shift the
   granularity down to group since that is the largest granularity allowed by the OS.
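
To observe the effect of a ``KMP_AFFINITY`` setting beyond the ``verbose``
modifier, a small Linux-only sketch such as the following can report where each
OpenMP thread actually runs; ``sched_getcpu()`` is a glibc extension used here
purely for illustration:

.. code-block:: c++

   #include <cstdio>
   #include <omp.h>
   #include <sched.h> // sched_getcpu(), Linux-specific (may need _GNU_SOURCE)

   // Run with, e.g.: env KMP_AFFINITY=verbose,granularity=thread,compact ./a.out
   int main() {
   #pragma omp parallel
     {
   #pragma omp critical
       std::printf("OpenMP thread %d is running on logical processor %d\n",
                   omp_get_thread_num(), sched_getcpu());
     }
     return 0;
   }
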

KMP_HIDDEN_HELPER_AFFINITY (Windows, Linux)
"""""""""""""""""""""""""""""""""""""""""""

Enables the run-time library to bind hidden helper threads to physical processing units.
This environment variable has the same syntax and semantics as ``KMP_AFFINITY`` but only
applies to the hidden helper team.

You must set this environment variable before the first parallel region, or
before certain API calls including ``omp_get_max_threads()``,
``omp_get_num_procs()`` and any affinity API calls.

**Syntax:** Same as ``KMP_AFFINITY``

The following ``modifiers`` are ignored in ``KMP_HIDDEN_HELPER_AFFINITY`` and are only valid
for ``KMP_AFFINITY``:

* ``respect`` and ``norespect``
* ``reset`` and ``noreset``

KMP_ALL_THREADS
"""""""""""""""

Limits the number of simultaneously-executing threads in an OpenMP program.
If this limit is reached and another native operating system thread encounters
OpenMP API calls or constructs, then the program may abort with an error
message. If this limit is reached at the time an OpenMP parallel region begins,
a one-time warning message may be generated indicating that the number of
threads in the team was reduced, but the program will continue execution.

| **Default:** No enforced limit.
| **Related environment variable:** ``OMP_THREAD_LIMIT`` (``KMP_ALL_THREADS`` takes precedence)

KMP_BLOCKTIME
"""""""""""""

Sets the time that a thread should wait, after completing the
execution of a parallel region, before sleeping.

Use the optional suffixes ``ms`` (milliseconds) or ``us`` (microseconds) to
change the units. The default unit is milliseconds.

Specify ``infinite`` for an unlimited wait time.

| **Default:** 200 milliseconds
| **Related Environment Variable:** ``KMP_LIBRARY``
| **Example:** ``KMP_BLOCKTIME=1ms``

KMP_CPUINFO_FILE
""""""""""""""""

Specifies an alternate file name for a file containing the machine topology
description. The file must be in the same format as :file:`/proc/cpuinfo`.

**Default:** None

KMP_DETERMINISTIC_REDUCTION
"""""""""""""""""""""""""""

Enables (``true``) or disables (``false``) the use of a specific ordering of
the reduction operations for implementing the reduction clause for an OpenMP
parallel region. This has the effect that, for a given number of threads, in
a given parallel region, for a given data set and reduction operation, a
floating point reduction done for an OpenMP reduction clause has a consistent
floating point result from run to run, since round-off errors are identical.

| **Default:** ``false``
| **Example:** ``KMP_DETERMINISTIC_REDUCTION=true``

KMP_DYNAMIC_MODE
""""""""""""""""

Selects the method used to determine the number of threads to use for a parallel
region when ``OMP_DYNAMIC=true``. Possible values: (``load_balance`` | ``thread_limit``), where,

* ``load_balance``: tries to avoid using more threads than available execution units on the machine;
* ``thread_limit``: tries to avoid using more threads than total execution units on the machine.

**Default:** ``load_balance`` (on all supported platforms)
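
As an illustration of dynamic thread adjustment, the sketch below enables it
programmatically via ``omp_set_dynamic`` and requests more threads than the
machine is likely to have; with ``load_balance`` the runtime is free to grant
fewer threads, and the exact number depends on system load:

.. code-block:: c++

   #include <cstdio>
   #include <omp.h>

   // Run with, e.g.: env KMP_DYNAMIC_MODE=load_balance ./a.out
   int main() {
     omp_set_dynamic(1);       // equivalent to OMP_DYNAMIC=true
     omp_set_num_threads(256); // an intentionally large request

   #pragma omp parallel
     {
   #pragma omp single
       std::printf("the runtime granted %d threads\n", omp_get_num_threads());
     }
     return 0;
   }
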

KMP_HOT_TEAMS_MAX_LEVEL
"""""""""""""""""""""""

Sets the maximum nested level to which teams of threads will be hot.

.. note::
   A hot team is a team of threads optimized for faster reuse by subsequent
   parallel regions. In a hot team, threads are kept ready for execution of
   the next parallel region, in contrast to a cold team, which is freed
   after each parallel region, with its threads going into a common pool
   of threads.

For values of 2 and above, nested parallelism should be enabled.

**Default:** 1

KMP_HOT_TEAMS_MODE
""""""""""""""""""

Specifies the run-time behavior when the number of threads in a hot team is reduced.
Possible values:

* ``0`` - Extra threads are freed and put into a common pool of threads.
* ``1`` - Extra threads are kept in the team in reserve, for faster reuse
  in subsequent parallel regions.

**Default:** 0

KMP_HW_SUBSET
"""""""""""""

Specifies the subset of available hardware resources for the hardware topology
hierarchy. The subset is specified in terms of number of units per upper layer
unit starting from the top layer downwards. E.g. the number of sockets (top layer
units), cores per socket, and the threads per core, to use with an OpenMP
application, as an alternative to writing complicated explicit affinity settings
or a limiting process affinity mask. You can also specify an offset value to set
which resources to use. When available, you can specify attributes to select
different subsets of resources.

An extended syntax is available when ``KMP_TOPOLOGY_METHOD=hwloc``. Depending on what
resources are detected, you may be able to specify additional resources, such as
NUMA domains and groups of hardware resources that share certain cache levels.

**Basic syntax:** ``[:][num_units|*]ID[@offset][:attribute] [,[num_units|*]ID[@offset][:attribute]...]``

An optional colon (:) can be specified at the beginning of the syntax to specify
an explicit hardware subset. The default is an implicit hardware subset.

Supported unit IDs are not case-sensitive.

| ``S`` - socket
| ``num_units`` specifies the requested number of sockets.

| ``D`` - die
| ``num_units`` specifies the requested number of dies per socket.

| ``C`` - core
| ``num_units`` specifies the requested number of cores per die - if any - otherwise, per socket.

| ``T`` - thread
| ``num_units`` specifies the requested number of HW threads per core.

.. note::
   ``num_units`` can be left out or explicitly specified as ``*`` instead of a positive integer,
   meaning use all specified resources at that level.
   E.g., ``1s,*c`` means use 1 socket and all the cores on that socket.

``offset`` - (Optional) The number of units to skip.

``attribute`` - (Optional) An attribute differentiating resources at a particular level. The attributes available to users are:

* **Core type** - On Intel architectures, this can be ``intel_atom`` or ``intel_core``.
* **Core efficiency** - This is specified as ``eff``:emphasis:`num` where :emphasis:`num` is a number from 0
  to the number of core efficiencies detected in the machine topology minus one.
  E.g., ``eff0``. The greater the efficiency number the more performant the core. There may be
  more core efficiencies than core types; they can be viewed by setting ``KMP_AFFINITY=verbose``.

.. note::
   The hardware cache can be specified as a unit, e.g. ``L2`` for L2 cache,
   or ``LL`` for last level cache.

**Extended syntax when KMP_TOPOLOGY_METHOD=hwloc:**

Additional IDs can be specified if detected. For example:

| ``N`` - numa
| ``num_units`` specifies the requested number of NUMA nodes per upper layer unit, e.g. per socket.

| ``TI`` - tile
| ``num_units`` specifies the requested number of tiles to use per upper layer unit, e.g. per NUMA node.

When any numa or tile units are specified in ``KMP_HW_SUBSET`` and the hwloc
topology method is available, ``KMP_TOPOLOGY_METHOD`` will be automatically
set to hwloc, so there is no need to set it explicitly.

For an **explicit hardware subset**, if one or more topology layers detected by the
runtime are omitted from the subset, then those topology layers are ignored.
Only explicitly specified topology layers are used in the subset.

For an **implicit hardware subset**, it is implied that the socket, core, and thread
topology types should be included in the subset. Other topology layers are not
implicitly included and are ignored if they are not specified in the subset.
Because the socket, core and thread topology types are always included in
implicit hardware subsets, when they are omitted, it is assumed that all
available resources of that type should be used. Implicit hardware subsets are
the default.

If you don't specify one or more types of resource, such as socket or thread,
all available resources of that type are used.

The run-time library prints a warning, and the setting of
``KMP_HW_SUBSET`` is ignored if:

* a resource is specified, but detection of that resource is not supported
  by the chosen topology detection method, and/or
* a resource is specified twice. An exception to this condition is if attributes
  differentiate the resource.
* attributes are used when not detected in the machine topology or conflict with
  each other.

This variable does not work if ``KMP_AFFINITY=disabled``.

**Default:** If omitted, the default value is to use all the
available hardware resources.

**Implicit Hardware Subset Examples:**

* ``2s,4c,2t``: Use the first 2 sockets (s0 and s1), the first 4 cores on each
  socket (c0 - c3), and 2 threads per core.
* ``2s@2,4c@8,2t``: Skip the first 2 sockets (s0 and s1) and use 2 sockets
  (s2-s3), skip the first 8 cores (c0-c7) and use 4 cores on each socket
  (c8-c11), and use 2 threads per core.
* ``5C@1,3T``: Use all available sockets, skip the first core and use 5 cores,
  and use 3 threads per core.
* ``1T``: Use all cores on all sockets, 1 thread per core.
* ``1s, 1d, 1n, 1c, 1t``: Use 1 socket, 1 die, 1 NUMA node, 1 core, 1 thread
  - use HW thread as a result.
* ``4c:intel_atom,5c:intel_core``: Use all available sockets and use 4
  Intel Atom(R) processor cores and 5 Intel(R) Core(TM) processor cores per socket.
* ``2c:eff0@1,3c:eff1``: Use all available sockets, skip the first core with efficiency 0
  and use the next 2 cores with efficiency 0 and 3 cores with efficiency 1 per socket.
* ``1s, 1c, 1t``: Use 1 socket, 1 core, 1 thread. This may result in using a
  single thread on a 3-layer topology architecture, or multiple threads on a
  4-layer or 5-layer architecture.
  The result may even be different on the same
  architecture, depending on the ``KMP_TOPOLOGY_METHOD`` specified, as hwloc can
  often detect more topology layers than the default method used by the OpenMP
  run-time library.
* ``*c:eff1@3``: Use all available sockets, skip the first three cores of
  efficiency 1, and then use the rest of the available cores of efficiency 1.

**Explicit Hardware Subset Examples:**

* ``:2s,6t``: Use exactly the first two sockets and 6 threads per socket.
* ``:1t@7``: Skip the first 7 threads (t0-t6) and use exactly one thread (t7).
* ``:5c,1t``: Use exactly the first 5 cores (c0-c4) and the first thread on each core.

To see the result of the setting, you can specify the ``verbose`` modifier in
the ``KMP_AFFINITY`` environment variable. The OpenMP run-time library will
output to ``stderr`` the information about the discovered hardware topology
before and after the ``KMP_HW_SUBSET`` setting was applied.

KMP_INHERIT_FP_CONTROL
""""""""""""""""""""""

Enables (``true``) or disables (``false``) the copying of the floating-point
control settings of the primary thread to the floating-point control settings
of the OpenMP worker threads at the start of each parallel region.

**Default:** ``true``

KMP_LIBRARY
"""""""""""

Selects the OpenMP run-time library execution mode. The values for this variable
are ``serial``, ``turnaround``, or ``throughput``.

| **Default:** ``throughput``
| **Related environment variables:** ``KMP_BLOCKTIME`` and ``OMP_WAIT_POLICY``

KMP_SETTINGS
""""""""""""

Enables (``true``) or disables (``false``) the printing of OpenMP run-time library
environment variables during program execution. Two lists of variables are printed:
user-defined environment variable settings and effective values of variables used
by the OpenMP run-time library.

**Default:** ``false``

KMP_STACKSIZE
"""""""""""""

Sets the number of bytes to allocate for each OpenMP thread to use as its private stack.

The recommended size is ``16M``.

Use the optional suffixes to specify byte units: ``B`` (bytes), ``K`` (Kilobytes),
``M`` (Megabytes), ``G`` (Gigabytes), or ``T`` (Terabytes). If you specify a value
without a suffix, the byte unit is assumed to be ``K`` (Kilobytes).

**Related environment variable:** ``KMP_STACKSIZE`` overrides ``GOMP_STACKSIZE``, which
overrides ``OMP_STACKSIZE``.

**Default:**

* 32-bit architectures: ``2M``
* 64-bit architectures: ``4M``

KMP_TOPOLOGY_METHOD
"""""""""""""""""""

Forces OpenMP to use a particular machine topology modeling method.

Possible values are:

* ``all`` - Let OpenMP choose which topology method is most appropriate
  based on the platform and possibly other environment variable settings.
* ``cpuid_leaf31`` (x86 only) - Decodes the APIC identifiers as specified by leaf 31 of the
  cpuid instruction. The runtime will produce an error if the machine does not support leaf 31.
* ``cpuid_leaf11`` (x86 only) - Decodes the APIC identifiers as specified by leaf 11 of the
  cpuid instruction. The runtime will produce an error if the machine does not support leaf 11.
* ``cpuid_leaf4`` (x86 only) - Decodes the APIC identifiers as specified in leaf 4
  of the cpuid instruction. The runtime will produce an error if the machine does not support leaf 4.
* ``cpuinfo`` - If ``KMP_CPUINFO_FILE`` is not specified, forces OpenMP to
  parse :file:`/proc/cpuinfo` to determine the topology (Linux only).
  If ``KMP_CPUINFO_FILE`` is specified as described above, uses it (Windows or Linux).
* ``group`` - Models the machine as a 2-level map, with level 0 specifying the
  different processors in a group, and level 1 specifying the different
  groups (Windows 64-bit only).

  .. note::
     Support for ``group`` is now deprecated and will be removed in a future release. Use ``all`` instead.

* ``flat`` - Models the machine as a flat (linear) list of processors.
* ``hwloc`` - Models the machine as the Portable Hardware Locality (hwloc) library does.
  This model is the most detailed and includes, but is not limited to: numa domains,
  packages, cores, hardware threads, caches, and Windows processor groups. This method is
  only available if you have configured libomp to use hwloc during CMake configuration.

**Default:** ``all``

KMP_VERSION
"""""""""""

Enables (``true``) or disables (``false``) the printing of OpenMP run-time
library version information during program execution.

**Default:** ``false``

KMP_WARNINGS
""""""""""""

Enables (``true``) or disables (``false``) displaying warnings from the
OpenMP run-time library during program execution.

**Default:** ``true``

.. _libomptarget:

LLVM/OpenMP Target Host Runtime (``libomptarget``)
--------------------------------------------------

.. _libopenmptarget_environment_vars:

Environment Variables
^^^^^^^^^^^^^^^^^^^^^

``libomptarget`` uses environment variables to control different features of the
library at runtime. This allows the user to obtain useful runtime information as
well as enable or disable certain features. A full list of supported environment
variables is defined below.

  * ``LIBOMPTARGET_DEBUG=<Num>``
  * ``LIBOMPTARGET_PROFILE=<Filename>``
  * ``LIBOMPTARGET_PROFILE_GRANULARITY=<Num> (default 500, in us)``
  * ``LIBOMPTARGET_MEMORY_MANAGER_THRESHOLD=<Num>``
  * ``LIBOMPTARGET_INFO=<Num>``
  * ``LIBOMPTARGET_HEAP_SIZE=<Num>``
  * ``LIBOMPTARGET_STACK_SIZE=<Num>``
  * ``LIBOMPTARGET_SHARED_MEMORY_SIZE=<Num>``
  * ``LIBOMPTARGET_MAP_FORCE_ATOMIC=[TRUE/FALSE] (default TRUE)``
  * ``LIBOMPTARGET_JIT_OPT_LEVEL={0,1,2,3} (default 3)``
  * ``LIBOMPTARGET_JIT_SKIP_OPT=[TRUE/FALSE] (default FALSE)``
  * ``LIBOMPTARGET_JIT_REPLACEMENT_OBJECT=<in:Filename> (object file)``
  * ``LIBOMPTARGET_JIT_REPLACEMENT_MODULE=<in:Filename> (LLVM-IR file)``
  * ``LIBOMPTARGET_JIT_PRE_OPT_IR_MODULE=<out:Filename> (LLVM-IR file)``
  * ``LIBOMPTARGET_JIT_POST_OPT_IR_MODULE=<out:Filename> (LLVM-IR file)``
  * ``LIBOMPTARGET_MIN_THREADS_FOR_LOW_TRIP_COUNT=<Num> (default: 32)``
  * ``LIBOMPTARGET_REUSE_BLOCKS_FOR_HIGH_TRIP_COUNT=[TRUE/FALSE] (default TRUE)``
  * ``OFFLOAD_TRACK_ALLOCATION_TRACES=[TRUE/FALSE] (default FALSE)``
  * ``OFFLOAD_TRACK_NUM_KERNEL_LAUNCH_TRACES=<Num> (default 0)``

LIBOMPTARGET_DEBUG
""""""""""""""""""

``LIBOMPTARGET_DEBUG`` controls whether or not debugging information will be
displayed. This feature is only available if ``libomptarget`` was built with
``-DOMPTARGET_DEBUG``. The debugging output provided is intended for use by
``libomptarget`` developers. More user-friendly output is presented when using
``LIBOMPTARGET_INFO``.
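
A trivial offloading program is enough to exercise this output. The following
sketch, compiled with, e.g., ``-fopenmp -fopenmp-targets=nvptx64``, can be run
under ``LIBOMPTARGET_DEBUG=1``, assuming ``libomptarget`` was built with
``-DOMPTARGET_DEBUG``:

.. code-block:: c++

   #include <cstdio>

   // Run with, e.g.: env LIBOMPTARGET_DEBUG=1 ./offload
   int main() {
     int x = 0;
   #pragma omp target map(tofrom : x)
     { x = 42; }
     std::printf("x = %d\n", x);
     return 0;
   }
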

LIBOMPTARGET_PROFILE
""""""""""""""""""""

``LIBOMPTARGET_PROFILE`` allows ``libomptarget`` to generate time profile output
similar to Clang's ``-ftime-trace`` option. This generates a JSON file based on
`Chrome Tracing`_ that can be viewed with ``chrome://tracing`` or the
`Speedscope App`_. The output will be saved to the filename specified by the
environment variable. For multi-threaded applications, profiling in ``libomp``
is also needed. Set the CMake option ``OPENMP_ENABLE_LIBOMP_PROFILING=ON``
to enable this feature. It depends on the `LLVM Support Library`_
for time trace output. Note that this will turn ``libomp`` into a C++ library.

.. _`Chrome Tracing`: https://www.chromium.org/developers/how-tos/trace-event-profiling-tool

.. _`Speedscope App`: https://www.speedscope.app/

.. _`LLVM Support Library`: https://llvm.org/docs/SupportLibrary.html

LIBOMPTARGET_PROFILE_GRANULARITY
""""""""""""""""""""""""""""""""

``LIBOMPTARGET_PROFILE_GRANULARITY`` allows changing the time profile
granularity, measured in ``us``. The default is 500 (``us``).

LIBOMPTARGET_MEMORY_MANAGER_THRESHOLD
"""""""""""""""""""""""""""""""""""""

``LIBOMPTARGET_MEMORY_MANAGER_THRESHOLD`` sets the threshold size for which the
``libomptarget`` memory manager will handle the allocation. Any allocations
larger than this threshold will not use the memory manager and be freed after
the device kernel exits. The default threshold value is ``8KB``. If
``LIBOMPTARGET_MEMORY_MANAGER_THRESHOLD`` is set to ``0`` the memory manager
will be completely disabled.

.. _libomptarget_info:

LIBOMPTARGET_INFO
"""""""""""""""""

``LIBOMPTARGET_INFO`` allows the user to request different types of runtime
information from ``libomptarget``. ``LIBOMPTARGET_INFO`` uses a 32-bit field to
enable or disable different types of information. This includes information
about data-mappings and kernel execution. It is recommended to build your
application with debugging information enabled; this will enable filenames and
variable declarations in the information messages. OpenMP debugging information
is enabled at any level of debugging so a full debug runtime is not required.
For minimal debugging information compile with ``-gline-tables-only``, or compile
with ``-g`` for full debug information. A full list of flags supported by
``LIBOMPTARGET_INFO`` is given below.

  * Print all data arguments upon entering an OpenMP device kernel: ``0x01``
  * Indicate when a mapped address already exists in the device mapping table:
    ``0x02``
  * Dump the contents of the device pointer map at kernel exit: ``0x04``
  * Indicate when an entry is changed in the device mapping table: ``0x08``
  * Print OpenMP kernel information from device plugins: ``0x10``
  * Indicate when data is copied to and from the device: ``0x20``

Any combination of these flags can be used by setting the appropriate bits. For
example, to enable printing all data active in an OpenMP target region along
with ``CUDA`` information, run the following ``bash`` command.

.. code-block:: console

   $ env LIBOMPTARGET_INFO=$((0x1 | 0x10)) ./your-application

Or, to enable every flag, run with every bit set.

.. code-block:: console

   $ env LIBOMPTARGET_INFO=-1 ./your-application

For example, given a small application implementing the ``ZAXPY`` BLAS routine,
``libomptarget`` can provide useful information about data mappings and thread
usage.

.. code-block:: c++

    #include <complex>

    using complex = std::complex<double>;

    void zaxpy(complex *X, complex *Y, complex D, std::size_t N) {
    #pragma omp target teams distribute parallel for
      for (std::size_t i = 0; i < N; ++i)
        Y[i] = D * X[i] + Y[i];
    }

    int main() {
      const std::size_t N = 1024;
      complex X[N], Y[N], D;
    #pragma omp target data map(to:X[0 : N]) map(tofrom:Y[0 : N])
      zaxpy(X, Y, D, N);
    }

Compiling this code targeting ``nvptx64`` with all information enabled will
provide the following output from the runtime library.

.. code-block:: console

    $ clang++ -fopenmp -fopenmp-targets=nvptx64 -O3 -gline-tables-only zaxpy.cpp -o zaxpy
    $ env LIBOMPTARGET_INFO=-1 ./zaxpy

.. code-block:: text

    Info: Entering OpenMP data region at zaxpy.cpp:14:1 with 2 arguments:
    Info: to(X[0:N])[16384]
    Info: tofrom(Y[0:N])[16384]
    Info: Creating new map entry with HstPtrBegin=0x00007fff0d259a40,
          TgtPtrBegin=0x00007fdba5800000, Size=16384, RefCount=1, Name=X[0:N]
    Info: Copying data from host to device, HstPtr=0x00007fff0d259a40,
          TgtPtr=0x00007fdba5800000, Size=16384, Name=X[0:N]
    Info: Creating new map entry with HstPtrBegin=0x00007fff0d255a40,
          TgtPtrBegin=0x00007fdba5804000, Size=16384, RefCount=1, Name=Y[0:N]
    Info: Copying data from host to device, HstPtr=0x00007fff0d255a40,
          TgtPtr=0x00007fdba5804000, Size=16384, Name=Y[0:N]
    Info: OpenMP Host-Device pointer mappings after block at zaxpy.cpp:14:1:
    Info: Host Ptr           Target Ptr         Size (B) RefCount Declaration
    Info: 0x00007fff0d255a40 0x00007fdba5804000 16384    1        Y[0:N] at zaxpy.cpp:13:17
    Info: 0x00007fff0d259a40 0x00007fdba5800000 16384    1        X[0:N] at zaxpy.cpp:13:11
    Info: Entering OpenMP kernel at zaxpy.cpp:6:1 with 4 arguments:
    Info: firstprivate(N)[8] (implicit)
    Info: use_address(Y)[0] (implicit)
    Info: tofrom(D)[16] (implicit)
    Info: use_address(X)[0] (implicit)
    Info: Mapping exists (implicit) with HstPtrBegin=0x00007fff0d255a40,
          TgtPtrBegin=0x00007fdba5804000, Size=0, RefCount=2 (incremented), Name=Y
    Info: Creating new map entry with HstPtrBegin=0x00007fff0d2559f0,
          TgtPtrBegin=0x00007fdba5808000, Size=16, RefCount=1, Name=D
    Info: Copying data from host to device, HstPtr=0x00007fff0d2559f0,
          TgtPtr=0x00007fdba5808000, Size=16, Name=D
    Info: Mapping exists (implicit) with HstPtrBegin=0x00007fff0d259a40,
          TgtPtrBegin=0x00007fdba5800000, Size=0, RefCount=2 (incremented), Name=X
    Info: Mapping exists with HstPtrBegin=0x00007fff0d255a40,
          TgtPtrBegin=0x00007fdba5804000, Size=0, RefCount=2 (update suppressed)
    Info: Mapping exists with HstPtrBegin=0x00007fff0d2559f0,
          TgtPtrBegin=0x00007fdba5808000, Size=16, RefCount=1 (update suppressed)
    Info: Mapping exists with HstPtrBegin=0x00007fff0d259a40,
          TgtPtrBegin=0x00007fdba5800000, Size=0, RefCount=2 (update suppressed)
    Info: Launching kernel __omp_offloading_10305_c08c86__Z5zaxpyPSt7complexIdES1_S0_m_l6
          with 8 blocks and 128 threads in SPMD mode
    Info: Mapping exists with HstPtrBegin=0x00007fff0d259a40,
          TgtPtrBegin=0x00007fdba5800000, Size=0, RefCount=1 (decremented)
    Info: Mapping exists with HstPtrBegin=0x00007fff0d2559f0,
          TgtPtrBegin=0x00007fdba5808000, Size=16, RefCount=1 (deferred final decrement)
    Info: Copying data from device to host, TgtPtr=0x00007fdba5808000,
          HstPtr=0x00007fff0d2559f0, Size=16, Name=D
    Info: Mapping exists with HstPtrBegin=0x00007fff0d255a40,
          TgtPtrBegin=0x00007fdba5804000, Size=0, RefCount=1 (decremented)
    Info: Removing map entry with HstPtrBegin=0x00007fff0d2559f0,
          TgtPtrBegin=0x00007fdba5808000, Size=16, Name=D
    Info: OpenMP Host-Device pointer mappings after block at zaxpy.cpp:6:1:
    Info: Host Ptr           Target Ptr         Size (B) RefCount Declaration
    Info: 0x00007fff0d255a40 0x00007fdba5804000 16384    1        Y[0:N] at zaxpy.cpp:13:17
    Info: 0x00007fff0d259a40 0x00007fdba5800000 16384    1        X[0:N] at zaxpy.cpp:13:11
    Info: Exiting OpenMP data region at zaxpy.cpp:14:1 with 2 arguments:
    Info: to(X[0:N])[16384]
    Info: tofrom(Y[0:N])[16384]
    Info: Mapping exists with HstPtrBegin=0x00007fff0d255a40,
          TgtPtrBegin=0x00007fdba5804000, Size=16384, RefCount=1 (deferred final decrement)
    Info: Copying data from device to host, TgtPtr=0x00007fdba5804000,
          HstPtr=0x00007fff0d255a40, Size=16384, Name=Y[0:N]
    Info: Mapping exists with HstPtrBegin=0x00007fff0d259a40,
          TgtPtrBegin=0x00007fdba5800000, Size=16384, RefCount=1 (deferred final decrement)
    Info: Removing map entry with HstPtrBegin=0x00007fff0d255a40,
          TgtPtrBegin=0x00007fdba5804000, Size=16384, Name=Y[0:N]
    Info: Removing map entry with HstPtrBegin=0x00007fff0d259a40,
          TgtPtrBegin=0x00007fdba5800000, Size=16384, Name=X[0:N]

From this information, we can see the OpenMP kernel being launched on the CUDA
device with enough threads and blocks for all ``1024`` iterations of the loop in
simplified :doc:`SPMD Mode <Offloading>`. The information from the OpenMP data
region shows the two arrays ``X`` and ``Y`` being copied from the host to the
device. This creates an entry in the host-device mapping table associating the
host pointers to the newly created device data. The data mappings in the OpenMP
device kernel show the default mappings being used for all the variables used
implicitly on the device. Because ``X`` and ``Y`` are already mapped in the
device's table, no new entries are created. Additionally, the default mapping
shows that ``D`` will be copied back from the device once the OpenMP device
kernel region ends even though it isn't written to. Finally, at the end of the
OpenMP data region the entries for ``X`` and ``Y`` are removed from the table.

The information level can be controlled at runtime using an internal
libomptarget library call ``__tgt_set_info_flag``. This allows for different
levels of information to be enabled or disabled for certain regions of code.
Using this requires declaring the function signature as an external function so
it can be linked with the runtime library.

.. code-block:: c++

    #include <cstdint>

    extern "C" void __tgt_set_info_flag(uint32_t);

    extern void foo();

    int main() {
      __tgt_set_info_flag(0x10);
    #pragma omp target
      foo();
    }

.. _libopenmptarget_errors:

Errors
^^^^^^

``libomptarget`` provides error messages when the program fails inside the
OpenMP target region. Common causes of failure could be an invalid pointer
access, running out of device memory, or trying to offload when the device is
busy.
If the application was built with debugging symbols, the error messages
will additionally provide the source location of the OpenMP target region.

For example, consider the following code that implements a simple parallel
reduction on the GPU. This code has a bug that causes it to fail in the
offloading region.

.. code-block:: c++

    #include <cstdio>

    double sum(double *A, std::size_t N) {
      double sum = 0.0;
    #pragma omp target teams distribute parallel for reduction(+:sum)
      for (int i = 0; i < N; ++i)
        sum += A[i];

      return sum;
    }

    int main() {
      const int N = 1024;
      double A[N];
      sum(A, N);
    }

If this code is compiled and run, there will be an error message indicating what
is going wrong.

.. code-block:: console

    $ clang++ -fopenmp -fopenmp-targets=nvptx64 -O3 -gline-tables-only sum.cpp -o sum
    $ ./sum

.. code-block:: text

    CUDA error: an illegal memory access was encountered
    Libomptarget error: Copying data from device failed.
    Libomptarget error: Call to targetDataEnd failed, abort target.
    Libomptarget error: Failed to process data after launching the kernel.
    Libomptarget error: Consult https://openmp.llvm.org/design/Runtimes.html for debugging options.
    sum.cpp:5:1: Libomptarget error 1: failure of target construct while offloading is mandatory

This shows that there is an illegal memory access occurring inside the OpenMP
target region once execution has moved to the CUDA device, suggesting a
segmentation fault. This then causes a chain reaction of failures in
``libomptarget``. Another message suggests using the ``LIBOMPTARGET_INFO``
environment variable as described in :ref:`libopenmptarget_environment_vars`. If
we do this, it will print the state of the host-target pointer mappings at the
time of failure.

.. code-block:: console

    $ clang++ -fopenmp -fopenmp-targets=nvptx64 -O3 -gline-tables-only sum.cpp -o sum
    $ env LIBOMPTARGET_INFO=4 ./sum

.. code-block:: text

    info: OpenMP Host-Device pointer mappings after block at sum.cpp:5:1:
    info: Host Ptr           Target Ptr         Size (B) RefCount Declaration
    info: 0x00007ffc058280f8 0x00007f4186600000 8        1        sum at sum.cpp:4:10

This tells us that the only data mapped between the host and the device is the
``sum`` variable that will be copied back from the device once the reduction has
ended. There is no entry mapping the host array ``A`` to the device. In this
situation, the compiler cannot determine the size of the array at compile time
so it will simply assume that the pointer is mapped on the device already by
default. The solution is to add an explicit map clause in the target region.

.. code-block:: c++

    double sum(double *A, std::size_t N) {
      double sum = 0.0;
    #pragma omp target teams distribute parallel for reduction(+:sum) map(to:A[0 : N])
      for (int i = 0; i < N; ++i)
        sum += A[i];

      return sum;
    }

LIBOMPTARGET_STACK_SIZE
"""""""""""""""""""""""

This environment variable sets the stack size in bytes for the AMDGPU and CUDA
plugins. This can be used to increase or decrease the standard amount of memory
reserved for each thread's stack.
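
As a sketch of when this matters, a device function that recurses (or uses large
local buffers) consumes per-thread stack space on the device; the value ``8192``
below is only an illustration, not a recommendation:

.. code-block:: c++

   #include <cstdio>

   // Deep recursion on the device consumes per-thread stack space.
   #pragma omp declare target
   long factorial(long n) { return n <= 1 ? 1 : n * factorial(n - 1); }
   #pragma omp end declare target

   // Run with, e.g.: env LIBOMPTARGET_STACK_SIZE=8192 ./a.out
   int main() {
     long result = 0;
   #pragma omp target map(from : result)
     result = factorial(20);
     std::printf("20! = %ld\n", result);
     return 0;
   }
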

LIBOMPTARGET_HEAP_SIZE
""""""""""""""""""""""

This environment variable sets the amount of memory in bytes that can be
allocated using ``malloc`` and ``free`` for the CUDA plugin. This is necessary
for some applications that allocate too much memory either through the user or
globalization.

LIBOMPTARGET_SHARED_MEMORY_SIZE
"""""""""""""""""""""""""""""""

This environment variable sets the amount of dynamic shared memory in bytes used
by the kernel once it is launched. A pointer to the dynamic memory buffer can be
accessed using the ``llvm_omp_target_dynamic_shared_alloc`` function. An example
is shown in :ref:`libomptarget_dynamic_shared`.

.. toctree::
   :hidden:
   :maxdepth: 1

   Offloading


LIBOMPTARGET_MAP_FORCE_ATOMIC
"""""""""""""""""""""""""""""

The OpenMP standard guarantees that map clauses are atomic. However, this
can have a drastic performance impact. Users that do not require atomic map
clauses can disable them to potentially recover lost performance. As a
consequence, users have to guarantee themselves that no two map clauses will
concurrently map the same memory. If the memory is already mapped and the
map clauses will only modify the reference counter from a non-zero count to
another non-zero count, concurrent map clauses are supported regardless of
this option. To disable forced atomic map clauses, use "false"/"FALSE" as the
value of the ``LIBOMPTARGET_MAP_FORCE_ATOMIC`` environment variable.
The default behavior of LLVM 14 is to force atomic map clauses; prior versions
of LLVM did not.

.. _libomptarget_jit_opt_level:

LIBOMPTARGET_JIT_OPT_LEVEL
""""""""""""""""""""""""""

This environment variable can be used to change the optimization pipeline used
to optimize the embedded device code as part of the device JIT. The value
corresponds to the ``-O{0,1,2,3}`` command line argument passed to ``clang``.

LIBOMPTARGET_JIT_SKIP_OPT
"""""""""""""""""""""""""

This environment variable can be used to skip the optimization pipeline during
JIT compilation. If set, the image will only be passed through the backend. The
backend is invoked with the ``LIBOMPTARGET_JIT_OPT_LEVEL`` flag.

LIBOMPTARGET_JIT_REPLACEMENT_OBJECT
"""""""""""""""""""""""""""""""""""

This environment variable can be used to replace the embedded device code
before the device JIT finishes compilation for the target. The value is
expected to be a filename to an object file, thus containing the output of the
assembler in object format for the respective target. The JIT optimization
pipeline and backend are skipped and only target specific post-processing is
performed on the object file before it is loaded onto the device.

.. _libomptarget_jit_replacement_module:

LIBOMPTARGET_JIT_REPLACEMENT_MODULE
"""""""""""""""""""""""""""""""""""

This environment variable can be used to replace the embedded device code
before the device JIT finishes compilation for the target. The value is
expected to be a filename to an LLVM-IR file, thus containing an LLVM-IR module
for the respective target. To obtain a device code image compatible with the
embedded one it is recommended to extract the embedded one either before or
after IR optimization.
This can be done at compile time, after
compile time via LLVM tools (``llvm-objdump``), or, simply, by setting the
:ref:`LIBOMPTARGET_JIT_PRE_OPT_IR_MODULE` or
:ref:`LIBOMPTARGET_JIT_POST_OPT_IR_MODULE` environment variables.

.. _libomptarget_jit_pre_opt_ir_module:

LIBOMPTARGET_JIT_PRE_OPT_IR_MODULE
""""""""""""""""""""""""""""""""""

This environment variable can be used to extract the embedded device code
before the device JIT runs additional IR optimizations on it (see
:ref:`LIBOMPTARGET_JIT_OPT_LEVEL`). The value is expected to be a filename into
which the LLVM-IR module is written. The module can then be analyzed,
transformed, and loaded back into the JIT pipeline via
:ref:`LIBOMPTARGET_JIT_REPLACEMENT_MODULE`.

.. _libomptarget_jit_post_opt_ir_module:

LIBOMPTARGET_JIT_POST_OPT_IR_MODULE
"""""""""""""""""""""""""""""""""""

This environment variable can be used to extract the embedded device code after
the device JIT runs additional IR optimizations on it (see
:ref:`LIBOMPTARGET_JIT_OPT_LEVEL`). The value is expected to be a filename into
which the LLVM-IR module is written. The module can then be analyzed,
transformed, and loaded back into the JIT pipeline via
:ref:`LIBOMPTARGET_JIT_REPLACEMENT_MODULE`.


LIBOMPTARGET_MIN_THREADS_FOR_LOW_TRIP_COUNT
"""""""""""""""""""""""""""""""""""""""""""

This environment variable defines a lower bound for the number of threads if a
combined kernel, e.g., ``target teams distribute parallel for``, has insufficient
parallelism. In particular, if the trip count of the loop is lower than the
maximum possible number of threads times the number of teams (aka. blocks) the
device prefers (see also :ref:`LIBOMPTARGET_AMDGPU_TEAMS_PER_CU`), we will
reduce the thread count to increase outer (team/block) parallelism. The thread
count will never be reduced below the value passed for this environment variable
though.

LIBOMPTARGET_REUSE_BLOCKS_FOR_HIGH_TRIP_COUNT
"""""""""""""""""""""""""""""""""""""""""""""

This environment variable can be used to control how the OpenMP runtime assigns
blocks to loops with high trip counts. By default we reuse existing blocks
rather than spawning new blocks.

OFFLOAD_TRACK_ALLOCATION_TRACES
"""""""""""""""""""""""""""""""

This environment variable determines if the stack traces of allocations and
deallocations are tracked to aid in error reporting, e.g., in case of a
double-free.

OFFLOAD_TRACK_NUM_KERNEL_LAUNCH_TRACES
""""""""""""""""""""""""""""""""""""""

This environment variable determines how many stack traces of kernel launches
are tracked to aid in error reporting, e.g., to identify which asynchronous
kernel failed.

.. _libomptarget_plugin:

LLVM/OpenMP Target Host Runtime Plugins (``libomptarget.rtl.XXXX``)
-------------------------------------------------------------------

The LLVM/OpenMP target host runtime plugins were recently re-implemented,
temporarily renamed as the NextGen plugins, and set as the default and only
plugins' implementation. Currently, these plugins have support for the NVIDIA
and AMDGPU devices as well as the GenericELF64bit host-simulated device.

The source code of the common infrastructure and the vendor-specific plugins is
in the ``openmp/libomptarget/nextgen-plugins`` directory in the LLVM project
repository.
The plugin infrastructure aims at unifying the plugin code and logic
into a generic interface using object-oriented C++. There is a plugin interface
composed of multiple generic C++ classes which implement the common logic that
every vendor-specific plugin should provide. In turn, the specific plugins
inherit from those generic classes and implement the required functions that
depend on the specific vendor API. As an example, some generic classes that the
plugin interface defines are for representing a device, a device image, an
efficient resource manager, etc.

With this common plugin infrastructure, several tasks have been simplified:
adding a new vendor-specific plugin, adding generic features or optimizations
to all plugins, debugging plugins, etc.

Environment Variables
^^^^^^^^^^^^^^^^^^^^^

There are several environment variables to change the behavior of the plugins:

* ``LIBOMPTARGET_SHARED_MEMORY_SIZE``
* ``LIBOMPTARGET_STACK_SIZE``
* ``LIBOMPTARGET_HEAP_SIZE``
* ``LIBOMPTARGET_NUM_INITIAL_STREAMS``
* ``LIBOMPTARGET_NUM_INITIAL_EVENTS``
* ``LIBOMPTARGET_LOCK_MAPPED_HOST_BUFFERS``
* ``LIBOMPTARGET_AMDGPU_NUM_HSA_QUEUES``
* ``LIBOMPTARGET_AMDGPU_HSA_QUEUE_SIZE``
* ``LIBOMPTARGET_AMDGPU_HSA_QUEUE_BUSY_TRACKING``
* ``LIBOMPTARGET_AMDGPU_TEAMS_PER_CU``
* ``LIBOMPTARGET_AMDGPU_MAX_ASYNC_COPY_BYTES``
* ``LIBOMPTARGET_AMDGPU_NUM_INITIAL_HSA_SIGNALS``
* ``LIBOMPTARGET_AMDGPU_STREAM_BUSYWAIT``

The environment variables ``LIBOMPTARGET_SHARED_MEMORY_SIZE``,
``LIBOMPTARGET_STACK_SIZE`` and ``LIBOMPTARGET_HEAP_SIZE`` are described in
:ref:`libopenmptarget_environment_vars`.

LIBOMPTARGET_NUM_INITIAL_STREAMS
""""""""""""""""""""""""""""""""

This environment variable sets the number of pre-created streams in the plugin
(if supported) at initialization. More streams will be created dynamically
throughout the execution if needed. A stream is a queue of asynchronous
operations (e.g., kernel launches and memory copies) that are executed
sequentially. Parallelism is achieved by featuring multiple streams.
``libomptarget`` leverages streams to exploit parallelism between plugin
operations. The default value is ``1``; more streams are created as needed.

LIBOMPTARGET_NUM_INITIAL_EVENTS
"""""""""""""""""""""""""""""""

This environment variable sets the number of pre-created events in the
plugin (if supported) at initialization. More events will be created
dynamically throughout the execution if needed. An event is used to synchronize
a stream with another efficiently. The default value is ``1``; more events are
created as needed.

LIBOMPTARGET_LOCK_MAPPED_HOST_BUFFERS
"""""""""""""""""""""""""""""""""""""

This environment variable indicates whether the host buffers mapped by the user
should be automatically locked/pinned by the plugin. Pinned host buffers allow
true asynchronous copies between the host and devices. Enabling this feature can
increase the performance of applications that are intensive in host-device
memory transfers. The default value is ``false``.

LIBOMPTARGET_AMDGPU_NUM_HSA_QUEUES
""""""""""""""""""""""""""""""""""

This environment variable controls the number of HSA queues per device in the
AMDGPU plugin.
LIBOMPTARGET_AMDGPU_NUM_HSA_QUEUES
""""""""""""""""""""""""""""""""""

This environment variable controls the number of HSA queues per device in the
AMDGPU plugin. An HSA queue is a runtime-allocated resource that contains an
AQL (Architected Queuing Language) packet buffer and is associated with an AQL
packet processor. HSA queues are used to insert the packets that launch kernel
executions. A high number of HSA queues may degrade performance. The default
value is ``4``.

LIBOMPTARGET_AMDGPU_HSA_QUEUE_SIZE
""""""""""""""""""""""""""""""""""

This environment variable controls the size of each HSA queue in the AMDGPU
plugin. The size is the number of AQL packets an HSA queue is expected to hold.
It is also the number of AQL packets that can be pushed into each queue without
waiting for the driver to process them. The default value is ``512``.

LIBOMPTARGET_AMDGPU_HSA_QUEUE_BUSY_TRACKING
"""""""""""""""""""""""""""""""""""""""""""

This environment variable controls whether idle HSA queues are preferentially
assigned to streams, for example when they are requested for a kernel launch.
If all queues are considered busy, a new queue is initialized and returned
until the configured maximum is reached; after that, the least utilized queue
is selected. If this is disabled, a new HSA queue is initialized each time a
stream is requested, regardless of utilization, and queues are selected in a
round-robin fashion. The default value is ``true``.

.. _libomptarget_amdgpu_teams_per_cu:

LIBOMPTARGET_AMDGPU_TEAMS_PER_CU
""""""""""""""""""""""""""""""""

This environment variable controls the default number of teams relative to the
number of compute units (CUs) of the AMDGPU device. The default number of teams
is ``#default_teams = #teams_per_CU * #CUs``. The default value of teams per CU
is ``4``.

LIBOMPTARGET_AMDGPU_MAX_ASYNC_COPY_BYTES
""""""""""""""""""""""""""""""""""""""""

This environment variable specifies the maximum transfer size, in bytes, up to
which memory copies are issued as asynchronous operations in the AMDGPU plugin.
Up to this size, memory copies are asynchronous operations pushed to the
corresponding stream; larger transfers are performed synchronously. Memory
copies involving already locked/pinned host buffers are always asynchronous.
The default value is ``1*1024*1024`` bytes (1 MB).

LIBOMPTARGET_AMDGPU_NUM_INITIAL_HSA_SIGNALS
"""""""""""""""""""""""""""""""""""""""""""

This environment variable controls the initial number of HSA signals per device
in the AMDGPU plugin. There is one signal resource manager per device managing
several pre-created signals. These signals are mainly used by AMDGPU streams.
More HSA signals are created dynamically throughout the execution if needed.
The default value is ``64``.

LIBOMPTARGET_AMDGPU_STREAM_BUSYWAIT
"""""""""""""""""""""""""""""""""""

This environment variable controls the timeout hint, in microseconds, for the
HSA wait state within the AMDGPU plugin. For the duration of this value the HSA
runtime may busy wait, which can reduce overall latency. The default value is
``2000000``.
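As an illustrative sketch only (the appropriate values depend heavily on the
workload and the specific GPU, and ``app`` is a placeholder), several of the
AMDGPU knobs above can be combined for a single run:

.. code-block:: console

   $ env LIBOMPTARGET_AMDGPU_TEAMS_PER_CU=8 \
         LIBOMPTARGET_AMDGPU_NUM_HSA_QUEUES=8 \
         LIBOMPTARGET_AMDGPU_MAX_ASYNC_COPY_BYTES=2097152 ./app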
.. _remote_offloading_plugin:

Remote Offloading Plugin
^^^^^^^^^^^^^^^^^^^^^^^^

The remote offloading plugin permits the execution of OpenMP target regions on
devices in remote hosts in addition to the devices connected to the local host.
All target devices on the remote host will be exposed to the application as if
they were local devices, that is, the remote host CPU or its GPUs can be
offloaded to with the appropriate device number. If the server is running on
the same host, each device may be identified twice: once through the local
device plugins and once through the device plugins that the server application
has access to.

This plugin consists of ``libomptarget.rtl.rpc.so`` and
``openmp-offloading-server``, which should be running on the (remote) host. The
server application does not have to run on a remote host; it can instead be
used on the same host in order to debug memory mapping during offloading. These
are implemented via gRPC/protobuf, so those libraries are required to build and
use this plugin. The server must also have access to the necessary
target-specific plugins in order to perform the offloading.

Due to the experimental nature of this plugin, the CMake variable
``LIBOMPTARGET_ENABLE_EXPERIMENTAL_REMOTE_PLUGIN`` must be set in order to
build it. In particular, the RPC plugin is not designed to be thread-safe: the
server cannot handle offloading from multiple applications concurrently (it is
synchronous) and will terminate after a single execution. Note that
``openmp-offloading-server`` is itself unable to offload onto a further remote
host and will error out if this is attempted.

Remote offloading is configured via environment variables at runtime of the
OpenMP application:

* ``LIBOMPTARGET_RPC_ADDRESS=<Address>:<Port>``
* ``LIBOMPTARGET_RPC_ALLOCATOR_MAX=<NumBytes>``
* ``LIBOMPTARGET_BLOCK_SIZE=<NumBytes>``
* ``LIBOMPTARGET_RPC_LATENCY=<Seconds>``

LIBOMPTARGET_RPC_ADDRESS
""""""""""""""""""""""""

The address and port at which the server is running. This needs to be set for
both the server and the application; the default is ``0.0.0.0:50051``. A single
OpenMP executable can offload onto multiple remote hosts by setting this to a
comma-separated list of addresses.

LIBOMPTARGET_RPC_ALLOCATOR_MAX
""""""""""""""""""""""""""""""

Once this many bytes have been allocated, the protobuf allocator is cleared.
This can be set for both endpoints.

LIBOMPTARGET_BLOCK_SIZE
"""""""""""""""""""""""

The maximum size of a single message while streaming data transfers between the
two endpoints. This can be set for both endpoints.

LIBOMPTARGET_RPC_LATENCY
""""""""""""""""""""""""

The maximum amount of time the client will wait for a response from the server.
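A minimal sketch of a remote setup, assuming ``remote-host`` resolves to the
machine running ``openmp-offloading-server``, port ``50051`` is reachable, and
``app`` is a placeholder for the OpenMP application:

.. code-block:: console

   # On the remote host:
   $ env LIBOMPTARGET_RPC_ADDRESS=0.0.0.0:50051 openmp-offloading-server

   # On the local host:
   $ env LIBOMPTARGET_RPC_ADDRESS=remote-host:50051 ./app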
.. _libomptarget_libc:

LLVM/OpenMP support for C library routines
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Support for calling standard C library routines on GPU targets is provided by
the `LLVM C Library <https://libc.llvm.org/gpu/>`_. This project provides two
static libraries, ``libcgpu.a`` and ``libllvmlibc_rpc_server.a``, which are used
by the OpenMP runtime to provide ``libc`` support. The ``libcgpu.a`` library
contains the GPU device code, while ``libllvmlibc_rpc_server.a`` provides the
interface to the RPC server. More information on the RPC mechanism can be found
in the `associated documentation <https://libc.llvm.org/gpu/rpc.html>`_.

To provide host services, we run an RPC server inside of the runtime. This
allows the host to respond asynchronously to requests made from the GPU. For
``libc`` calls that require an RPC server, such as printing, an external handle
to the RPC client running on the GPU will be present in the GPU executable. If
we find this symbol, we will initialize a client and server and run them in the
background while the kernel is executing.

For example, consider the following simple OpenMP offloading code. Here we
simply print a string to the user from the GPU.

.. code-block:: c++

   #include <stdio.h>

   int main() {
   #pragma omp target
     { fputs("Hello World!\n", stderr); }
   }

We can compile this using the ``libcgpu.a`` library to resolve the symbols.
Because this function requires RPC support, it will also pull an externally
visible symbol called ``__llvm_libc_rpc_client`` into the device image. When
loading the device image, the runtime will check for this symbol and initialize
an RPC interface if it is found. The following example shows the RPC server
being used.

.. code-block:: console

   $ clang++ hello.c -fopenmp --offload-arch=gfx90a -lcgpu
   $ env LIBOMPTARGET_DEBUG=1 ./a.out
   PluginInterface --> Running an RPC server on device 0
   ...
   Hello World!

.. _libomptarget_device:

LLVM/OpenMP Target Device Runtime (``libomptarget-ARCH-SUBARCH.bc``)
--------------------------------------------------------------------

The target device runtime is an LLVM bitcode library that implements OpenMP
runtime functions on the target device. It is linked with the device code's
LLVM IR during compilation.

.. _libomptarget_dynamic_shared:

Dynamic Shared Memory
^^^^^^^^^^^^^^^^^^^^^

The target device runtime contains a pointer to the dynamic shared memory
buffer. This pointer can be obtained using the
``llvm_omp_target_dynamic_shared_alloc`` extension. If this function is called
from the host, it simply returns a null pointer. In order to use this buffer,
the kernel must be launched with an adequate amount of dynamic shared memory
allocated. This can be done using the ``LIBOMPTARGET_SHARED_MEMORY_SIZE``
environment variable or the ``ompx_dyn_cgroup_mem(<N>)`` target directive
clause. Examples for both are given below.

.. code-block:: c++

  void foo() {
    int x;
  #pragma omp target parallel map(from : x)
    {
      int *buf = llvm_omp_target_dynamic_shared_alloc();
      if (omp_get_thread_num() == 0)
        *buf = 1;
  #pragma omp barrier
      if (omp_get_thread_num() == 1)
        x = *buf;
    }
    assert(x == 1);
  }

.. code-block:: console

  $ clang++ -fopenmp --offload-arch=sm_80 -O3 shared.c
  $ env LIBOMPTARGET_SHARED_MEMORY_SIZE=256 ./shared

.. code-block:: c++

  void foo(int N) {
    int x;
  #pragma omp target parallel map(from : x) ompx_dyn_cgroup_mem(N * sizeof(int))
    {
      int *buf = llvm_omp_target_dynamic_shared_alloc();
      if (omp_get_thread_num() == 0)
        buf[N - 1] = 1;
  #pragma omp barrier
      if (omp_get_thread_num() == 1)
        x = buf[N - 1];
    }
    assert(x == 1);
  }

.. code-block:: console

  $ clang++ -fopenmp --offload-arch=gfx90a -O3 shared.c
  $ ./shared
.. _libomptarget_device_allocator:

Device Allocation
^^^^^^^^^^^^^^^^^

The device runtime supports basic runtime allocation via the ``omp_alloc``
function. Currently, this allocates global memory for all default traits.
Access modifiers are not yet supported, and allocations requesting them return
a null pointer.

.. _libomptarget_device_debugging:

Debugging
^^^^^^^^^

The device runtime supports debugging in the runtime itself. This is configured
at compile time using the flag ``-fopenmp-target-debug=<N>`` rather than a
separate debugging build. If debugging is not enabled, the debugging paths are
considered trivially dead and removed by the compiler with zero overhead.
Debugging is enabled at runtime by running with the environment variable
``LIBOMPTARGET_DEVICE_RTL_DEBUG=<N>`` set. The number is a 32-bit bit field
used to selectively enable and disable different features. Currently, the
following debugging features are supported:

* ``0x01``: Enable debugging assertions in the device.
* ``0x4``: Enable diagnosing common problems during offloading.
* ``0x8``: Enable device malloc statistics (AMDGPU only).
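For example, enabling both the device assertions (``0x01``) and the
common-problem diagnostics (``0x4``) corresponds to the combined value ``0x5``
(decimal ``5``). A sketch of such a run, where the architecture and file names
are placeholders:

.. code-block:: console

   $ clang++ app.cpp -fopenmp --offload-arch=gfx90a -fopenmp-target-debug=5 -o app
   $ env LIBOMPTARGET_DEVICE_RTL_DEBUG=5 ./app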