Revision tags: llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1 |
|
#
6594f428 |
| 03-Mar-2024 |
Mehdi Amini <joker.eph@gmail.com> |
Split the llvm::ThreadPool into an abstract base class and an implementation (#82094)
This decouples the public API used to enqueue tasks and wait for
completion from the actual implementation, and opens up the possibility
for clients to set their own thread pool implementation for the pool.
https://discourse.llvm.org/t/construct-threadpool-from-vector-of-existing-threads/76883
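For illustration, a minimal sketch of what such a decoupling can look like from client code; the class and member names below (ThreadPoolBase, InlineExecutor, enqueue) are invented for this example and are not the interfaces added by #82094.

    #include <functional>

    // Illustrative shape only: an abstract enqueue/wait interface that callers
    // program against, plus one concrete implementation that could be swapped
    // for a custom pool (e.g. one built from pre-existing threads).
    class ThreadPoolBase {
    public:
      virtual ~ThreadPoolBase() = default;
      // Enqueue a task for asynchronous execution.
      virtual void enqueue(std::function<void()> Task) = 0;
      // Block until all enqueued tasks have finished.
      virtual void wait() = 0;
    };

    class InlineExecutor : public ThreadPoolBase {
    public:
      // Trivial implementation: run each task eagerly on the calling thread.
      void enqueue(std::function<void()> Task) override { Task(); }
      void wait() override {}
    };

    // Client code depends only on the abstract interface.
    void scheduleWork(ThreadPoolBase &Pool) {
      Pool.enqueue([] { /* ... do work ... */ });
      Pool.wait();
    }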
|
Revision tags: llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3 |
|
#
1215ff89 |
| 16-Feb-2023 |
Benoit Jacob <benoitjacob@google.com> |
Name threadpool threads.
This gives our worker threads some names that allow easily identifying them in tools such as profilers and debuggers.
Differential Revision: https://reviews.llvm.org/D144297
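For context, llvm/Support/Threading.h provides llvm::set_thread_name(); a worker thread can label itself roughly as below. The "llvm-worker-" naming scheme is only an example, not necessarily the names this patch picks.

    #include "llvm/ADT/Twine.h"
    #include "llvm/Support/Threading.h"

    // Inside a worker thread's entry point: give the thread a recognizable
    // name so profilers and debuggers can identify it.
    static void workerEntry(unsigned Index) {
      llvm::set_thread_name("llvm-worker-" + llvm::Twine(Index));
      // ... process tasks ...
    }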
|
Revision tags: llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, working, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0 |
|
#
6ed2cb4a |
| 29-Aug-2022 |
Kazu Hirata <kazu@google.com> |
Revert "[llvm] Use llvm::is_contained (NFC)"
This reverts commit ebf574f59a80ca00e234eee0b047e5f0df99587d.
This patch seems to cause build failures on Windows.
|
#
ebf574f5 |
| 29-Aug-2022 |
Kazu Hirata <kazu@google.com> |
[llvm] Use llvm::is_contained (NFC)
|
Revision tags: llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init, llvmorg-14.0.6, llvmorg-14.0.5, llvmorg-14.0.4 |
|
#
9c34a16c |
| 04-May-2022 |
Luboš Luňák <l.lunak@centrum.cz> |
[ThreadPool] delete debug global variable if not needed
https://lab.llvm.org/buildbot/#/builders/5/builds/23099
|
Revision tags: llvmorg-14.0.3, llvmorg-14.0.2, llvmorg-14.0.1 |
|
#
8ef5710e |
| 05-Apr-2022 |
Luboš Luňák <l.lunak@centrum.cz> |
[ThreadPool] add ability to group tasks into separate groups
This is needed for parallelizing the loading of module symbols in LLDB (D122975). Currently LLDB can parallelize indexing symbols when loading a module, but modules are loaded sequentially. If the LLDB index cache is enabled, this means that cache loading is not parallelized, even though it could be. However, doing that creates a threadpool-within-threadpool situation, so the number of threads would not be properly limited.
This change adds ThreadPoolTaskGroup as a simple type that can be used with ThreadPool calls to put tasks into groups that can be independently waited for (even recursively from within a task) but still run in the same thread pool.
Differential Revision: https://reviews.llvm.org/D123225
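A small usage sketch of the ThreadPoolTaskGroup API described above, assuming the in-tree spellings at the time of this change (later releases rename the concrete pool class):

    #include "llvm/Support/ThreadPool.h"

    void indexModules() {
      llvm::ThreadPool Pool;

      // Two independent groups of tasks sharing the same pool of threads.
      llvm::ThreadPoolTaskGroup IndexGroup(Pool);
      llvm::ThreadPoolTaskGroup CacheGroup(Pool);

      IndexGroup.async([] { /* index symbols for module A */ });
      IndexGroup.async([] { /* index symbols for module B */ });
      CacheGroup.async([] { /* load an index cache entry */ });

      // Wait only for the indexing tasks; per the description above this is
      // possible even recursively from within another task on the same pool.
      IndexGroup.wait();
      CacheGroup.wait();
    }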
|
Revision tags: llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3, llvmorg-14.0.0-rc2, llvmorg-14.0.0-rc1, llvmorg-15-init |
|
#
0984aa70 |
| 26-Jan-2022 |
serge-sans-paille <sguelton@redhat.com> |
Fix conditional include in ThreadPool
Should fix https://lab.llvm.org/buildbot#builders/37/builds/10259
|
#
66c602be |
| 26-Jan-2022 |
serge-sans-paille <sguelton@redhat.com> |
[NFC] Additional header dependency cleanup LLVMSupport
A few more forward declarations, a few fewer headers. The impact on the number of preprocessed lines for LLVMSupport is negligible (-3K lines), but it's always good to remove dependencies.
Related discourse thread: https://llvm.discourse.group/t/include-what-you-use-include-cleanup
|
#
1613f8b8 |
| 22-Jan-2022 |
Mitch Phillips <31459023+hctim@users.noreply.github.com> |
NFC (build fix): Add header for llvm::errs().
Looks like e9211e039377 unfortunately broke the sanitizer build bots, because those bots compile the symbolizer with -DLLVM_ENABLE_THREADS=Off. Likely, before the patch, this header was transitively included.
|
Revision tags: llvmorg-13.0.1, llvmorg-13.0.1-rc3 |
|
#
75e164f6 |
| 20-Jan-2022 |
serge-sans-paille <sguelton@redhat.com> |
[llvm] Cleanup header dependencies in ADT and Support
The cleanup was manual, but assisted by "include-what-you-use". It consists in
1. Removing unused forward declaration. No impact expected.
2. Removing unused headers in .cpp files. No impact expected.
3. Removing unused headers in .h files. This removes implicit dependencies and is generally considered a good thing, but this may break downstream builds. I've updated llvm, clang, lld, lldb and mlir deps, and included a list of the modification in the second part of the commit.
4. Replacing header inclusion by forward declaration. This has the same impact as 3.
Notable changes:
- llvm/Support/TargetParser.h no longer includes llvm/Support/AArch64TargetParser.h nor llvm/Support/ARMTargetParser.h
- llvm/Support/TypeSize.h no longer includes llvm/Support/WithColor.h
- llvm/Support/YAMLTraits.h no longer includes llvm/Support/Regex.h
- llvm/ADT/SmallVector.h no longer includes llvm/Support/MemAlloc.h nor llvm/Support/ErrorHandling.h
You may need to add some of these headers to your compilation units, if need be.
As a hint to the impact of the cleanup, running
clang++ -E -Iinclude -I../llvm/include ../llvm/lib/Support/*.cpp -std=c++14 -fno-rtti -fno-exceptions | wc -l
before: 8000919 lines
after:  7917500 lines
Reduced dependencies also helps incremental rebuilds and is more ccache friendly, something not shown by the above metric :-)
Discourse thread on the topic: https://llvm.discourse.group/t/include-what-you-use-include-cleanup/5831
|
#
e5c944b4 |
| 17-Jan-2022 |
Fangrui Song <i@maskray.me> |
[Support] Fix -Wreturn-type in LLVM_ENABLE_THREADS=OFF build after D116846
|
Revision tags: llvmorg-13.0.1-rc2 |
|
#
d9547f41 |
| 08-Jan-2022 |
John Demme <john.demme@microsoft.com> |
[MLIR] Fix compilation with LLVM_ENABLE_THREADS=OFF
Currently, compiling with LLVM_ENABLE_THREADS=OFF fails due to this missing symbol. Add it, but assert, as calling code is (and should be) checking that threading is enabled.
Differential Revision: https://reviews.llvm.org/D116846
|
#
e8469718 |
| 03-Dec-2021 |
Mehdi Amini <joker.eph@gmail.com> |
Split the locking of the queue and the threads vector in the ThreadPool implementation
This allows releasing the QueueLock early and creating threads independently of the queue processing.
Differential Revision: https://reviews.llvm.org/D115078
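Roughly, the change means the task queue and the thread container are guarded by separate mutexes; a sketch of that idea with illustrative member names (not the actual ThreadPool fields):

    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <mutex>
    #include <thread>
    #include <vector>

    // Illustrative only: guarding the task queue and the thread container with
    // separate mutexes lets a producer release QueueLock as soon as the task is
    // enqueued, while thread creation proceeds under ThreadsLock independently.
    struct PoolState {
      std::mutex QueueLock;                 // protects Tasks / QueueCondition
      std::condition_variable QueueCondition;
      std::deque<std::function<void()>> Tasks;

      std::mutex ThreadsLock;               // protects Threads only
      std::vector<std::thread> Threads;

      void enqueue(std::function<void()> Task) {
        {
          std::lock_guard<std::mutex> Guard(QueueLock);
          Tasks.push_back(std::move(Task));
        } // QueueLock released here, before any thread management.
        QueueCondition.notify_one();

        std::lock_guard<std::mutex> Guard(ThreadsLock);
        // ... decide whether to spawn an additional worker ...
      }
    };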
|
#
b28f317c |
| 04-Dec-2021 |
Mehdi Amini <joker.eph@gmail.com> |
Fix build for ThreadPool when using -DLLVM_ENABLE_THREADS=OFF
Differential Revision: https://reviews.llvm.org/D115019
|
#
728b982b |
| 03-Dec-2021 |
Benoit Jacob <benoitjacob@google.com> |
ThreadPool: grow the pool only as needed
On my 96-core cloudtop 'machine', it seems unnecessary to always start 96 threads upfront... particularly as the ThreadPool is created even with -mlir-disable-threading. Things like the resulting spew in GDB and the obfuscated output of `(gdb) info threads` are my motivation here, but it probably also doesn't hurt for at least some efficiency metrics to avoid creating many threads upfront.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D115019
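The growth rule being described can be sketched as follows; the names and the exact policy are illustrative, not the in-tree logic:

    #include <cstddef>

    // Illustrative policy: only spawn a new worker when a task arrives, every
    // existing worker is busy, and the configured maximum is not yet reached.
    struct GrowthPolicy {
      std::size_t MaxThreadCount;   // e.g. derived from hardware_concurrency()
      std::size_t CurrentThreads = 0;
      std::size_t BusyThreads = 0;

      bool shouldSpawnWorkerOnEnqueue() const {
        return BusyThreads == CurrentThreads && CurrentThreads < MaxThreadCount;
      }
    };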
|
#
8cb1af73 |
| 25-Nov-2021 |
Florian Hahn <flo@fhahn.com> |
Recommit [ThreadPool] Support returning futures with results.
This reverts commit 71a7c55f0f021b04b9a7303d0cd391b9161cf303.
The revert broke building llvm-reduce and it is not clear it fixes an issue with LLVM_ENABLE_THREADS=OFF.
See discussion in https://reviews.llvm.org/D114183 for more details.
|
#
71a7c55f |
| 25-Nov-2021 |
Daniel McIntosh <Daniel.McIntosh@ibm.com> |
Revert "[ThreadPool] Support returning futures with results."
This reverts commit 6149e57dc1313d32c85524f8009a1249e0b8f4d1.
The offending commit broke building with LLVM_ENABLE_THREADS=OFF.
|
Revision tags: llvmorg-13.0.1-rc1 |
|
#
6149e57d |
| 22-Nov-2021 |
Florian Hahn <flo@fhahn.com> |
[ThreadPool] Support returning futures with results.
This patch adjusts ThreadPool::async to return futures that wrap the result type of the passed in callable.
To do so, ThreadPool::asyncImpl first creates a shared promise. The result of the promise is set in a new callable that first executes the task. The callable is added to the task queue.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D114183
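A minimal example of the resulting API: the returned future's value type matches the callable's return type.

    #include "llvm/Support/ThreadPool.h"

    #include <string>

    int main() {
      llvm::ThreadPool Pool;

      // async() returns a shared future wrapping the callable's result type.
      auto Sum = Pool.async([] { return 2 + 3; });
      auto Greeting = Pool.async([] { return std::string("hello"); });

      int S = Sum.get();              // 5
      std::string G = Greeting.get(); // "hello"

      Pool.wait();
      return (S == 5 && G == "hello") ? 0 : 1;
    }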
|
Revision tags: llvmorg-13.0.0, llvmorg-13.0.0-rc4, llvmorg-13.0.0-rc3, llvmorg-13.0.0-rc2, llvmorg-13.0.0-rc1, llvmorg-14-init, llvmorg-12.0.1, llvmorg-12.0.1-rc4, llvmorg-12.0.1-rc3, llvmorg-12.0.1-rc2 |
|
#
48c68a63 |
| 26-May-2021 |
Tim Northover <t.p.northover@gmail.com> |
Recommit: Support: add llvm::thread class that supports specifying stack size.
This adds a new llvm::thread class with the same interface as std::thread except there is an extra constructor that allows us to set the new thread's stack size. On Darwin even the default size is boosted to 8MB to match the main thread.
It also switches all users of the older C-style `llvm_execute_on_thread` API family over to `llvm::thread` followed by either a `detach` or `join` call and removes the old API.
Moved definition of DefaultStackSize into the .cpp file to hopefully fix the build on some (GCC-6?) machines.
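A usage sketch of the extra constructor; the stack-size argument is an optional byte count, and the exact optional type has varied across LLVM versions:

    #include "llvm/Support/thread.h"

    static void deepRecursionWork() { /* needs a large stack */ }

    void spawnWorker() {
      // First argument: requested stack size in bytes (8 MiB here, matching
      // the Darwin default mentioned above); the remaining arguments mirror
      // std::thread's constructor.
      llvm::thread Worker(/*StackSizeInBytes=*/8 * 1024 * 1024,
                          deepRecursionWork);
      Worker.join();
    }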
|
#
2bf5e8d9 |
| 08-Jul-2021 |
Tim Northover <t.p.northover@gmail.com> |
Revert "Support: add llvm::thread class that supports specifying stack size."
It's causing build failures because DefaultStackSize isn't defined everywhere it should be and I need time to investigate.
|
#
727e1c9b |
| 26-May-2021 |
Tim Northover <t.p.northover@gmail.com> |
Support: add llvm::thread class that supports specifying stack size.
This adds a new llvm::thread class with the same interface as std::thread except there is an extra constructor that allows us to set the new thread's stack size. On Darwin even the default size is boosted to 8MB to match the main thread.
It also switches all users of the older C-style `llvm_execute_on_thread` API family over to `llvm::thread` followed by either a `detach` or `join` call and removes the old API.
|
#
6569cf2a |
| 23-Jun-2021 |
River Riddle <riddleriver@gmail.com> |
[mlir] Add a ThreadPool to MLIRContext and refactor MLIR threading usage
This revision refactors the usage of multithreaded utilities in MLIR to use a common thread pool within the MLIR context, in addition to a new utility that makes writing multi-threaded code in MLIR less error prone. Using a unified thread pool brings about several advantages:
* Better thread usage and more control
We currently use the static llvm threading utilities, which do not allow multiple levels of asynchronous scheduling (even if there are open threads). This is due to how the current TaskGroup structure works, which only allows one truly multithreaded instance at a time. By having our own ThreadPool we gain more control and flexibility over our job/thread scheduling, and in a followup can enable threading more parts of the compiler.
* The static nature of TaskGroup causes issues in certain configurations
Due to the static nature of TaskGroup, there have been quite a few problems related to destruction that have caused several downstream projects to disable threading. See D104207 for discussion on some related fallout. By having a ThreadPool scoped to the context, we don't have to worry about destruction and can ensure that any additional MLIR thread usage ends when the context is destroyed.
Differential Revision: https://reviews.llvm.org/D104516
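For context, a sketch of how MLIR code typically drives this shared pool through the helpers in mlir/IR/Threading.h; the exact set of helpers and the context accessors have evolved since this change, so treat the call below as indicative:

    #include "mlir/IR/MLIRContext.h"
    #include "mlir/IR/Threading.h"

    #include <vector>

    mlir::LogicalResult processAll(mlir::MLIRContext *context,
                                   std::vector<int> &items) {
      // Runs the callback over the range using the context's shared thread
      // pool (or serially if threading is disabled) and folds the results.
      return mlir::failableParallelForEach(context, items, [](int &item) {
        item *= 2; // ... per-item work ...
        return mlir::success();
      });
    }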
|
Revision tags: llvmorg-12.0.1-rc1, llvmorg-12.0.0, llvmorg-12.0.0-rc5, llvmorg-12.0.0-rc4, llvmorg-12.0.0-rc3, llvmorg-12.0.0-rc2, llvmorg-11.1.0, llvmorg-11.1.0-rc3, llvmorg-12.0.0-rc1, llvmorg-13-init, llvmorg-11.1.0-rc2, llvmorg-11.1.0-rc1, llvmorg-11.0.1, llvmorg-11.0.1-rc2, llvmorg-11.0.1-rc1, llvmorg-11.0.0, llvmorg-11.0.0-rc6, llvmorg-11.0.0-rc5, llvmorg-11.0.0-rc4, llvmorg-11.0.0-rc3, llvmorg-11.0.0-rc2, llvmorg-11.0.0-rc1, llvmorg-12-init, llvmorg-10.0.1, llvmorg-10.0.1-rc4, llvmorg-10.0.1-rc3, llvmorg-10.0.1-rc2, llvmorg-10.0.1-rc1 |
|
#
6f230491 |
| 25-Apr-2020 |
Fangrui Song <maskray@google.com> |
[Support] Simplify and optimize ThreadPool
* Merge QueueLock and CompletionLock.
* Avoid spurious CompletionCondition.notify_all() when ActiveThreads is greater than 0.
* Use default member initializers.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D78856
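The second bullet amounts to notifying the completion condition only when the pool actually goes idle; an illustrative sketch, not the actual member layout:

    #include <condition_variable>
    #include <mutex>

    // Illustrative: with a single mutex guarding both the queue and the
    // completion state, the completion condition only needs to be notified
    // when the last active task finishes and no work remains queued.
    struct CompletionState {
      std::mutex Lock;
      std::condition_variable CompletionCondition;
      unsigned ActiveThreads = 0;
      unsigned QueuedTasks = 0;

      void onTaskDone() {
        bool Notify;
        {
          std::lock_guard<std::mutex> Guard(Lock);
          --ActiveThreads;
          // Skip notify_all() while ActiveThreads > 0: waiters cannot make
          // progress yet anyway.
          Notify = ActiveThreads == 0 && QueuedTasks == 0;
        }
        if (Notify)
          CompletionCondition.notify_all();
      }
    };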
|
Revision tags: llvmorg-10.0.0, llvmorg-10.0.0-rc6, llvmorg-10.0.0-rc5, llvmorg-10.0.0-rc4, llvmorg-10.0.0-rc3 |
|
#
8404aeb5 |
| 14-Feb-2020 |
Alexandre Ganea <alexandre.ganea@ubisoft.com> |
[Support] On Windows, ensure hardware_concurrency() extends to all CPU sockets and all NUMA groups
The goal of this patch is to maximize CPU utilization on multi-socket or high core count systems, so that parallel computations such as LLD/ThinLTO can use all hardware threads in the system. Before this patch, on Windows, at most 64 hardware threads could be used, in some cases dispatched only on one CPU socket.
== Background ==
Windows doesn't have a flat cpu_set_t like Linux. Instead, it projects hardware CPUs (or NUMA nodes) to applications through a concept of "processor groups". A "processor" is the smallest unit of execution on a CPU, that is, a hyper-thread if SMT is active; a core otherwise. There's a limit of 32 processors on older 32-bit versions of Windows, which later was raised to 64 processors with 64-bit versions of Windows. This limit comes from the affinity mask, which historically is represented by the sizeof(void*). Consequently, the concept of "processor groups" was introduced for dealing with systems with more than 64 hyper-threads.
By default, the Windows OS assigns only one "processor group" to each starting application, in a round-robin manner. If the application wants to use more processors, it needs to programmatically enable it, by assigning threads to other "processor groups". This also means that affinity cannot cross "processor group" boundaries; one can only specify a "preferred" group on start-up, but the application is free to allocate more groups if it wants to.
This creates a peculiar situation, where newer CPUs like the AMD EPYC 7702P (64-cores, 128-hyperthreads) are projected by the OS as two (2) "processor groups". This means that by default, an application can only use half of the cores. This situation could only get worse in the years to come, as dies with more cores will appear on the market.
== The problem ==
The heavyweight_hardware_concurrency() API was introduced so that only *one hardware thread per core* was used. Once that API returns, that original intention is lost, only the number of threads is retained. Consider a situation, on Windows, where the system has 2 CPU sockets, 18 cores each, each core having 2 hyper-threads, for a total of 72 hyper-threads. Both heavyweight_hardware_concurrency() and hardware_concurrency() currently return 36, because on Windows they are simply wrappers over std::thread::hardware_concurrency() -- which can only return processors from the current "processor group".
== The changes in this patch ==
To solve this situation, we capture (and retain) the initial intention until the point of usage, through a new ThreadPoolStrategy class. The number of threads to use is deferred as late as possible, until the moment where the std::threads are created (ThreadPool in the case of ThinLTO).
When using hardware_concurrency(), setting ThreadCount to 0 now means to use all the possible hardware CPU (SMT) threads. Providing a ThreadCount above the maximum number of threads will have no effect; the maximum will be used instead. The heavyweight_hardware_concurrency() is similar to hardware_concurrency(), except that only one thread per hardware *core* will be used.
When LLVM_ENABLE_THREADS is OFF, the threading APIs will always return 1, to ensure any caller loops will be exercised at least once.
Differential Revision: https://reviews.llvm.org/D71775
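A brief usage sketch of the two entry points named above; the ThreadPool constructor consumes the ThreadPoolStrategy they return.

    #include "llvm/Support/ThreadPool.h"
    #include "llvm/Support/Threading.h"

    void runParallelJobs() {
      // One thread per hardware core (SMT siblings excluded), spanning all
      // processor groups / CPU sockets on Windows.
      llvm::ThreadPool HeavyPool(llvm::heavyweight_hardware_concurrency());

      // All hardware (SMT) threads; a ThreadCount of 0 in the strategy means
      // "use everything the machine offers".
      llvm::ThreadPool FullPool(llvm::hardware_concurrency());

      FullPool.async([] { /* ... */ });
      FullPool.wait();
    }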
|
Revision tags: llvmorg-10.0.0-rc2, llvmorg-10.0.0-rc1, llvmorg-11-init, llvmorg-9.0.1, llvmorg-9.0.1-rc3, llvmorg-9.0.1-rc2, llvmorg-9.0.1-rc1, llvmorg-9.0.0, llvmorg-9.0.0-rc6, llvmorg-9.0.0-rc5, llvmorg-9.0.0-rc4, llvmorg-9.0.0-rc3, llvmorg-9.0.0-rc2, llvmorg-9.0.0-rc1, llvmorg-10-init, llvmorg-8.0.1, llvmorg-8.0.1-rc4, llvmorg-8.0.1-rc3, llvmorg-8.0.1-rc2, llvmorg-8.0.1-rc1, llvmorg-8.0.0, llvmorg-8.0.0-rc5, llvmorg-8.0.0-rc4, llvmorg-8.0.0-rc3, llvmorg-7.1.0, llvmorg-7.1.0-rc1, llvmorg-8.0.0-rc2, llvmorg-8.0.0-rc1 |
|
#
2946cd70 |
| 19-Jan-2019 |
Chandler Carruth <chandlerc@gmail.com> |
Update the file headers across all of the LLVM projects in the monorepo to reflect the new license.
We understand that people may be surprised that we're moving the header entirely to discuss the new license. We checked this carefully with the Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM project under our new license, so you will see that the license headers include that license only. Some of our contributors have contributed code under our old license, and accordingly, we have retained a copy of our old license notice in the top-level files in each project and repository.
llvm-svn: 351636
|