Revision tags: llvmorg-21-init

# 8e702735 | 24-Jan-2025 | Jeremy Morse <jeremy.morse@sony.com>
[NFC][DebugInfo] Use iterator moveBefore at many call-sites (#123583)
As part of the "RemoveDIs" project, BasicBlock::iterator now carries a
debug-info bit that's needed when getFirstNonPHI and similar feed into
instruction insertion positions. Call-sites where that's necessary were
updated a year ago; to ensure some type safety, however, we'd like all
calls to moveBefore to use iterators.
This patch adds a (guaranteed dereferenceable) iterator-taking
moveBefore, and changes a bunch of call-sites where it's obviously safe
to change to use it by just calling getIterator() on an instruction
pointer. A follow-up patch will contain less-obviously-safe changes.
We'll eventually deprecate and remove the instruction-pointer
insertBefore, but not before adding concise documentation of what
considerations are needed (very few).
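A minimal sketch of the call-site pattern being converted (function and variable names here are illustrative, not taken from any specific call-site):

```cpp
#include "llvm/IR/Instruction.h"
using namespace llvm;

// Old and new forms of the same move; the iterator form preserves the
// debug-info bit carried by BasicBlock::iterator.
void moveExample(Instruction *I, Instruction *InsertPt) {
  // Before (pointer-taking form, eventually to be deprecated):
  //   I->moveBefore(InsertPt);
  // After (iterator-taking form):
  I->moveBefore(InsertPt->getIterator());
}
```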
Revision tags: llvmorg-19.1.7

# 9184c428 | 08-Jan-2025 | Vyacheslav Klochkov <vyacheslav.n.klochkov@intel.com>
[LoadStoreVectorizer] Postprocess and merge equivalence classes (#121861)
This patch introduces a new method:
void Vectorizer::mergeEquivalenceClasses(EquivalenceClassMap &EQClasses) const;
The method is called at the end of
Vectorizer::collectEquivalenceClasses() and is needed to merge
equivalence classes that differ only by their underlying objects (UO1
and UO2), where UO1 is the 1-level-indirection underlying base for UO2.
This situation arises due to the limited lookup depth used during the
search for underlying bases with llvm::getUnderlyingObject(ptr).
Using any fixed lookup depth can result in the creation of multiple
equivalence classes that differ only by 1-level-indirection bases.
The new approach merges equivalence classes if they have adjacent bases
(1-level indirection). If a series of equivalence classes forms a ladder
of 1-level indirections, they are all merged into a single equivalence
class. This provides more opportunities for the load-store vectorizer to
generate better vectors.
---------
Signed-off-by: Klochkov, Vyacheslav N <vyacheslav.n.klochkov@intel.com>
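The real pass keys equivalence classes on a tuple that also carries the address space and element size; below is a minimal sketch of just the merging idea, using an assumed, simplified map type:

```cpp
#include "llvm/ADT/MapVector.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

// Hypothetical, simplified class map: underlying-object key -> members.
using SimpleEQClasses =
    MapVector<const Value *, SmallVector<Instruction *, 8>>;

// If one more application of getUnderlyingObject() to class A's key
// lands on class B's key, the two classes differ only by a level of
// indirection, so fold A into B. Repeating this collapses a whole
// ladder of 1-level indirections into a single class.
static void mergeAdjacentClasses(SimpleEQClasses &Classes) {
  for (auto &[UO1, Members] : Classes) {
    const Value *UO2 = getUnderlyingObject(UO1);
    if (UO2 == UO1)
      continue; // UO1 is already a terminal base
    auto It = Classes.find(UO2);
    if (It == Classes.end())
      continue; // no adjacent class to merge into
    It->second.append(Members.begin(), Members.end());
    Members.clear(); // emptied class; later processing skips it
  }
}
```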
Revision tags: llvmorg-19.1.6

# 04313b86 | 12-Dec-2024 | Michal Paszkowski <michal@paszkowski.org>
Revert "[LoadStoreVectorizer] Postprocess and merge equivalence classes" (#119657)
Reverts llvm/llvm-project#114501, due to the following failure:
https://lab.llvm.org/buildbot/#/builders/55/builds
Revert "[LoadStoreVectorizer] Postprocess and merge equivalence classes" (#119657)
Reverts llvm/llvm-project#114501, due to the following failure:
https://lab.llvm.org/buildbot/#/builders/55/builds/4171
# fd2f8d48 | 12-Dec-2024 | Vyacheslav Klochkov <vyacheslav.n.klochkov@intel.com>
[LoadStoreVectorizer] Postprocess and merge equivalence classes (#114501)
This patch introduces a new method:
void Vectorizer::mergeEquivalenceClasses(EquivalenceClassMap &EQClasses) const
The method is called at the end of
Vectorizer::collectEquivalenceClasses() and is needed to merge
equivalence classes that differ only by their underlying objects (UO1
and UO2), where UO1 is the 1-level-indirection underlying base for UO2.
This situation arises due to the limited lookup depth used during the
search for underlying bases with llvm::getUnderlyingObject(ptr).
Using any fixed lookup depth can result in the creation of multiple
equivalence classes that differ only by 1-level-indirection bases.
The new approach merges equivalence classes if they have adjacent bases
(1-level indirection). If a series of equivalence classes forms a ladder
of 1-level indirections, they are all merged into a single equivalence
class. This provides more opportunities for the load-store vectorizer to
generate better vectors.
---------
Signed-off-by: Klochkov, Vyacheslav N <vyacheslav.n.klochkov@intel.com>
Revision tags: llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4

# 9671ed1a | 27-Aug-2024 | Danial Klimkin <dklimkin@google.com>
Revert "LSV: forbid load-cycles when vectorizing; fix bug (#104815)" (#106245)
This reverts commit c46b41aaa6eaa787f808738d14c61a2f8b6d839f.
Multiple tests time out, either due to performance hit
Revert "LSV: forbid load-cycles when vectorizing; fix bug (#104815)" (#106245)
This reverts commit c46b41aaa6eaa787f808738d14c61a2f8b6d839f.
Multiple tests time out, either due to a performance hit (see comment)
or a cycle.
# c46b41aa | 22-Aug-2024 | Ramkumar Ramachandra <ramkumar.ramachandra@codasip.com>
LSV: forbid load-cycles when vectorizing; fix bug (#104815)
Forbid load-load cycles which would crash LoadStoreVectorizer when
reordering instructions.
Fixes #37865.

Revision tags: llvmorg-19.1.0-rc3

# 92a8ec7a | 19-Aug-2024 | Ramkumar Ramachandra <ramkumar.ramachandra@codasip.com>
LSV: fix style after cursory reading (NFC) (#104793)

# 199d6f2c | 09-Aug-2024 | Ramkumar Ramachandra <ramkumar.ramachandra@codasip.com>
LSV: document hang reported in #37865 (#102479)
LoadStoreVectorizer hangs on certain examples, when its reorder function
goes into a cycle. Detect this cycle and explicitly forbid it, using an
assert, and document the resulting crash in a test-case under AArch64.
Revision tags: llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init

# 9df71d76 | 28-Jun-2024 | Nikita Popov <npopov@redhat.com>
[IR] Add getDataLayout() helpers to Function and GlobalValue (#96919)
Similar to https://github.com/llvm/llvm-project/pull/96902, this adds
`getDataLayout()` helpers to Function and GlobalValue, replacing the
current `getParent()->getDataLayout()` pattern.
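A minimal before/after sketch of the new accessor:

```cpp
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"
using namespace llvm;

void dataLayoutExample(const Function &F) {
  // Before: reach through the owning module.
  const DataLayout &ViaModule = F.getParent()->getDataLayout();
  // After this patch: ask the function directly.
  const DataLayout &Direct = F.getDataLayout();
  (void)ViaModule;
  (void)Direct;
}
```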
Revision tags: llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2

# fab2bb8b | 11-Mar-2024 | Justin Lebar <justin.lebar@gmail.com>
Add llvm::min/max_element and use it in llvm/ and mlir/ directories. (#84678)
For some reason this was missing from STLExtras.
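A small usage sketch of the new range-based helpers:

```cpp
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallVector.h"
#include <algorithm>

void minMaxExample() {
  llvm::SmallVector<int, 8> Vals = {3, 1, 4, 1, 5};
  // Before: iterator-pair form from <algorithm>.
  auto OldMin = std::min_element(Vals.begin(), Vals.end());
  // After: range form from STLExtras.
  auto Min = llvm::min_element(Vals);
  auto Max = llvm::max_element(Vals);
  (void)OldMin; (void)Min; (void)Max;
}
```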

Revision tags: llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init

# 7954c571 | 04-Jan-2024 | Jannik Silvanus <37809848+jasilvanus@users.noreply.github.com>
[IR] Fix GEP offset computations for vector GEPs (#75448)
Vectors are always bit-packed and don't respect the elements' alignment
requirements. This is different from arrays. This means offsets of
vector GEPs need to be computed differently than offsets of array GEPs.
This PR fixes many places that relied on the incorrect pattern
`DL.getTypeAllocSize(GTI.getIndexedType())`. We replace these with
`GTI.getSequentialElementStride(DL)`, a new helper function added in
this PR.
This changes behavior for GEPs into vectors whose element types have a
(bit) size different from their alloc size. This includes two cases:
* Types with a bit size that is not a multiple of a byte, e.g. i1.
GEPs into such vectors are questionable to begin with, as some elements
are not even addressable.
* Overaligned types, e.g. i16 with 32-bit alignment.
Existing tests are unaffected, but a miscompilation of a new test is fixed.
---------
Co-authored-by: Nikita Popov <github@npopov.com>
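A minimal sketch of the pattern change for one sequential (array or vector) GEP index; the surrounding accumulation loop and names are assumed:

```cpp
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/GetElementPtrTypeIterator.h"
using namespace llvm;

// One sequential-index step of an offset-accumulation loop.
void addIndexOffset(int64_t &Offset, gep_type_iterator GTI,
                    const DataLayout &DL) {
  int64_t IdxVal = cast<ConstantInt>(GTI.getOperand())->getSExtValue();
  // Before: wrong for bit-packed vectors, which ignore element alignment:
  //   Offset += IdxVal * DL.getTypeAllocSize(GTI.getIndexedType());
  // After: the new helper returns the true per-element stride for both
  // arrays (alloc size) and vectors (bit-packed size):
  int64_t Stride = static_cast<int64_t>(
      GTI.getSequentialElementStride(DL).getFixedValue());
  Offset += IdxVal * Stride;
}
```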
# a1642936 | 10-Dec-2023 | Kazu Hirata <kazu@google.com>
[Transforms] Remove unnecessary includes (NFC)

Revision tags: llvmorg-17.0.6, llvmorg-17.0.5

# 2400c54c | 06-Nov-2023 | Tom Stellard <tstellar@redhat.com>
[Vectorize] Remove Transforms/Vectorize.h (#71294)
The only thing in this file is a declaration for
createLoadStoreVectorizerPass(), and this function is already declared
in LoadStoreVectorizer.h.

Revision tags: llvmorg-17.0.4, llvmorg-17.0.3

# d4300154 | 16-Oct-2023 | Nikita Popov <npopov@redhat.com>
Revert "[ValueTracking] Remove by-ref computeKnownBits() overloads (NFC)"
This reverts commit b5743d4798b250506965e07ebab806a3c2d767cc.
This causes some minor compile-time impact. Revert for now, b
Revert "[ValueTracking] Remove by-ref computeKnownBits() overloads (NFC)"
This reverts commit b5743d4798b250506965e07ebab806a3c2d767cc.
This causes some minor compile-time impact. Revert for now; better to do the change more gradually.
# b5743d47 | 16-Oct-2023 | Nikita Popov <npopov@redhat.com>
[ValueTracking] Remove by-ref computeKnownBits() overloads (NFC)
Remove the old overloads that accept KnownBits by reference, in favor of those that return it by value.
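A before/after sketch of the overload change this patch proposes (later reverted, see above):

```cpp
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/Support/KnownBits.h"
using namespace llvm;

void knownBitsExample(const Value *V, const DataLayout &DL) {
  // Before (by-ref overload, removed by this patch):
  //   KnownBits Known(DL.getTypeSizeInBits(V->getType()));
  //   computeKnownBits(V, Known, DL);
  // After (by-value overload):
  KnownBits Known = computeKnownBits(V, DL);
  (void)Known;
}
```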

Revision tags: llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2

# 408cc944 | 31-Jul-2023 | Bjorn Pettersson <bjorn.a.pettersson@ericsson.com>
[LV][LSV][SLP] Drop some typed pointer bitcasts
Differential Revision: https://reviews.llvm.org/D156736

Revision tags: llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6

# d2223221 | 09-Jun-2023 | Bjorn Pettersson <bjorn.a.pettersson@ericsson.com>
[LoadStoreVectorizer] Optimize check for IsAllocaAccess. NFC
Swap the order of the address-space check and the pointer-cast stripping when analyzing whether a load/store is accessing an alloca, to make sure we do the cheaper check first.
This is done as a follow up to D152386.
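A minimal sketch of the reordered check; the helper name is assumed, not taken from the pass:

```cpp
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Address-space test first (a constant-time compare), pointer-cast
// stripping second (may walk a chain of casts and GEPs).
static bool isAllocaAccess(const Value *Ptr, const DataLayout &DL) {
  if (Ptr->getType()->getPointerAddressSpace() != DL.getAllocaAddrSpace())
    return false;
  return isa<AllocaInst>(Ptr->stripPointerCasts());
}
```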
# 263bc7f9 | 07-Jun-2023 | Bjorn Pettersson <bjorn.a.pettersson@ericsson.com>
[LoadStoreVectorizer] Only upgrade align for alloca
In commit 2be0abb7fe72ed453 (D149893) the load-store vectorizer was reimplemented. One thing that can happen with the new LSV is that it can increase the alignment of alloca and global objects. However, the code comments indicate that the intention was only to increase the alignment of allocas. Now we use stripPointerCasts to analyse whether the load/store really is accessing an alloca (same as getOrEnforceKnownAlignment does), and we only try to change the alignment if we find an alloca instruction. This way the code matches the code comments better, and we won't change the alignment of non-stack variables to use the "StackAdjustedAlignment".
Differential Revision: https://reviews.llvm.org/D152386
Revision tags: llvmorg-16.0.5

# e7acd8bd | 30-May-2023 | Krzysztof Drewniak <Krzysztof.Drewniak@amd.com>
[LoadStoreVectorizer] Fix index width != pointer width case
Fixes https://github.com/llvm/llvm-project/issues/62856
Reviewed By: jlebar
Differential Revision: https://reviews.llvm.org/D151754

# 420cf692 | 29-May-2023 | Justin Lebar <justin.lebar@gmail.com>
[LSV] Return same bitwidth from getConstantOffset.
Previously, getConstantOffset could return an APInt with a different bitwidth than the input pointers. For example, we might be loading an opaque 64-bit pointer, but stripAndAccumulateInBoundsConstantOffsets might give a 32-bit offset.
This was OK in most cases because in gatherChains, we cast the APInt back to the original ASPtrBits.
But it was not OK when considering selects. We'd call getConstantOffset twice and compare the resulting APInts, which might not have the same bitwidth.
This fixes that. Now getConstantOffset always returns offsets with the correct width, so we don't need the hack of casting it in gatherChains, and it works correctly when we're handling selects.
Differential Revision: https://reviews.llvm.org/D151640
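A sketch of the width discipline only; the pass's real getConstantOffset does considerably more work (walking GEPs and known bits), and the names below are assumed:

```cpp
#include "llvm/ADT/APInt.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Value.h"
using namespace llvm;

// Both offsets are accumulated at the pointer index width, so the final
// APInt comparison is always between equal-width values.
static bool haveSameConstOffset(const Value *PtrA, const Value *PtrB,
                                const DataLayout &DL) {
  unsigned IdxWidth = DL.getIndexTypeSizeInBits(PtrA->getType());
  APInt OffA(IdxWidth, 0), OffB(IdxWidth, 0);
  PtrA->stripAndAccumulateInBoundsConstantOffsets(DL, OffA);
  PtrB->stripAndAccumulateInBoundsConstantOffsets(DL, OffB);
  return OffA == OffB; // well-defined: both widths equal IdxWidth
}
```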
# f225471c | 28-May-2023 | Justin Lebar <justin.lebar@gmail.com>
[LSV] Fix the ContextInst for computeKnownBits.
Previously we used the later of GEPA or GEPB. This is hacky because really we should be using the later of the two load/store instructions being considered. But also it's flat-out incorrect, because GEPA and GEPB might be in different BBs, in which case we cannot ask which one comes last (assertion failure, https://reviews.llvm.org/D149893#4378332).
Fixed, now we use the correct context instruction.
Differential Revision: https://reviews.llvm.org/D151630
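A minimal sketch of the fix's selection rule, assuming (as the pass does after this change) that the two memory instructions share a basic block:

```cpp
#include "llvm/IR/Instruction.h"
using namespace llvm;

// comesBefore() requires both instructions to share a parent block,
// which holds for the two accesses being compared here.
static Instruction *laterOf(Instruction *MemA, Instruction *MemB) {
  return MemA->comesBefore(MemB) ? MemB : MemA;
}
```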
# b1b04ed9 | 27-May-2023 | Kazu Hirata <kazu@google.com>
[Vectorize] Fix warnings
This patch fixes:
llvm/lib/Transforms/Vectorize/LoadStoreVectorizer.cpp:140:20: error: unused function 'operator<<' [-Werror,-Wunused-function]
llvm/lib/Transforms/Vectorize/LoadStoreVectorizer.cpp:176:6: error: unused function 'dumpChain' [-Werror,-Wunused-function]
# 0508ac32 | 27-May-2023 | Kazu Hirata <kazu@google.com>
[Vectorize] Fix a warning
This patch fixes:
llvm/lib/Transforms/Vectorize/LoadStoreVectorizer.cpp:1429:23: error: comparison of integers of different signs: 'int' and 'const size_t' (aka 'const unsigned long') [-Werror,-Wsign-compare]
# 8d57b00f | 26-May-2023 | Justin Lebar <justin.lebar@gmail.com>
Fix -Wsign-compare from D149893.

Revision tags: llvmorg-16.0.4

# 2be0abb7 | 04-May-2023 | Justin Lebar <justin.lebar@gmail.com>
Rewrite load-store-vectorizer.
The motivation for this change is a workload generated by the XLA compiler targeting nvidia GPUs.
This kernel has a few hundred i8 loads and stores. Merging is critical for performance.
The current LSV doesn't merge these well because it only considers instructions within a block of 64 loads+stores. This limit is necessary to contain the O(n^2) behavior of the pass. I'm hesitant to increase the limit, because this pass is already one of the slowest parts of compiling an XLA program.
So we rewrite basically the whole thing to use a new algorithm. Before, we compared every load/store to every other to see if they're consecutive. The insight (from tra@) is that this is redundant. If we know the offset from PtrA to PtrB, then we don't need to compare PtrC to both of them in order to tell whether C may be adjacent to A or B.
So that's what we do. When scanning a basic block, we maintain a list of chains, where we know the offset from every element in the chain to the first element in the chain. Each instruction gets compared only to the leaders of all the chains.
In the worst case, this is still O(n^2), because all chains might be of length 1. To prevent compile time blowup, we only consider the 64 most recently used chains. Thus we do no more comparisons than before, but we have the potential to make much longer chains.
This rewrite affects many tests. The changes to tests fall into two categories.
1. The old code had what appears to be a bug when deciding whether a misaligned vectorized load is fast. Suppose TTI reports that load <4 x i32> align 4 has relative speed 1, and suppose that load i32 align 4 has relative speed 32.
The intent of the code seems to be that we prefer the scalar load, because it's faster. But the old code would choose the vectorized load. accessIsMisaligned would set RelativeSpeed to 0 for the scalar load (and not even call into TTI to get the relative speed), because the scalar load is aligned.
After this patch, we will prefer the scalar load if it's faster.
2. This patch changes the logic for how we vectorize. Usually this results in vectorizing more.
Explanation of changes to tests:
- AMDGPU/adjust-alloca-alignment.ll: #1
- AMDGPU/flat_atomic.ll: #2, we vectorize more.
- AMDGPU/int_sideeffect.ll: #2, there are two possible locations for the call to @foo, and the pass is brittle to this. Before, we'd vectorize in case 1 and not case 2. Now we vectorize in case 2 and not case 1. So we just move the call.
- AMDGPU/adjust-alloca-alignment.ll: #2, we vectorize more
- AMDGPU/insertion-point.ll: #2, we vectorize more
- AMDGPU/merge-stores-private.ll: #1 (undoes changes from git rev 86f9117d476, which appear to have hit the bug from #1)
- AMDGPU/multiple_tails.ll: #1
- AMDGPU/vect-ptr-ptr-size-mismatch.ll: Fix alignment (I think related to #1 above).
- AMDGPU CodeGen: I have difficulty commenting on these changes, but many of them look like #2, we vectorize more.
- NVPTX/4x2xhalf.ll: Fix alignment (I think related to #1 above).
- NVPTX/vectorize_i8.ll: We don't generate <3 x i8> vectors on NVPTX because they're not legal (and eventually get split)
- X86/correct-order.ll: #2, we vectorize more, probably because of changes to the chain-splitting logic.
- X86/subchain-interleaved.ll: #2, we vectorize more
- X86/vector-scalar.ll: #2, we can now vectorize scalar float + <1 x float>
- X86/vectorize-i8-nested-add-inseltpoison.ll: Deleted the nuw test because it was nonsensical. It was doing `add nuw %v0, -1`, but this is equivalent to `add nuw %v0, 0xffff'ffff`, which is equivalent to asserting that %v0 == 0.
- X86/vectorize-i8-nested-add.ll: Same as nested-add-inseltpoison.ll
Differential Revision: https://reviews.llvm.org/D149893
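A heavily simplified sketch of the chain-building idea described above; all names are hypothetical, and the real pass additionally separates instructions into equivalence classes (by address space, load vs. store, etc.) before chaining:

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"
#include <list>
#include <optional>
using namespace llvm;

// Hypothetical stand-in for the pass's offset query: the constant byte
// distance from A's pointer to B's, or nullopt if it can't be proven.
static std::optional<int64_t> constOffset(Instruction *A, Instruction *B) {
  (void)A; (void)B;
  return std::nullopt; // the real pass walks GEP chains / known bits
}

struct ChainElem {
  Instruction *Inst;
  int64_t OffsetFromLeader; // byte distance to the chain's first element
};
using Chain = SmallVector<ChainElem, 8>;

// Each memory instruction is compared only against chain *leaders*:
// knowing the offset to the leader fixes its position relative to every
// member, so per-pair comparisons are unnecessary.
static void buildChains(BasicBlock &BB) {
  std::list<Chain> Chains; // front = most recently used
  for (Instruction &I : BB) {
    if (!isa<LoadInst>(I) && !isa<StoreInst>(I))
      continue;
    bool Placed = false;
    unsigned Tried = 0;
    for (auto It = Chains.begin(); It != Chains.end() && Tried < 64;
         ++It, ++Tried) { // only probe the 64 most recently used chains
      if (std::optional<int64_t> Off = constOffset(It->front().Inst, &I)) {
        It->push_back({&I, *Off});
        Chains.splice(Chains.begin(), Chains, It); // mark chain as MRU
        Placed = true;
        break;
      }
    }
    if (!Placed)
      Chains.push_front(Chain{{&I, 0}}); // I leads a new chain
  }
  // Chains are subsequently split by contiguity/alignment and vectorized.
}
```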