Revision tags: llvmorg-21-init, llvmorg-19.1.7 |
|
#
17c8c1c5 |
| 07-Jan-2025 |
bcahoon <59846893+bcahoon@users.noreply.github.com> |
[AMDGPU] Do not fold into v_accvgpr_mov/write/read (#120475)
In SIFoldOperands, leave copies for moving between agpr and vgpr
registers. The register coalescer is able to handle the copies
more efficiently than v_accvgpr_mov, v_accvgpr_write, and
v_accvgpr_read. Otherwise, the compiler generates unnecessary
instructions such as v_accvgpr_mov a0, a0.
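The shape of the check is roughly the following C++ sketch (a hedged illustration, not the actual patch; `leaveCopyForCoalescer` is a hypothetical helper, while `SIRegisterInfo::hasAGPRs` is the existing query for AGPR-containing register classes):
```cpp
#include "SIRegisterInfo.h" // AMDGPU target-internal header

// Sketch: a COPY that moves data into or out of the AGPR bank is left
// alone so the register coalescer can eliminate it, instead of being
// folded into V_ACCVGPR_MOV/WRITE/READ.
static bool leaveCopyForCoalescer(const llvm::SIRegisterInfo &TRI,
                                  const llvm::TargetRegisterClass *DstRC,
                                  const llvm::TargetRegisterClass *SrcRC) {
  return TRI.hasAGPRs(DstRC) || TRI.hasAGPRs(SrcRC);
}
```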
|
#
ce831a23 |
| 06-Jan-2025 |
Brox Chen <guochen2@amd.com> |
[AMDGPU][True16][MC] true16 for v_fma_f16 (#119477)
Support true16 format for v_fma_f16 in MC.
Since we are replacing v_fma_f16 with v_fma_f16_t16/v_fma_f16_fake16
post-GFX11, we have to update the CodeGen pattern for v_fma_f16_fake16 to
keep the CodeGen tests passing. No pattern is modified or created; we just
replace v_fma_f16 with the fake16 format.
|
Revision tags: llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4 |
|
#
5911fbb3 |
| 13-Nov-2024 |
Matt Arsenault <Matthew.Arsenault@amd.com> |
AMDGPU: Do not fold copy to physreg from operation on frame index (#115977)
|
#
4fb43c47 |
| 08-Nov-2024 |
Matt Arsenault <Matthew.Arsenault@amd.com> |
AMDGPU: Fold more scalar operations on frame index to VALU (#115059)
Further extend workaround for the lack of proper regbankselect for frame indexes.
|
#
aa794128 |
| 06-Nov-2024 |
Matt Arsenault <Matthew.Arsenault@amd.com> |
AMDGPU: Fold copy of scalar add of frame index (#115058)
This is a pre-optimization to avoid a regression in a future commit.
Currently we almost always emit the frame index with a v_mov_b32 and use
vector adds for the pointer operations. We need to consider the users of
the frame index (or rather, the transitive users of derived pointer
operations) to know whether the value will be used in a vector or scalar
context. This saves an sgpr->vgpr copy.
This optimization could be more general for any opcode that's trivially convertible from a scalar to vector form (although this is a workaround for a proper regbankselect).
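A hedged sketch of the user scan described above (`usedOnlyInScalarContext` is a hypothetical helper and a crude stand-in for the transitive walk the commit describes; `SIInstrInfo::isVALU` is the real query):
```cpp
#include "SIInstrInfo.h" // AMDGPU target-internal header
#include "llvm/CodeGen/MachineRegisterInfo.h"

// Sketch: the scalar form of the frame-index add only pays off if no
// user ends up on the VALU; otherwise a vgpr copy is needed anyway.
// The commit walks transitive users of derived pointers; this checks
// only direct users, as a simplification.
static bool usedOnlyInScalarContext(const llvm::MachineRegisterInfo &MRI,
                                    llvm::Register Reg) {
  for (const llvm::MachineInstr &UseMI : MRI.use_nodbg_instructions(Reg))
    if (llvm::SIInstrInfo::isVALU(UseMI))
      return false;
  return true;
}
```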
|
#
e8644e3b |
| 05-Nov-2024 |
Brox Chen <guochen2@amd.com> |
[AMDGPU][True16][MC] VOP2 update instructions with fake16 format (#114436)
Some old "t16" VOP2 instructions are actually in fake16 format. Correct
and update test file
|
#
a156362e |
| 29-Oct-2024 |
Jay Foad <jay.foad@amd.com> |
[AMDGPU] Fix machine verification failure after SIFoldOperandsImpl::tryFoldOMod (#113544)
Fixes #54201
|
Revision tags: llvmorg-19.1.3 |
|
#
ef91cd3f |
| 19-Oct-2024 |
Matt Arsenault <Matthew.Arsenault@amd.com> |
AMDGPU: Handle folding frame indexes into add with immediate (#110738)
|
Revision tags: llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4 |
|
#
2adc94cd |
| 29-Aug-2024 |
Akshat Oke <76596238+Akshat-Oke@users.noreply.github.com> |
AMDGPU/NewPM: Port SIFoldOperands to new pass manager (#105801)
|
Revision tags: llvmorg-19.1.0-rc3 |
|
#
ae059a1f |
| 08-Aug-2024 |
Brox Chen <broxigarchen@outlook.com> |
[AMDGPU][True16][CodeGen] support v_mov_b16 and v_swap_b16 in true16 format (#102198)
Support v_swap_b16 in true16 format.
Update the TableGen pattern and folding for v_mov_b16.
---------
Co-authored-by: guochen2 <guochen2@amd.com>
|
Revision tags: llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1 |
|
#
817cd726 |
| 25-Jul-2024 |
Mirko Brkušanin <Mirko.Brkusanin@amd.com> |
[AMDGPU] Fix folding clamp into pseudo scalar instructions (#100568)
Clamp is canonically a v_max* instruction with a VGPR dst. Folding clamp
into a pseudo scalar instruction can cause issues due to a change in
regbank. We fix this with a copy.
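In rough C++ terms, the guard looks like the following hedged sketch (`clampFoldPreservesBank` is hypothetical; `SIRegisterInfo::hasVGPRs` is the existing bank query, and the copy insertion itself is elided):
```cpp
#include "SIRegisterInfo.h" // AMDGPU target-internal header
#include "llvm/CodeGen/MachineRegisterInfo.h"

// Sketch: clamp is a VALU max writing a VGPR. Folding it into a def
// whose destination lives in an SGPR class would silently change the
// register bank; in that case keep (or insert) a VGPR copy instead.
static bool clampFoldPreservesBank(const llvm::MachineRegisterInfo &MRI,
                                   const llvm::SIRegisterInfo &TRI,
                                   const llvm::MachineInstr &Def) {
  llvm::Register DstReg = Def.getOperand(0).getReg();
  // Assumes a virtual register; physical regs would need a TRI lookup.
  return DstReg.isVirtual() && TRI.hasVGPRs(MRI.getRegClass(DstReg));
}
```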
|
Revision tags: llvmorg-20-init |
|
#
25f4bd88 |
| 19-Jul-2024 |
Changpeng Fang <changpeng.fang@amd.com> |
AMDGPU: Clear kill flags after FoldZeroHighBits (#99582)
After folding, all uses of the result register are going to be replaced
by the operand register. The kill flags on the uses of the result and
operand registers are no longer valid after the replacement, and need to
be cleared.
The one exception: if the kill flag is set on the operand register, we
know the last use of the result register becomes the new last use of the
operand register, so it is safe to keep the kill flags.
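A hedged C++ sketch of the replace-then-clear pattern (the helper name and the `OperandWasKilled` flag are illustrative; `replaceRegWith` and `clearKillFlags` are the real MachineRegisterInfo calls):
```cpp
#include "llvm/CodeGen/MachineRegisterInfo.h"

// Sketch: after FoldZeroHighBits, every use of ResultReg becomes a use
// of OperandReg, so kill flags computed for the old chains are stale.
static void replaceAndFixKills(llvm::MachineRegisterInfo &MRI,
                               llvm::Register ResultReg,
                               llvm::Register OperandReg,
                               bool OperandWasKilled) {
  MRI.replaceRegWith(ResultReg, OperandReg);
  // Per the exception above: if the operand was killed, the result's
  // last use becomes the operand's new last use and the flags stay
  // valid; otherwise conservatively drop them all.
  if (!OperandWasKilled)
    MRI.clearKillFlags(OperandReg);
}
```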
|
#
c7309dad |
| 17-Jul-2024 |
Jay Foad <jay.foad@amd.com> |
[AMDGPU] Use range-based for loops. NFC. (#99047)
|
#
d8b63b68 |
| 18-Jun-2024 |
Matt Arsenault <Matthew.Arsenault@amd.com> |
AMDGPU: Don't fold clamp/omod modifiers without nofpexcept (#95950)
|
Revision tags: llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6 |
|
#
0b50d095 |
| 07-May-2024 |
Shilei Tian <i@tianshilei.me> |
[AMDGPU] Don't optimize agpr phis if the operand doesn't have subreg use (#91267)
If the operand doesn't have any subreg use, the optimization could
potentially generate `V_ACCVGPR_READ_B32_e64` with the wrong register
class. The following example demonstrates the issue.
Input MIR:
```
bb.0:
%0:sgpr_32 = S_MOV_B32 0
%1:sgpr_128 = REG_SEQUENCE %0:sgpr_32, %subreg.sub0, %0:sgpr_32, %subreg.sub1, %0:sgpr_32, %subreg.sub2, %0:sgpr_32, %subreg.sub3
%2:vreg_128 = COPY %1:sgpr_128
%3:areg_128 = COPY %2:vreg_128, implicit $exec
bb.1:
%4:areg_128 = PHI %3:areg_128, %bb.0, %6:areg_128, %bb.1
%5:areg_128 = PHI %3:areg_128, %bb.0, %7:areg_128, %bb.1
...
```
Output of current implementation:
```
bb.0:
%0:agpr_32 = V_ACCVGPR_WRITE_B32_e64 0, implicit $exec
%1:agpr_32 = V_ACCVGPR_WRITE_B32_e64 0, implicit $exec
%2:agpr_32 = V_ACCVGPR_WRITE_B32_e64 0, implicit $exec
%3:agpr_32 = V_ACCVGPR_WRITE_B32_e64 0, implicit $exec
%4:areg_128 = REG_SEQUENCE %0:agpr_32, %subreg.sub0, %1:agpr_32, %subreg.sub1, %2:agpr_32, %subreg.sub2, %3:agpr_32, %subreg.sub3
%5:vreg_128 = V_ACCVGPR_READ_B32_e64 %4:areg_128, implicit $exec
%6:areg_128 = COPY %5:vreg_128
bb.1:
%7:areg_128 = PHI %6:areg_128, %bb.0, %9:areg_128, %bb.1
%8:areg_128 = PHI %6:areg_128, %bb.0, %10:areg_128, %bb.1
...
```
The problem is the generated `V_ACCVGPR_READ_B32_e64` instruction.
Apparently the operand `%4:areg_128` is not valid for this.
In this patch, we don't count non-subreg uses because
`V_ACCVGPR_READ_B32_e64` can't handle non-32-bit operands.
Fixes: SWDEV-459556
|
Revision tags: llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1 |
|
#
24846809 |
| 07-Mar-2024 |
Martin Wehking <martin.wehking@codeplay.com> |
Add non-null check before accessing pointer (#83459)
Add a check that RC is not null to ensure that a subsequent access is
safe.
A static analyzer flagged this issue since hasVectorRegisters
potentially dereferences RC.
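The guard is tiny; a hedged sketch (the surrounding lookup is simplified, and `RC` may legitimately be null depending on how it was obtained):
```cpp
#include "SIRegisterInfo.h" // AMDGPU target-internal header

// Sketch: hasVectorRegisters() dereferences its argument, so the
// register-class lookup result must be checked before the call.
static bool isVectorClass(const llvm::SIRegisterInfo &TRI,
                          const llvm::TargetRegisterClass *RC) {
  return RC && TRI.hasVectorRegisters(RC);
}
```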
|
Revision tags: llvmorg-18.1.0, llvmorg-18.1.0-rc4 |
|
#
04db60d1 |
| 27-Feb-2024 |
choikwa <5455710+choikwa@users.noreply.github.com> |
[AMDGPU] Prevent hang in SIFoldOperands by caching uses (#82099)
foldOperands() for REG_SEQUENCE has recursion that can trigger an infinite loop
as the method can modify the operand order, which messes up the range-based
for loop. This patch fixes the issue by caching the uses for processing
beforehand, and then iterating over the cache rather than using the
instruction iterator.
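A hedged sketch of the snapshot pattern (names are illustrative; `use_nodbg_operands` is the real MachineRegisterInfo iterator, and the folding call itself is elided):
```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"

// Sketch: folding can reorder a REG_SEQUENCE's operands, which would
// invalidate a live range-based iteration over the use list. Snapshot
// the uses first, then iterate the snapshot.
static void foldUsesSafely(llvm::MachineRegisterInfo &MRI,
                           llvm::Register Reg) {
  llvm::SmallVector<llvm::MachineOperand *, 8> UsesToProcess;
  for (llvm::MachineOperand &Use : MRI.use_nodbg_operands(Reg))
    UsesToProcess.push_back(&Use);
  for (llvm::MachineOperand *Use : UsesToProcess) {
    (void)Use; // the real pass calls its foldOperand() helper here
  }
}
```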
|
Revision tags: llvmorg-18.1.0-rc3 |
|
#
39cab1a0 |
| 20-Feb-2024 |
Stanislav Mekhanoshin <rampitec@users.noreply.github.com> |
[AMDGPU] Add v2bf16 for opsel immediate folding (#82435)
This was previously enabled since v2bf16 was represented by v2f16. As of
now it is NFC since we only have dot instructions which could use it,
but currently folding is guarded by hasDOTOpSelHazard().
|
#
7d19dc50 |
| 08-Feb-2024 |
Ivan Kosarev <ivan.kosarev@amd.com> |
[AMDGPU][True16] Support VOP3 source DPP operands. (#80892)
|
Revision tags: llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1 |
|
#
7fdf608c |
| 24-Jan-2024 |
Mirko Brkušanin <Mirko.Brkusanin@amd.com> |
[AMDGPU] Add GFX12 WMMA and SWMMAC instructions (#77795)
Co-authored-by: Petar Avramovic <Petar.Avramovic@amd.com>
Co-authored-by: Piotr Sobczak <piotr.sobczak@amd.com>
|
Revision tags: llvmorg-19-init |
|
#
0a3a0ea5 |
| 18-Jan-2024 |
Jay Foad <jay.foad@amd.com> |
[AMDGPU] Update uses of new VOP2 pseudos for GFX12 (#78155)
New pseudos were added for instructions that were natively VOP3 on
GFX11: V_ADD_F64_pseudo, V_MUL_F64_pseudo, V_MIN_NUM_F64, V_MAX_NUM_F64,
V_LSHLREV_B64_pseudo
---------
Co-authored-by: Mirko Brkusanin <Mirko.Brkusanin@amd.com>
|
#
49b49204 |
| 03-Jan-2024 |
Nicolai Hähnle <nicolai.haehnle@amd.com> |
AMDGPU: Fix packed 16-bit inline constants (#76522)
Consistently treat packed 16-bit operands as 32-bit values, because
that's really what they are. The attempt to treat them differently was
ultimately incorrect and led to miscompiles, e.g. when using non-splat
constants such as (1, 0) as operands.
Recognize 32-bit float constants for i/u16 instructions. This is a bit
odd conceptually, but it matches HW behavior and SP3.
Remove isFoldableLiteralV216; there was too much magic in the dependency
between it and its use in SIFoldOperands. Instead, we now simply rely on
checking whether a constant is an inline constant, and trying a bunch of
permutations of the low and high halves. This is more obviously correct
and leads to some new cases where inline constants are used as shown by
tests.
Move the logic for switching packed add vs. sub into SIFoldOperands.
This has two benefits: all logic that optimizes for inline constants in
packed math is now in one place; and it applies to both SelectionDAG and
GISel paths.
Disable the use of opsel with v_dot* instructions on gfx11. They are
documented to ignore opsel on src0 and src1. It may be interesting to
re-enable the use of opsel on src2 as a future optimization.
A similar "proper" fix of what inline constants mean could potentially
be applied to unpacked 16-bit ops. However, it's less clear what the
benefit would be, and there are surely places where we'd have to
carefully audit whether values are properly sign- or zero-extended. It
is best to keep such a change separate.
Fixes: Corruption in FSR 2.0 (latent bug exposed by an LLPC change)
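A self-contained sketch of the "try permutations of the low and high halves" idea (`isInlineImm` is a simplified hypothetical stand-in for the real SIInstrInfo::isInlineConstant check, and the op_sel bookkeeping is elided):
```cpp
#include <cstdint>
#include <initializer_list>
#include <optional>

// Hypothetical stand-in for SIInstrInfo::isInlineConstant: accept the
// small signed integers the ISA encodes inline (simplified; the real
// check also covers the inlinable float values).
static bool isInlineImm(uint32_t V) {
  int32_t S = static_cast<int32_t>(V);
  return S >= -16 && S <= 64;
}

// Sketch: for a packed v2x16 immediate, try the half permutations that
// op_sel can express and return the first inlinable one, if any.
static std::optional<uint32_t> findInlinableSwizzle(uint32_t Imm) {
  const uint32_t Lo = Imm & 0xffffu, Hi = Imm >> 16;
  for (uint32_t Cand : {Imm, Lo << 16 | Lo, Hi << 16 | Hi, Lo << 16 | Hi})
    if (isInlineImm(Cand))
      return Cand;
  return std::nullopt;
}
```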
|
#
87d884b5 |
| 28-Nov-2023 |
Stanislav Mekhanoshin <rampitec@users.noreply.github.com> |
[AMDGPU] Fix folding of v2i16/v2f16 splat imms (#72709)
We can use inline constants with packed 16-bit operands, but these
should use op_sel. Currently a splat of an inlinable constant is
considered legal, which is not really true if we fail to fold it with
op_sel and drop the high half. It may be legal as a literal but not as an
inline constant, but then the usual literal checks must be performed.
This patch makes these splat literals illegal but adds additional logic
to the operand folding to keep current folds. This logic is somewhat
heavy though.
This fixed a constant bus violation in the fdot2 test.
|
Revision tags: llvmorg-17.0.6 |
|
#
82d22a1b |
| 28-Nov-2023 |
Stanislav Mekhanoshin <rampitec@users.noreply.github.com> |
[AMDGPU] Fixed folding of inline imm into dot w/o opsel (#73589)
A splat packed constant can be folded as an inline immediate but it
shall use opsel. On gfx940 this code path can be skipped due to a HW bug
workaround, and then it may be folded w/o opsel, which is a bug. Fixed.
|
Revision tags: llvmorg-17.0.5 |
|
#
a4196666 |
| 13-Nov-2023 |
Jay Foad <jay.foad@amd.com> |
[AMDGPU] Revert "Preliminary patch for divergence driven instruction selection. Operands Folding 1." (#71710)
This reverts commit 201f892b3b597f24287ab6a712a286e25a45a7d9.
|