Revision tags: llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4 |

# 38fffa63 | 06-Nov-2024 | Paul Walker <paul.walker@arm.com>

[LLVM][IR] Use splat syntax when printing Constant[Data]Vector. (#112548)
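A rough sketch of what this printing change means (the store and the constant value are made-up examples, not taken from the patch): a uniform fixed-width vector constant that used to be printed element by element, e.g.
  store <4 x i32> <i32 7, i32 7, i32 7, i32 7>, ptr %p
is now printed with the splat shorthand:
  store <4 x i32> splat (i32 7), ptr %p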
Revision tags: llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4 |

# a1058776 | 21-Aug-2024 | Nikita Popov <npopov@redhat.com>
[InstCombine] Remove some of the complexity-based canonicalization (#91185)
The idea behind this canonicalization is that it allows us to handle fewer
patterns, because we know that some will be canonicalized away. This is
indeed very useful, e.g. to know that constants are always on the right.
However, this is only useful if the canonicalization is actually
reliable. This is the case for constants, but not for arguments: Moving
these to the right makes it look like the "more complex" expression is
guaranteed to be on the left, but this is not actually the case in
practice. It fails as soon as you replace the argument with another
instruction.
The end result is that things appear to work correctly in tests while
they actually don't. We use the "thwart complexity-based
canonicalization" trick to handle this in tests, but it's often a
challenge for new contributors to get right, and based on the
regressions this PR originally exposed, we clearly fail to do so in
many cases.
For this reason, I think that it's better to remove this complexity
canonicalization. It will make it much easier to write tests for
commuted cases and make sure that they are handled.
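To illustrate the test idiom mentioned above, here is a hedged sketch (the function and value names are invented): to keep an operand on a chosen side of a commutative instruction despite the old canonicalization, tests route an argument through a throwaway instruction such as a self-multiply:

define i1 @commuted_case(i8 %x, i8 %y) {
  %yy = mul i8 %y, %y ; thwart complexity-based canonicalization
  %m = and i8 %yy, %x ; %yy is an instruction, so it stays on the left
  %r = icmp eq i8 %m, 0
  ret i1 %r
}

With the canonicalization removed, commuted variants like this have to be tested (and matched) explicitly rather than assumed away.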
Revision tags: llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5 |

# 1baa3850 | 18-Apr-2024 | Nikita Popov <npopov@redhat.com>
[IR][PatternMatch] Only accept poison in getSplatValue() (#89159)
In #88217 a large set of matchers was changed to only accept poison
values in splats, but not undef values. This is because we now use
poison for non-demanded vector elements, and allowing undef can cause
correctness issues.
This patch covers the remaining matchers by changing the AllowUndef
parameter of getSplatValue() to AllowPoison instead. We also carry out
corresponding renames in matchers.
As a followup, we may want to change the default for things like m_APInt
to m_APIntAllowPoison (as this is much less risky when only allowing
poison), but this change doesn't do that.
There is one caveat here: we have a single place
(X86FixupVectorConstants) that does require handling of vector splats
with undefs. This is because it works on backend constant pool
entries, which currently still use undef instead of poison for
non-demanded elements (because SDAG as a whole does not have an explicit
poison representation). As it's just the single use, I've open-coded a
getSplatValueAllowUndef() helper there, to discourage use anywhere
else.
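As a hedged illustration of the distinction (the constants are arbitrary): after this change, a vector constant that mixes a value with poison still reports a splat value, while one that mixes it with undef does not:

  %a = add <2 x i8> %v, <i8 1, i8 poison> ; still treated as a splat of 1
  %b = add <2 x i8> %v, <i8 1, i8 undef>  ; no longer treated as a splat of 1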
Revision tags: llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, working, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init, llvmorg-14.0.6, llvmorg-14.0.5, llvmorg-14.0.4, llvmorg-14.0.3, llvmorg-14.0.2, llvmorg-14.0.1, llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3, llvmorg-14.0.0-rc2, llvmorg-14.0.0-rc1 |

# acdc419c | 04-Feb-2022 | Bjorn Pettersson <bjorn.a.pettersson@ericsson.com>
[test] Use -passes=instcombine instead of -instcombine in lots of tests. NFC
Another step in moving away from the deprecated syntax for specifying the pass pipeline in opt.
Differential Revision: https://reviews.llvm.org/D119081
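As a rough sketch of the mechanical change involved (the test file is hypothetical), a RUN line like
  ; RUN: opt < %s -instcombine -S | FileCheck %s
becomes
  ; RUN: opt < %s -passes=instcombine -S | FileCheck %s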
Revision tags: llvmorg-15-init, llvmorg-13.0.1, llvmorg-13.0.1-rc3, llvmorg-13.0.1-rc2 |

# 6c716c85 | 29-Dec-2021 | Sanjay Patel <spatel@rotateright.com>
[InstCombine] add more folds for unsigned overflow checks

((Op1 + C) & C) u<  Op1 --> Op1 != 0
((Op1 + C) & C) u>= Op1 --> Op1 == 0
Op0 u>  ((Op0 + C) & C) --> Op0 != 0
Op0 u<= ((Op0 + C) & C) --> Op0 == 0

https://alive2.llvm.org/ce/z/iUfXJN
https://alive2.llvm.org/ce/z/caAtjj

define i1 @src(i8 %x, i8 %y) {
  ; the add/mask must be with a low-bit mask (0x01ff...)
  %y1 = add i8 %y, 1
  %pop = call i8 @llvm.ctpop.i8(i8 %y1)
  %ismask = icmp eq i8 %pop, 1
  call void @llvm.assume(i1 %ismask)
  %a = add i8 %x, %y
  %m = and i8 %a, %y
  %r = icmp ult i8 %m, %x
  ret i1 %r
}

define i1 @tgt(i8 %x, i8 %y) {
  %r = icmp ne i8 %x, 0
  ret i1 %r
}

I suspect this can be generalized in some way, but this is the pattern I'm seeing in a motivating test based on issue #52851.
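To make the low-bit-mask requirement concrete, here is a hedged instance with C = 15 (chosen for illustration, not taken from the commit); 15 masks the low four bits, so the first fold collapses the compare to a zero test:

  %a = add i8 %x, 15
  %m = and i8 %a, 15
  %r = icmp ult i8 %m, %x ; folds to: %r = icmp ne i8 %x, 0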

# baa22e93 | 29-Dec-2021 | Sanjay Patel <spatel@rotateright.com>

[InstCombine] add tests for unsigned overflow of bitmask offset; NFC
Revision tags: llvmorg-13.0.1-rc1, llvmorg-13.0.0, llvmorg-13.0.0-rc4, llvmorg-13.0.0-rc3, llvmorg-13.0.0-rc2, llvmorg-13.0.0-rc1, llvmorg-14-init, llvmorg-12.0.1, llvmorg-12.0.1-rc4, llvmorg-12.0.1-rc3, llvmorg-12.0.1-rc2, llvmorg-12.0.1-rc1, llvmorg-12.0.0, llvmorg-12.0.0-rc5, llvmorg-12.0.0-rc4, llvmorg-12.0.0-rc3, llvmorg-12.0.0-rc2, llvmorg-11.1.0, llvmorg-11.1.0-rc3, llvmorg-12.0.0-rc1, llvmorg-13-init, llvmorg-11.1.0-rc2, llvmorg-11.1.0-rc1, llvmorg-11.0.1, llvmorg-11.0.1-rc2, llvmorg-11.0.1-rc1, llvmorg-11.0.0, llvmorg-11.0.0-rc6, llvmorg-11.0.0-rc5 |

# ee34d9b2 | 29-Sep-2020 | Sanjay Patel <spatel@rotateright.com>
[InstCombine] use redirect of input file in regression tests; NFC
This is a repeat of 1880092722 from 2009. We should have less risk of hitting bugs at this point because we auto-generate positive CHECK lines only, but this makes things consistent.
Copying the original commit message: "Change tests from "opt %s" to "opt < %s" so that opt doesn't see the input filename so that opt doesn't print the input filename in the output so that grep lines in the tests don't unintentionally match strings in the input filename."
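A hedged sketch of the pattern being changed (the test file is hypothetical): a RUN line such as
  ; RUN: opt %s -instcombine -S | FileCheck %s
becomes
  ; RUN: opt < %s -instcombine -S | FileCheck %s
so that opt reads from stdin and the input filename cannot leak into the printed module or be matched accidentally by CHECK/grep lines.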
Revision tags: llvmorg-11.0.0-rc4, llvmorg-11.0.0-rc3, llvmorg-11.0.0-rc2, llvmorg-11.0.0-rc1, llvmorg-12-init, llvmorg-10.0.1, llvmorg-10.0.1-rc4, llvmorg-10.0.1-rc3, llvmorg-10.0.1-rc2, llvmorg-10.0.1-rc1, llvmorg-10.0.0, llvmorg-10.0.0-rc6, llvmorg-10.0.0-rc5, llvmorg-10.0.0-rc4, llvmorg-10.0.0-rc3, llvmorg-10.0.0-rc2, llvmorg-10.0.0-rc1, llvmorg-11-init, llvmorg-9.0.1, llvmorg-9.0.1-rc3, llvmorg-9.0.1-rc2, llvmorg-9.0.1-rc1, llvmorg-9.0.0, llvmorg-9.0.0-rc6, llvmorg-9.0.0-rc5, llvmorg-9.0.0-rc4 |

# 071ce667 | 05-Sep-2019 | Roman Lebedev <lebedev.ri@gmail.com>
[NFC][InstCombine] Overhaul 'unsigned add overflow' tests, ensure that all 3 patterns have full test coverage
llvm-svn: 371108

# ecb7ea1a | 05-Sep-2019 | Roman Lebedev <lebedev.ri@gmail.com>
[InstCombine] foldICmpBinOp(): consider inverted check in 'unsigned add overflow' check

A follow-up for r342004. This will be changed to produce @llvm.add.with.overflow in a later patch, but for now just make things more consistent overall.

https://rise4fun.com/Alive/qxE

Name: (Op1 + X) u< Op1 --> ~Op1 u< X
  %t0 = add i8 %Op1, %X
  %r = icmp ult i8 %t0, %Op1
=>
  %n = xor i8 %Op1, -1
  %r = icmp ult i8 %n, %X

Name: (Op1 + X) u>= Op1 --> ~Op1 u>= X
  %t0 = add i8 %Op1, %X
  %r = icmp uge i8 %t0, %Op1
=>
  %n = xor i8 %Op1, -1
  %r = icmp uge i8 %n, %X

;-------------------------------------------------------------------------------

Name: Op0 u> (Op0 + X) --> X u> ~Op0
  %t0 = add i8 %Op0, %X
  %r = icmp ugt i8 %Op0, %t0
=>
  %n = xor i8 %Op0, -1
  %r = icmp ugt i8 %X, %n

Name: Op0 u<= (Op0 + X) --> X u<= ~Op0
  %t0 = add i8 %Op0, %X
  %r = icmp ule i8 %Op0, %t0
=>
  %n = xor i8 %Op0, -1
  %r = icmp ule i8 %X, %n
llvm-svn: 371100
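For reference, a hedged sketch of the intrinsic form the message alludes to for a later patch (not part of this commit): the unsigned-add overflow bit would come from @llvm.uadd.with.overflow instead of the compare:

  %agg = call { i8, i1 } @llvm.uadd.with.overflow.i8(i8 %Op1, i8 %X)
  %ov = extractvalue { i8, i1 } %agg, 1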

# 745046c2 | 05-Sep-2019 | Roman Lebedev <lebedev.ri@gmail.com>
[InstCombine][NFC] Tests for 'unsigned add overflow' check

----------------------------------------
Name: unsigned add, overflow, v0
  %add = add i8 %x, %y
  %ov = icmp ult i8 %add, %x
=>
  %agg = uadd_overflow i8 %x, %y
  %add = extractvalue {i8, i1} %agg, 0
  %ov = extractvalue {i8, i1} %agg, 1

Done: 1
Optimization is correct!

----------------------------------------
Name: unsigned add, overflow, v1
  %add = add i8 %x, %y
  %ov = icmp ult i8 %add, %y
=>
  %agg = uadd_overflow i8 %x, %y
  %add = extractvalue {i8, i1} %agg, 0
  %ov = extractvalue {i8, i1} %agg, 1

Done: 1
Optimization is correct!

----------------------------------------
Name: unsigned add, no overflow, v0
  %add = add i8 %x, %y
  %ov = icmp uge i8 %add, %x
=>
  %agg = uadd_overflow i8 %x, %y
  %add = extractvalue {i8, i1} %agg, 0
  %not.ov = extractvalue {i8, i1} %agg, 1
  %ov = xor %not.ov, -1

Done: 1
Optimization is correct!

----------------------------------------
Name: unsigned add, no overflow, v1
  %add = add i8 %x, %y
  %ov = icmp uge i8 %add, %y
=>
  %agg = uadd_overflow i8 %x, %y
  %add = extractvalue {i8, i1} %agg, 0
  %not.ov = extractvalue {i8, i1} %agg, 1
  %ov = xor %not.ov, -1

Done: 1
Optimization is correct!
llvm-svn: 371098