xref: /minix3/external/bsd/llvm/dist/llvm/test/CodeGen/X86/shift-combine-crash.ll (revision 0a6a1f1d05b60e214de2f05a7310ddd1f0e590e7)
; RUN: llc < %s -mtriple=x86_64-unknown-linux-gnu -mcpu=corei7 > /dev/null

; Verify that DAGCombiner doesn't crash with an assertion failure when
; attempting to cast an ISD::UNDEF node to a ConstantSDNode.

; During type legalization, the vector shift operation in function @test1 is
; split into two legal shifts that work on <2 x i64> elements.
; The first shift of the legalized sequence would be a shift by all undefs.
; DAGCombiner will then try to simplify the vector shift and check whether the
; vector of shift counts is a splat. Make sure that llc doesn't crash
; at that stage.


define <4 x i64> @test1(<4 x i64> %A) {
  %shl = shl <4 x i64> %A, <i64 undef, i64 undef, i64 1, i64 2>
  ret <4 x i64> %shl
}

; Also, verify that DAGCombiner doesn't crash when trying to combine shifts
; with different combinations of undef elements in the vector shift count.

define <4 x i64> @test2(<4 x i64> %A) {
  %shl = shl <4 x i64> %A, <i64 2, i64 3, i64 undef, i64 undef>
  ret <4 x i64> %shl
}

define <4 x i64> @test3(<4 x i64> %A) {
  %shl = shl <4 x i64> %A, <i64 2, i64 undef, i64 3, i64 undef>
  ret <4 x i64> %shl
}

define <4 x i64> @test4(<4 x i64> %A) {
  %shl = shl <4 x i64> %A, <i64 undef, i64 2, i64 undef, i64 3>
  ret <4 x i64> %shl
}

define <4 x i64> @test5(<4 x i64> %A) {
  %shl = shl <4 x i64> %A, <i64 2, i64 undef, i64 undef, i64 undef>
  ret <4 x i64> %shl
}

define <4 x i64> @test6(<4 x i64> %A) {
  %shl = shl <4 x i64> %A, <i64 undef, i64 undef, i64 3, i64 undef>
  ret <4 x i64> %shl
}

define <4 x i64> @test7(<4 x i64> %A) {
  %shl = shl <4 x i64> %A, <i64 undef, i64 undef, i64 undef, i64 3>
  ret <4 x i64> %shl
}

define <4 x i64> @test8(<4 x i64> %A) {
  %shl = shl <4 x i64> %A, <i64 undef, i64 undef, i64 undef, i64 undef>
  ret <4 x i64> %shl
}