xref: /openbsd-src/gnu/llvm/llvm/lib/CodeGen/README.txt (revision 09467b48e8bc8b4905716062da846024139afbf2)
//===---------------------------------------------------------------------===//

Common register allocation / spilling problem:

        mul lr, r4, lr
        str lr, [sp, #+52]
        ldr lr, [r1, #+32]
        sxth r3, r3
        ldr r4, [sp, #+52]
        mla r4, r3, lr, r4

can be:

        mul lr, r4, lr
        mov r4, lr
        str lr, [sp, #+52]
        ldr lr, [r1, #+32]
        sxth r3, r3
        mla r4, r3, lr, r4

and then "merge" mul and mov:

        mul r4, r4, lr
        str r4, [sp, #+52]
        ldr lr, [r1, #+32]
        sxth r3, r3
        mla r4, r3, lr, r4

This also increases the likelihood that the store becomes dead.

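The mul/mov merge above can be sketched as a peephole over a toy
(op, dst, sources) instruction form. This is a hypothetical illustration, not
LLVM's MachineInstr API: when an instruction is immediately followed by a copy
of its result, retarget it to the copy's destination, drop the copy, and
rewrite reads of the old destination up to its next definition.

```python
# Toy peephole sketch of the "merge mul and mov" step. Instructions are
# (op, dst, [sources]); dst may be None for stores. Hypothetical model only.
def merge_copy(insns):
    out = list(insns)
    for i, (op, dst, srcs) in enumerate(out[:-1]):
        nop, ndst, nsrcs = out[i + 1]
        if nop == 'mov' and nsrcs == [dst]:
            out[i] = (op, ndst, srcs)   # mul lr, ... becomes mul r4, ...
            del out[i + 1]              # the mov is no longer needed
            for j in range(i + 1, len(out)):
                jop, jdst, jsrcs = out[j]
                # reads of the old dest now read the merged dest
                out[j] = (jop, jdst, [ndst if s == dst else s for s in jsrcs])
                if jdst == dst:         # old register redefined; stop
                    break
            break
    return out

insns = [("mul", "lr", ["r4", "lr"]), ("mov", "r4", ["lr"]),
         ("str", None, ["lr"]),       ("ldr", "lr", ["r1"]),
         ("sxth", "r3", ["r3"]),      ("mla", "r4", ["r3", "lr", "r4"])]
merged = merge_copy(insns)
assert merged[0] == ("mul", "r4", ["r4", "lr"])
assert merged[1] == ("str", None, ["r4"])   # the store may now become dead
```

A real pass would also have to verify that the copy's destination is dead at
the defining instruction; the sketch assumes it.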
//===---------------------------------------------------------------------===//

bb27 ...
        ...
        %reg1037 = ADDri %reg1039, 1
        %reg1038 = ADDrs %reg1032, %reg1039, %noreg, 10
    Successors according to CFG: 0x8b03bf0 (#5)

bb76 (0x8b03bf0, LLVM BB @0x8b032d0, ID#5):
    Predecessors according to CFG: 0x8b0c5f0 (#3) 0x8b0a7c0 (#4)
        %reg1039 = PHI %reg1070, mbb<bb76.outer,0x8b0c5f0>, %reg1037, mbb<bb27,0x8b0a7c0>

Note that ADDri is not a two-address instruction. However, its result %reg1037
is an operand of the PHI node in bb76, and its operand %reg1039 is the result
of that PHI node. We should treat it as two-address code and make sure the
ADDri is scheduled after any node that reads %reg1039.

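The constraint above amounts to extra edges in the scheduling dependence
graph. A minimal, hypothetical sketch using Python's stdlib topological
sorter as a stand-in for the scheduler (the node names are taken from the
example; the graph shape is invented for illustration):

```python
from graphlib import TopologicalSorter

# Because %reg1037 (defined by ADDri) feeds the PHI that defines ADDri's own
# operand %reg1039, treat the pair like two-address code: add edges so every
# reader of %reg1039 in the block (here ADDrs) is scheduled before the ADDri.
readers_of_1039 = ["ADDrs"]
graph = {"ADDri": set(readers_of_1039)}   # map: node -> must-run-before set

order = list(TopologicalSorter(graph).static_order())
assert order.index("ADDrs") < order.index("ADDri")
```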
//===---------------------------------------------------------------------===//

Use local info (i.e. register scavenger) to assign it a free register to allow
reuse:
        ldr r3, [sp, #+4]
        add r3, r3, #3
        ldr r2, [sp, #+8]
        add r2, r2, #2
        ldr r1, [sp, #+4]  <==
        add r1, r1, #1
        ldr r0, [sp, #+4]
        add r0, r0, #2

//===---------------------------------------------------------------------===//

LLVM aggressively lifts common subexpressions out of loops. Sometimes this can
have negative side effects:

R1 = X + 4
R2 = X + 7
R3 = X + 15

loop:
load [i + R1]
...
load [i + R2]
...
load [i + R3]

Suppose register pressure is high; R1, R2, and R3 can all be spilled. We need
to implement proper re-materialization to handle this:

R1 = X + 4
R2 = X + 7
R3 = X + 15

loop:
R1 = X + 4  @ re-materialized
load [i + R1]
...
R2 = X + 7  @ re-materialized
load [i + R2]
...
R3 = X + 15  @ re-materialized
load [i + R3]

Furthermore, with re-association, we can enable sharing:

R1 = X + 4
R2 = X + 7
R3 = X + 15

loop:
T = i + X
load [T + 4]
...
load [T + 7]
...
load [T + 15]
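The re-association step relies on i + (X + c) being equal to (i + X) + c, so
only one shared base stays live in the loop instead of three registers. A
hypothetical sketch of the two address computations (plain Python arithmetic,
names chosen to match the example above):

```python
# Each address i + (X + c) is rewritten as T + c with T = i + X computed
# once per iteration, so only X is loop-invariant and live, not R1/R2/R3.
def naive(offsets, i, X):
    Rs = [X + c for c in offsets]      # three live-across-the-loop values
    return [i + r for r in Rs]

def reassociated(offsets, i, X):
    T = i + X                          # single shared base inside the loop
    return [T + c for c in offsets]

assert naive([4, 7, 15], i=100, X=8) == reassociated([4, 7, 15], i=100, X=8)
```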
//===---------------------------------------------------------------------===//

It's not always a good idea to choose rematerialization over spilling. If all
the load / store instructions would be folded then spilling is cheaper because
it won't require new live intervals / registers. See 2003-05-31-LongShifts for
an example.

//===---------------------------------------------------------------------===//

With a copying garbage collector, derived pointers must not be retained across
collector safe points; the collector could move the objects and invalidate the
derived pointer. This is bad enough in the first place, but safe points can
crop up unpredictably. Consider:

        %array = load { i32, [0 x %obj] }** %array_addr
        %nth_el = getelementptr { i32, [0 x %obj] }* %array, i32 0, i32 %n
        %old = load %obj** %nth_el
        %z = div i64 %x, %y
        store %obj* %new, %obj** %nth_el

If the i64 division is lowered to a libcall, then a safe point will (must)
appear for the call site. If a collection occurs, %array and %nth_el no longer
point into the correct object.

The fix for this is to copy address calculations so that dependent pointers
are never live across safe point boundaries. But the loads cannot be copied
like this if there was an intervening store, so this may be hard to get right.

Only a concurrent mutator can trigger a collection at the libcall safe point.
So single-threaded programs do not have this requirement, even with a copying
collector. Still, LLVM optimizations would probably undo a front-end's careful
work.

//===---------------------------------------------------------------------===//

The ocaml frametable structure supports liveness information. It would be good
to support it.

//===---------------------------------------------------------------------===//

The FIXME in ComputeCommonTailLength in BranchFolding.cpp needs to be
revisited. The check is there to work around a misuse of directives in inline
assembly.

//===---------------------------------------------------------------------===//

It would be good to detect collector/target compatibility instead of silently
doing the wrong thing.

//===---------------------------------------------------------------------===//

It would be really nice to be able to write patterns in .td files for copies,
which would eliminate a bunch of explicit predicates on them (e.g. no side
effects).  Once this is in place, it would be even better to have tblgen
synthesize the various copy insertion/inspection methods in TargetInstrInfo.

//===---------------------------------------------------------------------===//

Stack coloring improvements:

1. Do proper LiveStacks analysis on all stack objects including those which are
   not spill slots.
2. Reorder objects to fill in gaps between objects.
   e.g. 4, 1, <gap>, 4, 1, 1, 1, <gap>, 4 => 4, 1, 1, 1, 1, 4, 4

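Point 2 above is a small packing problem. A hypothetical sketch that
reproduces the example, assuming each object's alignment equals its size
(any gap-free order works; sorting by descending alignment is just one
simple way to get one):

```python
# Lay out stack objects in the given order and return total frame size,
# inserting alignment padding (the <gap>s above) where required.
# Simplifying assumption: alignment == object size.
def frame_size(sizes):
    offset = 0
    for size in sizes:
        align = size
        offset = (offset + align - 1) // align * align  # align up: a gap
        offset += size
    return offset

original = [4, 1, 4, 1, 1, 1, 4]            # 4, 1, <gap>, 4, 1, 1, 1, <gap>, 4
reordered = sorted(original, reverse=True)  # gap-free: 4, 4, 4, 1, 1, 1, 1

assert frame_size(original) == 20   # 16 bytes of data + 4 bytes of gaps
assert frame_size(reordered) == 16  # no padding at all
```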
//===---------------------------------------------------------------------===//

The scheduler should be able to sort nearby instructions by their address. For
example, in an expanded memset sequence it's not uncommon to see code like this:

  movl $0, 4(%rdi)
  movl $0, 8(%rdi)
  movl $0, 12(%rdi)
  movl $0, 0(%rdi)

Each of the stores is independent, and the scheduler is currently making an
arbitrary decision about the order.

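One possible tie-break is sketched below on the textual form of the stores
(a toy model; a real scheduler would key on the MachineOperand offset, not
parse assembly strings):

```python
import re

# Order independent stores by their address displacement so the memset
# expansion above is emitted in ascending-address order.
def sort_stores(insns):
    def displacement(insn):
        # pull the "<disp>(" out of e.g. "movl $0, 12(%rdi)" -> 12
        return int(re.search(r'(-?\d+)\(', insn).group(1))
    return sorted(insns, key=displacement)

stores = ["movl $0, 4(%rdi)", "movl $0, 8(%rdi)",
          "movl $0, 12(%rdi)", "movl $0, 0(%rdi)"]
assert sort_stores(stores)[0] == "movl $0, 0(%rdi)"
```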
//===---------------------------------------------------------------------===//

Another opportunity in this code is that the $0 could be moved to a register:

  movl $0, 4(%rdi)
  movl $0, 8(%rdi)
  movl $0, 12(%rdi)
  movl $0, 0(%rdi)

This would save substantial code size, especially for longer sequences like
this. It would be easy to have a rule telling isel to avoid matching MOV32mi
if the immediate has more than some fixed number of uses. It's more involved
to teach the register allocator how to do late folding to recover from
excessive register pressure.
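A hypothetical sketch of the use-count rule, again on the textual form of the
instructions (the threshold, the %eax scratch register, and the assumption
that a register is free are all invented for illustration):

```python
from collections import Counter

# If the same immediate is stored more than `threshold` times, materialize
# it in a register once and rewrite the stores. Assumes one scratch register
# (%eax here, arbitrarily) is free; a real pass would ask the allocator.
def hoist_immediates(insns, threshold=2):
    counts = Counter(i.split(',')[0] for i in insns if i.startswith('movl $'))
    out, hoisted = [], set()
    for insn in insns:
        op = insn.split(',')[0]                   # e.g. "movl $0"
        if op.startswith('movl $') and counts[op] > threshold:
            imm = op.split()[1]                   # e.g. "$0"
            if imm not in hoisted:
                out.append(f'movl {imm}, %eax')   # materialize once
                hoisted.add(imm)
            out.append('movl %eax,' + insn.split(',', 1)[1])
        else:
            out.append(insn)
    return out

stores = ["movl $0, 4(%rdi)", "movl $0, 8(%rdi)",
          "movl $0, 12(%rdi)", "movl $0, 0(%rdi)"]
assert hoist_immediates(stores)[0] == "movl $0, %eax"
```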