=========
MemorySSA
=========

.. contents::
   :local:

Introduction
============

``MemorySSA`` is an analysis that allows us to cheaply reason about the
interactions between various memory operations. Its goal is to replace
``MemoryDependenceAnalysis`` for most (if not all) use-cases. This is because,
unless you're very careful, use of ``MemoryDependenceAnalysis`` can easily
result in quadratic-time algorithms in LLVM. Additionally, ``MemorySSA`` doesn't
have as many arbitrary limits as ``MemoryDependenceAnalysis``, so you should get
better results, too. One common use of ``MemorySSA`` is to quickly establish
that something definitely cannot happen (for example, to show that an operation
cannot be hoisted out of a loop).

At a high level, one of the goals of ``MemorySSA`` is to provide an SSA based
form for memory, complete with def-use and use-def chains, which
enables users to quickly find may-def and may-uses of memory operations.
It can also be thought of as a way to cheaply give versions to the complete
state of memory, and associate memory operations with those versions.

This document goes over how ``MemorySSA`` is structured, and some basic
intuition on how ``MemorySSA`` works.

A paper on MemorySSA (with notes about how it's implemented in GCC) `can be
found here <http://www.airs.com/dnovillo/Papers/mem-ssa.pdf>`_. Though, it's
relatively out-of-date; the paper references multiple memory partitions, but GCC
eventually swapped to just using one, like we now have in LLVM.  Like
GCC's, LLVM's MemorySSA is intraprocedural.


MemorySSA Structure
===================

MemorySSA is a virtual IR. After it's built, ``MemorySSA`` will contain a
structure that maps ``Instruction``\ s to ``MemoryAccess``\ es, which are
``MemorySSA``'s parallel to LLVM ``Instruction``\ s.

Each ``MemoryAccess`` can be one of three types:

- ``MemoryDef``
- ``MemoryPhi``
- ``MemoryUse``

``MemoryDef``\ s are operations which may either modify memory, or which
introduce some kind of ordering constraints. Examples of ``MemoryDef``\ s
include ``store``\ s, function calls, ``load``\ s with ``acquire`` (or higher)
ordering, volatile operations, memory fences, etc. A ``MemoryDef``
always introduces a new version of the entire memory and is linked with a single
``MemoryDef/MemoryPhi`` which is the version of memory that the new
version is based on. This implies that there is a *single*
``Def`` chain that connects all the ``Def``\ s, either directly
or indirectly. For example in:

.. code-block:: llvm

  b = MemoryDef(a)
  c = MemoryDef(b)
  d = MemoryDef(c)

``d`` is connected directly with ``c`` and indirectly with ``b``.
This means that ``d`` potentially clobbers (see below) ``c`` *or*
``b`` *or* both. This in turn implies that without the use of `The walker`_,
initially every ``MemoryDef`` clobbers every other ``MemoryDef``.

``MemoryPhi``\ s are ``PhiNode``\ s, but for memory operations. If at any
point we have two (or more) ``MemoryDef``\ s that could flow into a
``BasicBlock``, the block's top ``MemoryAccess`` will be a
``MemoryPhi``. As in LLVM IR, ``MemoryPhi``\ s don't correspond to any
concrete operation. As such, ``BasicBlock``\ s are mapped to ``MemoryPhi``\ s
inside ``MemorySSA``, whereas ``Instruction``\ s are mapped to ``MemoryUse``\ s
and ``MemoryDef``\ s.

Note also that in SSA, Phi nodes merge must-reach definitions (that is,
definitions that *must* be new versions of variables). In MemorySSA, PHI nodes
merge may-reach definitions (that is, until disambiguated, the versions that
reach a phi node may or may not clobber a given variable).

``MemoryUse``\ s are operations which use but don't modify memory. An example of
a ``MemoryUse`` is a ``load``, or a ``readonly`` function call.

Every function has a special ``MemoryDef`` called ``liveOnEntry``.
It dominates every ``MemoryAccess`` in the function that ``MemorySSA`` is being
run on, and implies that we've hit the top of the function. It's the only
``MemoryDef`` that maps to no ``Instruction`` in LLVM IR. Use of
``liveOnEntry`` implies that the memory being used is either undefined or
defined before the function begins.
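
For instance, a pass can ask whether a load can only observe memory from before
the function. A minimal sketch (the helper name and the ``MSSA``/``LI``
variables are hypothetical, assuming an already-built ``MemorySSA``):

.. code-block:: c++

  // Returns true if the load's defining access is the special liveOnEntry
  // def, i.e. no access inside this function is known to feed it.
  static bool readsOnlyEntryState(MemorySSA &MSSA, LoadInst *LI) {
    MemoryUseOrDef *MA = MSSA.getMemoryAccess(LI);
    return MA && MSSA.isLiveOnEntryDef(MA->getDefiningAccess());
  }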

An example of all of this overlaid on LLVM IR (obtained by running ``opt
-passes='print<memoryssa>' -disable-output`` on an ``.ll`` file) is below. When
viewing this example, it may be helpful to view it in terms of clobbers.
The operands of a given ``MemoryAccess`` are all (potential) clobbers of said
``MemoryAccess``, and the value produced by a ``MemoryAccess`` can act as a clobber
for other ``MemoryAccess``\ es.

If a ``MemoryAccess`` is a *clobber* of another, it means that these two
``MemoryAccess``\ es may access the same memory. For example, ``x = MemoryDef(y)``
means that ``x`` potentially modifies memory that ``y`` modifies/constrains
(or has modified / constrained).
In the same manner, ``a = MemoryPhi({BB1,b},{BB2,c})`` means that
anyone that uses ``a`` is accessing memory potentially modified / constrained
by either ``b`` or ``c`` (or both).  And finally, ``MemoryUse(x)`` means
that this use accesses memory that ``x`` has modified / constrained
(as an example, think that if ``x = MemoryDef(...)``
and ``MemoryUse(x)`` are in the same loop, the use can't
be hoisted outside alone).

Another useful way of looking at it is in terms of memory versions.
In that view, operands of a given ``MemoryAccess`` are the version
of the entire memory before the operation, and if the access produces
a value (i.e. ``MemoryDef/MemoryPhi``),
the value is the new version of the memory after the operation.

.. code-block:: llvm

  define void @foo() {
  entry:
    %p1 = alloca i8
    %p2 = alloca i8
    %p3 = alloca i8
    ; 1 = MemoryDef(liveOnEntry)
    store i8 0, ptr %p3
    br label %while.cond

  while.cond:
    ; 6 = MemoryPhi({entry,1},{if.end,4})
    br i1 undef, label %if.then, label %if.else

  if.then:
    ; 2 = MemoryDef(6)
    store i8 0, ptr %p1
    br label %if.end

  if.else:
    ; 3 = MemoryDef(6)
    store i8 1, ptr %p2
    br label %if.end

  if.end:
    ; 5 = MemoryPhi({if.then,2},{if.else,3})
    ; MemoryUse(5)
    %1 = load i8, ptr %p1
    ; 4 = MemoryDef(5)
    store i8 2, ptr %p2
    ; MemoryUse(1)
    %2 = load i8, ptr %p3
    br label %while.cond
  }

The ``MemorySSA`` IR is shown in comments that precede the instructions they map
to (if such an instruction exists). For example, ``1 = MemoryDef(liveOnEntry)``
is a ``MemoryAccess`` (specifically, a ``MemoryDef``), and it describes the LLVM
instruction ``store i8 0, ptr %p3``. Other places in ``MemorySSA`` refer to this
particular ``MemoryDef`` as ``1`` (much like how one can refer to ``load i8, ptr
%p1`` in LLVM with ``%1``). Again, ``MemoryPhi``\ s don't correspond to any LLVM
Instruction, so the line directly below a ``MemoryPhi`` isn't special.

Going from the top down:

- ``6 = MemoryPhi({entry,1},{if.end,4})`` notes that, when entering
  ``while.cond``, the reaching definition for it is either ``1`` or ``4``. This
  ``MemoryPhi`` is referred to in the textual IR by the number ``6``.
- ``2 = MemoryDef(6)`` notes that ``store i8 0, ptr %p1`` is a definition,
  and its reaching definition before it is ``6``, or the ``MemoryPhi`` after
  ``while.cond``. (See the `Use and Def optimization`_ and `Precision`_
  sections below for why this ``MemoryDef`` isn't linked to a separate,
  disambiguated ``MemoryPhi``.)
- ``3 = MemoryDef(6)`` notes that ``store i8 1, ptr %p2`` is a definition; its
  reaching definition is also ``6``.
- ``5 = MemoryPhi({if.then,2},{if.else,3})`` notes that the clobber before
  this block could either be ``2`` or ``3``.
- ``MemoryUse(5)`` notes that ``load i8, ptr %p1`` is a use of memory, and that
  it's clobbered by ``5``.
- ``4 = MemoryDef(5)`` notes that ``store i8 2, ptr %p2`` is a definition; its
  reaching definition is ``5``.
- ``MemoryUse(1)`` notes that ``load i8, ptr %p3`` is just a user of memory,
  and the last thing that could clobber this use is above ``while.cond`` (e.g.
  the store to ``%p3``). In memory versioning parlance, it really only depends on
  the memory version 1, and is unaffected by the new memory versions generated since
  then.

As an aside, ``MemoryAccess`` is a ``Value`` mostly for convenience; it's not
meant to interact with LLVM IR.

Design of MemorySSA
===================

``MemorySSA`` is an analysis that can be built for any arbitrary function. When
it's built, it does a pass over the function's IR in order to build up its
mapping of ``MemoryAccess``\ es. You can then query ``MemorySSA`` for things
like the dominance relation between ``MemoryAccess``\ es, and get the
``MemoryAccess`` for any given ``Instruction``.

When ``MemorySSA`` is done building, it also hands you a ``MemorySSAWalker``
that you can use (see below).
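
For example, a function pass under the new pass manager might obtain
``MemorySSA`` and its walker roughly as follows (a sketch; ``ExamplePass`` is
hypothetical):

.. code-block:: c++

  PreservedAnalyses ExamplePass::run(Function &F, FunctionAnalysisManager &AM) {
    MemorySSA &MSSA = AM.getResult<MemorySSAAnalysis>(F).getMSSA();
    MemorySSAWalker *Walker = MSSA.getWalker();

    for (Instruction &I : instructions(F))
      if (MemoryAccess *MA = MSSA.getMemoryAccess(&I))
        // Ask for each memory instruction's clobber (see below). Dominance
        // between two accesses can be queried with MSSA.dominates(A, B).
        (void)Walker->getClobberingMemoryAccess(MA);

    return PreservedAnalyses::all();
  }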


The walker
----------

A structure that helps ``MemorySSA`` do its job is the ``MemorySSAWalker``, or
the walker, for short. The goal of the walker is to provide answers to clobber
queries beyond what's represented directly by ``MemoryAccess``\ es. For example,
given:

.. code-block:: llvm

  define void @foo() {
    %a = alloca i8
    %b = alloca i8

    ; 1 = MemoryDef(liveOnEntry)
    store i8 0, ptr %a
    ; 2 = MemoryDef(1)
    store i8 0, ptr %b
  }

The store to ``%a`` is clearly not a clobber for the store to ``%b``. It would
be the walker's goal to figure this out, and return ``liveOnEntry`` when queried
for the clobber of ``MemoryAccess`` ``2``.
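
Concretely, for the snippet above, the default walker's answer can be checked
like this (a sketch; ``StoreB`` is assumed to be the ``store`` to ``%b``):

.. code-block:: c++

  MemoryAccess *Clobber = MSSA.getWalker()->getClobberingMemoryAccess(
      MSSA.getMemoryAccess(StoreB));
  // The two stores don't alias, so the only "clobber" of the store to %b is
  // the state of memory on function entry.
  bool OnlyClobberedByEntry = MSSA.isLiveOnEntryDef(Clobber);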

By default, ``MemorySSA`` provides a walker that can optimize ``MemoryDef``\ s
and ``MemoryUse``\ s by consulting whatever alias analysis stack you happen to
be using. Walkers were built to be flexible, though, so it's entirely reasonable
(and expected) to create more specialized walkers (e.g. one that specifically
queries ``GlobalsAA``, one that always stops at ``MemoryPhi`` nodes, etc).

Default walker APIs
^^^^^^^^^^^^^^^^^^^

There are two main APIs used to retrieve the clobbering access using the
walker; a short usage sketch of both follows the list:

-  ``MemoryAccess *getClobberingMemoryAccess(MemoryAccess *MA);`` returns the
   clobbering memory access for ``MA``, caching all intermediate results
   computed along the way as part of each access queried.

-  ``MemoryAccess *getClobberingMemoryAccess(MemoryAccess *MA, const MemoryLocation &Loc);``
   returns the access clobbering memory location ``Loc``, starting at ``MA``.
   Because this API does not request the clobbering access of a specific memory
   access, there are no results that can be cached.
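
As a sketch (assuming ``MSSA`` has been built and ``SI`` is some ``StoreInst``
in the function), the two forms can be used like this:

.. code-block:: c++

  MemorySSAWalker *Walker = MSSA.getWalker();
  MemoryAccess *StoreAccess = MSSA.getMemoryAccess(SI);

  // Form 1: the nearest access that clobbers the store itself. Intermediate
  // results are cached on the accesses visited along the way.
  MemoryAccess *Clobber = Walker->getClobberingMemoryAccess(StoreAccess);

  // Form 2: the nearest access, walking upwards from StoreAccess, that
  // clobbers the given location; nothing is cached for this form.
  MemoryLocation Loc = MemoryLocation::get(SI);
  MemoryAccess *LocClobber =
      Walker->getClobberingMemoryAccess(StoreAccess, Loc);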

Locating clobbers yourself
^^^^^^^^^^^^^^^^^^^^^^^^^^

If you choose to make your own walker, you can find the clobber for a
``MemoryAccess`` by walking every ``MemoryDef`` that dominates said
``MemoryAccess``. The structure of ``MemoryDef``\ s makes this relatively simple;
they ultimately form a linked list of every clobber that dominates the
``MemoryAccess`` that you're trying to optimize. In other words, the
``definingAccess`` of a ``MemoryDef`` is always the nearest dominating
``MemoryDef`` or ``MemoryPhi`` of said ``MemoryDef``.
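
A deliberately simplified sketch of such a walk (the ``Clobbers`` predicate is
hypothetical, and would typically be backed by alias analysis queries):

.. code-block:: c++

  // Walk the single chain of dominating defs upwards from MA's defining
  // access, stopping at the first def the predicate considers a clobber.
  // MemoryPhis are conservatively treated as clobbers in this sketch.
  MemoryAccess *findClobber(MemorySSA &MSSA, MemoryUseOrDef *MA,
                            function_ref<bool(MemoryDef *)> Clobbers) {
    MemoryAccess *Current = MA->getDefiningAccess();
    while (!MSSA.isLiveOnEntryDef(Current)) {
      auto *Def = dyn_cast<MemoryDef>(Current);
      if (!Def || Clobbers(Def))
        return Current; // a MemoryPhi, or a def that clobbers MA
      Current = Def->getDefiningAccess();
    }
    return Current; // liveOnEntry: nothing in this function clobbers MA
  }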


Use and Def optimization
------------------------

``MemoryUse``\ s keep a single operand, which is their defining or optimized
access.
Traditionally, ``MemorySSA`` optimized ``MemoryUse``\ s at build time, up to a
given threshold.
Specifically, the operand of every ``MemoryUse`` was optimized to point to the
actual clobber of said ``MemoryUse``. This can be seen in the above example; the
second ``MemoryUse`` in ``if.end`` has an operand of ``1``, which is a
``MemoryDef`` from the entry block.  This is done to make walking,
value numbering, etc., faster and easier.
As of `this revision <https://reviews.llvm.org/D121381>`_, the default was
changed to not optimize uses at build time, in order to reduce compile time
when a pass does not need the walking. Passes that rely on optimized uses call
the new API ``ensureOptimizedUses()``, which keeps the previous behavior by
doing a one-time optimization of the ``MemoryUse``\ s, if this has not been done
before. New pass users are recommended to call ``ensureOptimizedUses()``.
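
A sketch of that pattern in a pass which relies on optimized uses (``MSSA``
freshly obtained from the analysis manager, ``SomeLoad`` hypothetical):

.. code-block:: c++

  // One-time optimization of all MemoryUse operands, if not done already.
  MSSA.ensureOptimizedUses();

  // After the call, a MemoryUse's single operand is its actual clobber
  // rather than merely the nearest dominating def.
  MemoryAccess *Clobber = MSSA.getMemoryAccess(SomeLoad)->getDefiningAccess();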

Initially it was not possible to optimize ``MemoryDef``\ s in the same way, as we
restricted ``MemorySSA`` to one operand per access.
This was changed, and ``MemoryDef``\ s now keep two operands.
The first one, the defining access, is always the previous ``MemoryDef`` or
``MemoryPhi`` in the same basic block, or the last one in a dominating
predecessor if the current block doesn't have any other accesses writing to
memory. This is needed for walking Def chains.
The second operand is the optimized access, set if there was a previous call to
the walker's ``getClobberingMemoryAccess(MA)``; that API caches its result
as part of ``MA``.
Optimizing all ``MemoryDef``\ s has quadratic time complexity and is not done
by default.
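
For example, a single walker query populates the optimized operand of a
``MemoryDef``, which later code can reuse (a sketch; ``MD`` is some
``MemoryDef`` of interest):

.. code-block:: c++

  // The first query walks the def chain and records the answer on MD itself.
  MemoryAccess *Clobber = MSSA.getWalker()->getClobberingMemoryAccess(MD);

  // Later queries can reuse the cached operand without re-walking.
  if (MD->isOptimized())
    Clobber = MD->getOptimized();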

A walk of the uses for any ``MemoryDef`` can find the accesses that were
optimized to it.
A code snippet for such a walk looks like this:

.. code-block:: c++

  MemoryDef *Def; // the MemoryDef whose users (optimized or defining) we want
  for (auto &U : Def->uses()) {
    MemoryAccess *MA = cast<MemoryAccess>(U.getUser());
    if (auto *DefUser = dyn_cast<MemoryDef>(MA)) {
      if (DefUser->isOptimized() && DefUser->getOptimized() == Def) {
        // User that is optimized to Def.
      } else {
        // User whose defining access is Def; it is optimized to something
        // else, or not optimized at all.
      }
    }
  }

When ``MemoryUse``\ s are optimized, you can find all loads clobbered by a given
store by walking the store's immediate and transitive uses.

.. code-block:: c++

  void checkUses(MemoryAccess *Def) { // Def can be a MemoryDef or a MemoryPhi.
    for (auto &U : Def->uses()) {
      MemoryAccess *MA = cast<MemoryAccess>(U.getUser());
      if (auto *MU = dyn_cast<MemoryUse>(MA)) {
        // Process MemoryUse as needed.
      } else {
        // Process MemoryDef or MemoryPhi as needed.

        // As a user can come up twice, as an optimized access and defining
        // access, keep a visited list.

        // Check transitive uses as needed.
        checkUses(MA); // use a worklist for an iterative algorithm
      }
    }
  }

An example of similar traversals can be found in the DeadStoreElimination pass.

Invalidation and updating
-------------------------

Because ``MemorySSA`` keeps track of LLVM IR, it needs to be updated whenever
the IR is updated. "Update", in this case, includes the addition, deletion, and
motion of ``Instruction``\ s. The update API is being made on an as-needed basis.
If you'd like examples, ``GVNHoist`` and ``LICM`` are users of ``MemorySSA``'s
update API.
Note that adding new ``MemoryDef``\ s (by calling ``insertDef``) can be a
time-consuming update, if the new access triggers many ``MemoryPhi`` insertions
and renaming (optimization invalidation) of many ``MemoryAccess``\ es.
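
For illustration, a pass updating ``MemorySSA`` by hand typically routes every
change through a ``MemorySSAUpdater`` (a sketch; ``DeadStore``, ``I`` and
``Preheader`` are hypothetical):

.. code-block:: c++

  MemorySSAUpdater Updater(&MSSA);

  // Deleting an instruction: remove its MemoryAccess first, then erase the
  // instruction itself.
  Updater.removeMemoryAccess(MSSA.getMemoryAccess(DeadStore));
  DeadStore->eraseFromParent();

  // Hoisting: after moving instruction I to the loop preheader, tell
  // MemorySSA to move the corresponding access to the end of that block too.
  I->moveBefore(Preheader->getTerminator());
  Updater.moveToPlace(MSSA.getMemoryAccess(I), Preheader, MemorySSA::End);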


Phi placement
^^^^^^^^^^^^^

``MemorySSA`` only places ``MemoryPhi``\ s where they're actually
needed. That is, it is a pruned SSA form, like LLVM's SSA form.  For
example, consider:

.. code-block:: llvm

  define void @foo() {
  entry:
    %p1 = alloca i8
    %p2 = alloca i8
    %p3 = alloca i8
    ; 1 = MemoryDef(liveOnEntry)
    store i8 0, ptr %p3
    br label %while.cond

  while.cond:
    ; 3 = MemoryPhi({entry,1},{if.end,2})
    br i1 undef, label %if.then, label %if.else

  if.then:
    br label %if.end

  if.else:
    br label %if.end

  if.end:
    ; MemoryUse(1)
    %1 = load i8, ptr %p1
    ; 2 = MemoryDef(3)
    store i8 2, ptr %p2
    ; MemoryUse(1)
    %2 = load i8, ptr %p3
    br label %while.cond
  }

Because we removed the stores from ``if.then`` and ``if.else``, a ``MemoryPhi``
for ``if.end`` would be pointless, so we don't place one. So, if you need to
place a ``MemoryDef`` in ``if.then`` or ``if.else``, you'll need to also create
a ``MemoryPhi`` for ``if.end``.

If it turns out that this is a large burden, we can just place ``MemoryPhi``\ s
everywhere. Because we have Walkers that are capable of optimizing above said
phis, doing so shouldn't prohibit optimizations.


Non-Goals
---------

``MemorySSA`` is meant to reason about the relation between memory
operations, and enable quicker querying.
It isn't meant to be the single source of truth for all potential memory-related
optimizations. Specifically, care must be taken when trying to use ``MemorySSA``
to reason about atomic or volatile operations, as in:

.. code-block:: llvm

  define i8 @foo(ptr %a) {
  entry:
    br i1 undef, label %if.then, label %if.end

  if.then:
    ; 1 = MemoryDef(liveOnEntry)
    %0 = load volatile i8, ptr %a
    br label %if.end

  if.end:
    %av = phi i8 [0, %entry], [%0, %if.then]
    ret i8 %av
  }

Going solely by ``MemorySSA``'s analysis, hoisting the ``load`` to ``entry`` may
seem legal. Because it's a volatile load, though, it's not.


Design tradeoffs
----------------

Precision
^^^^^^^^^

``MemorySSA`` in LLVM deliberately trades off precision for speed.
Let us think about memory variables as if they were disjoint partitions of the
memory (that is, if you have one variable, as above, it represents the entire
memory, and if you have multiple variables, each one represents some
disjoint portion of the memory).

First, because alias analysis results conflict with each other, and
each result may be what a given optimization wants (i.e.
TBAA may say no-alias, while something else says must-alias), it is
not possible to partition the memory the way every optimization wants.
Second, some alias analysis results are not transitive (i.e. A noalias B
and B noalias C does not mean A noalias C), so it is not possible to
come up with a precise partitioning in all cases without variables to
represent every pair of possible aliases.  Thus, partitioning
precisely may require introducing at least N^2 new virtual variables,
phi nodes, etc.

Each of these variables may be clobbered at multiple def sites.

To give an example, if you were to split up struct fields into
individual variables, all aliasing operations that may-def multiple struct
fields will may-def more than one of them.  This is pretty common (calls,
copies, field stores, etc).

Experience with SSA forms for memory in other compilers has shown that
it is simply not possible to do this precisely, and in fact, doing it
precisely is not worth it, because now all the optimizations have to
walk tons and tons of virtual variables and phi nodes.

So we partition.  At the point at which you partition, again,
experience has shown us there is no point in partitioning to more than
one variable.  It simply generates more IR, and optimizations still
have to query something to disambiguate further anyway.

As a result, LLVM partitions to one variable.

Precision in practice
^^^^^^^^^^^^^^^^^^^^^

In practice, there are implementation details in LLVM that also affect the
precision of the results provided by ``MemorySSA``. For example, AliasAnalysis
has various caps, or restrictions on looking through phis, which can limit what
``MemorySSA`` can infer. Changes made by different passes may leave ``MemorySSA``
either "overly optimized" (it can provide a more accurate result than if it were
recomputed from scratch) or "under optimized" (it could infer more if it were
recomputed). This can make it challenging to reproduce results in isolation with
a single pass, when the result relies on state that ``MemorySSA`` acquired while
being updated by multiple earlier passes.
Passes that use and update ``MemorySSA`` should do so through the APIs provided
by the ``MemorySSAUpdater``, or through calls on the Walker.
Direct optimizations to ``MemorySSA`` are not permitted.
There is currently a single, narrowly scoped exception where DSE
(DeadStoreElimination) updates the optimized access of a store, after a traversal
that guarantees the optimization is correct. This is allowed solely because the
traversals and inferences are beyond what ``MemorySSA`` does and because they are
"free" (i.e. DSE does them anyway).
This exception is set under a flag ("-dse-optimize-memoryssa") and can be
disabled to help reproduce optimizations in isolation.


LLVM Developers Meeting presentations
-------------------------------------

- `2016 LLVM Developers' Meeting: G. Burgess - MemorySSA in Five Minutes <https://www.youtube.com/watch?v=bdxWmryoHak>`_.
- `2020 LLVM Developers' Meeting: S. Baziotis & S. Moll - Finding Your Way Around the LLVM Dependence Analysis Zoo <https://www.youtube.com/watch?v=1e5y6WDbXCQ>`_
