Lines Matching full:lowering
1 # Chapter 5: Partial Lowering to Lower-Level Dialects for Optimization
8 progressive lowering through a mix of dialects coexisting in the same function.
16 [next chapter](Ch-6.md) directly target the `LLVM IR` dialect for lowering
17 `print`. As part of this lowering, we will be lowering from the
55 for further optimization. To start off the lowering, we first define our
61 // final target for this lowering.
65 // this lowering. In our case, we are lowering to a combination of the
72 // a partial lowering, we explicitly mark the Toy operations that don't want
103 old. For our lowering, this invariant will be useful as it translates from the
106 look at a snippet of lowering the `toy.transpose` operation:
147 Now we can prepare the list of patterns to use during the lowering process:
161 ## Partial Lowering
163 Once the patterns have been defined, we can perform the actual lowering. The
164 `DialectConversion` framework provides several different modes of lowering, but,
165 for our purposes, we will perform a partial lowering, as we will not convert
180 ### Design Considerations With Partial Lowering
182 Before diving into the result of our lowering, this is a good time to discuss
183 potential design considerations when it comes to partial lowering. In our
184 lowering, we transform from a value-type, TensorType, to an allocated
215 For the sake of simplicity, we will use the third option for this lowering. This
224 // We also allow a F64MemRef to enable interop during partial lowering.
243 With affine lowering added to our pipeline, we can now generate:
297 Our naive lowering is correct, but it leaves a lot to be desired with regards to
298 efficiency. For example, the lowering of `toy.mul` has generated some redundant
345 and try yourself: `toyc-ch5 test/Examples/Toy/Ch5/affine-lowering.mlir
348 In this chapter we explored some aspects of partial lowering, with the intent to
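Taken together, the matched lines trace the shape of the lowering pass the chapter builds: a `ConversionTarget` naming the final dialects, an illegal-by-default Toy dialect with `toy.print` dynamically legal during the partial lowering, a pattern list (e.g. the `toy.transpose` snippet), and a call to `applyPartialConversion`. The sketch below is assembled from those fragments, not copied from the chapter; names such as `ToyToAffineLoweringPass` and `TransposeOpLowering` are the tutorial's, and exact dialect/namespace spellings vary across MLIR releases.

```cpp
// Sketch of the pass body the matched lines describe (MLIR Toy, Ch. 5 style).
// Assumes the usual tutorial scaffolding (pass class, pattern classes) exists.
#include "mlir/Dialect/Affine/IR/AffineOps.h"
#include "mlir/Dialect/Arith/IR/Arith.h"
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Dialect/MemRef/IR/MemRef.h"
#include "mlir/Transforms/DialectConversion.h"

void ToyToAffineLoweringPass::runOnOperation() {
  // The conversion target defines what is legal after this pass. The affine,
  // arith, func, and memref dialects are the final targets of this lowering.
  mlir::ConversionTarget target(getContext());
  target.addLegalDialect<mlir::affine::AffineDialect, mlir::arith::ArithDialect,
                         mlir::func::FuncDialect, mlir::memref::MemRefDialect>();

  // Because this is a *partial* lowering, the Toy dialect as a whole is
  // illegal, but toy.print is dynamically legal once its operands are no
  // longer tensors; it targets the LLVM IR dialect only in the next chapter.
  target.addIllegalDialect<toy::ToyDialect>();
  target.addDynamicallyLegalOp<toy::PrintOp>([](toy::PrintOp op) {
    return llvm::none_of(op->getOperandTypes(), [](mlir::Type t) {
      return llvm::isa<mlir::TensorType>(t);
    });
  });

  // Collect the rewrite patterns, e.g. the toy.transpose lowering.
  mlir::RewritePatternSet patterns(&getContext());
  patterns.add<TransposeOpLowering /*, ...other lowerings... */>(&getContext());

  // applyPartialConversion leaves legal (and dynamically legal) ops in place
  // and fails only if an illegal op survives the applied patterns.
  if (mlir::failed(mlir::applyPartialConversion(getOperation(), target,
                                                std::move(patterns))))
    signalPassFailure();
}
```

The dynamic-legality rule is what lets `TensorType` values and allocated `MemRefType` buffers coexist during the mixed-dialect phase the matched lines refer to.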