Revision tags: llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5

# 9df63b26 | 30-Nov-2024 | Matthias Springer <me@m-sp.org>
[mlir][Transforms] Add 1:N `matchAndRewrite` overload (#116470)
This commit adds a new `matchAndRewrite` overload to `ConversionPattern`
to support 1:N replacements. This is the first of two main PRs that
merge the 1:1 and 1:N dialect conversion drivers.
The existing `matchAndRewrite` function supports only 1:1 replacements,
as can be seen from the `ArrayRef<Value>` parameter.
```c++
LogicalResult ConversionPattern::matchAndRewrite(
Operation *op, ArrayRef<Value> operands /*adaptor values*/,
ConversionPatternRewriter &rewriter) const;
```
This commit adds a `matchAndRewrite` overload that is called by the
dialect conversion driver. By default, this new overload dispatches to
the original 1:1 `matchAndRewrite` implementation. Existing
`ConversionPattern`s do not need to be changed as long as there are no
1:N type conversions or value replacements.
```c++
LogicalResult ConversionPattern::matchAndRewrite(
Operation *op, ArrayRef<ValueRange> operands /*adaptor values*/,
ConversionPatternRewriter &rewriter) const {
// Note: getOneToOneAdaptorOperands produces a fatal error if at least one
// ValueRange has 0 or more than 1 value.
return matchAndRewrite(op, getOneToOneAdaptorOperands(operands), rewriter);
}
```
The `ConversionValueMapping`, which keeps track of value replacements
and materializations, still does not support 1:N replacements. We still
rely on argument materializations to convert N replacement values back
into a single value. The `ConversionValueMapping` will be generalized to
1:N mappings in the second main PR.
Before handing the adaptor values to a `ConversionPattern`, all argument
materializations are "unpacked". The `ConversionPattern` receives N
replacement values and does not see any argument materializations. This
implementation strategy allows us to use the 1:N infrastructure/API in
`ConversionPattern`s even though some functionality is still missing in
the driver. This strategy was chosen to keep the sizes of the PRs
smaller and to make it easier for downstream users to adapt to API
changes.
This commit also updates the "decompose call graphs" transformation
and the "sparse tensor codegen" transformation to use the new 1:N
`ConversionPattern` API.
Note for LLVM integration: If you are using a type converter with 1:N
type conversion rules or if your patterns are performing 1:N
replacements (via `replaceOpWithMultiple` or
`applySignatureConversion`), conversion pattern applications will start
failing (fatal LLVM error) with this error message: `pattern 'name' does
not support 1:N conversion`. The name of the failing pattern is shown in
the error message. These patterns must be updated to the new 1:N
`matchAndRewrite` API.
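Updating a pattern as described above boils down to switching the adaptor parameter from `ArrayRef<Value>` to `ArrayRef<ValueRange>`. A minimal sketch, assuming a hypothetical single-operand, single-result op `mydialect.my_op` (the pattern below is illustrative only and not part of this commit):
```c++
// Minimal sketch: a hypothetical op "mydialect.my_op" lowered with the new
// 1:N overload; not code from this commit.
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;

struct MyOpOneToNLowering : public ConversionPattern {
  MyOpOneToNLowering(const TypeConverter &converter, MLIRContext *ctx)
      : ConversionPattern(converter, "mydialect.my_op", /*benefit=*/1, ctx) {}

  // New 1:N overload: each entry of `operands` is the full range of converted
  // values for one original operand (possibly 0, 1, or N values).
  LogicalResult
  matchAndRewrite(Operation *op, ArrayRef<ValueRange> operands,
                  ConversionPatternRewriter &rewriter) const override {
    // Replace the single result with all values the operand was converted to.
    rewriter.replaceOpWithMultiple(op, {operands.front()});
    return success();
  }
};
```
Patterns that keep the old 1:1 overload continue to work through `getOneToOneAdaptorOperands` and only hit the fatal error above when an adaptor `ValueRange` actually holds 0 or more than 1 value.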
Revision tags: llvmorg-19.1.4

# 204234a6 | 19-Nov-2024 | Matthias Springer <me@m-sp.org>
[mlir][SparseTensor][NFC] Pass tensor type to descriptor helper (#116468)
`getDescriptorFromTensorTuple` and `getMutDescriptorFromTensorTuple`
extract the tensor type from an `unrealized_conversion_cast` op that
serves as a workaround for missing 1:N dialect conversion support.
This commit changes these functions so that they explicitly receive the
tensor type as a function argument. This is in preparation for merging
the 1:1 and 1:N conversion drivers. The conversion patterns in this file
will soon start receiving multiple SSA values (`ValueRange`) from their
adaptors (instead of a single value that is the result of
`unrealized_conversion_cast`). It will no longer be possible to take the
tensor type from the `unrealized_conversion_cast` op. The
`unrealized_conversion_cast` workaround will disappear entirely.
# aed43562 | 14-Nov-2024 | Matthias Springer <me@m-sp.org>
[mlir][Transforms] Dialect Conversion: Add `replaceOpWithMultiple` (#115816)
This commit adds a new function
`ConversionPatternRewriter::replaceOpWithMultiple`. This function is
similar to `replaceOp`, but it accepts multiple `ValueRange`
replacements, one per op result.
Note: This function is not an overload of `replaceOp` because of
ambiguous overload resolution that would make the API difficult to use.
This commit aligns "block signature conversions" with "op replacements":
both support 1:N replacements now. Due to incomplete 1:N support in the
dialect conversion driver, an argument materialization is inserted when
an SSA value is replaced with multiple values; this is the same workaround
that block signature conversions already use. These argument
materializations are going to be removed in a subsequent commit that
adds full 1:N support. The purpose of this PR is to add missing features
gradually in small increments.
This commit also updates two MLIR transformations that have their own
custom workarounds for missing 1:N support. These can already start using
`replaceOpWithMultiple`.
Co-authored-by: Markus Böck <markus.boeck02@gmail.com>
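A rough usage sketch (the helper and the replacement values below are hypothetical, not code from this commit): an op with two results, where the first result is replaced 1:1 and the second 1:2.
```c++
// Minimal sketch, assuming hypothetical replacement values; the op is
// expected to have exactly two results.
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;

static void replaceWithOneAndTwoValues(ConversionPatternRewriter &rewriter,
                                       Operation *op, Value scalar,
                                       Value lowHalf, Value highHalf) {
  // One ValueRange per original op result: result #0 -> 1 value,
  // result #1 -> 2 values (a genuine 1:N replacement).
  SmallVector<Value> forResult0 = {scalar};
  SmallVector<Value> forResult1 = {lowHalf, highHalf};
  rewriter.replaceOpWithMultiple(op, {forResult0, forResult1});
}
```
When every result maps to exactly one replacement value, plain `replaceOp` remains the natural choice; `replaceOpWithMultiple` is only needed for 1:N replacements.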
Revision tags: llvmorg-19.1.3, llvmorg-19.1.2

# 206fad0e | 05-Oct-2024 | Matthias Springer <me@m-sp.org>
[mlir][NFC] Mark type converter in `populate...` functions as `const` (#111250)
This commit marks the type converter in `populate...` functions as
`const`. This is useful for debugging.
Patterns already take a `const` type converter. However, some
`populate...` functions do not only add new patterns, but also add
additional type conversion rules. That makes it difficult to find the
place where a type conversion was added in the code base. With this
change, all `populate...` functions that only populate patterns now have
a `const` type converter. Programmers can then conclude from the
function signature that these functions do not register any new type
conversion rules.
Also some minor cleanups around the 1:N dialect conversion
infrastructure, which did not always pass the type converter as a
`const` object internally.
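A minimal sketch of the convention, assuming a hypothetical dialect, op, and populate function (none of these names come from this commit): taking the converter by `const` reference signals that the function only adds patterns and registers no new conversion rules.
```c++
// Minimal sketch with placeholder names; not actual upstream code.
#include "mlir/IR/PatternMatch.h"
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;

namespace {
struct MyOpLowering : public ConversionPattern {
  MyOpLowering(const TypeConverter &converter, MLIRContext *ctx)
      : ConversionPattern(converter, "mydialect.my_op", /*benefit=*/1, ctx) {}

  LogicalResult
  matchAndRewrite(Operation *op, ArrayRef<Value> operands,
                  ConversionPatternRewriter &rewriter) const override {
    // Placeholder lowering: simply erase the op.
    rewriter.eraseOp(op);
    return success();
  }
};
} // namespace

// The `const` converter makes it clear that this function cannot call
// `typeConverter.addConversion(...)`; it may only populate patterns.
void populateMyDialectLoweringPatterns(const TypeConverter &typeConverter,
                                       RewritePatternSet &patterns) {
  patterns.add<MyOpLowering>(typeConverter, patterns.getContext());
}
```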
Revision tags: llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6

# c4e5a8a4 | 08-May-2024 | Aart Bik <ajcbik@google.com>
[mlir][sparse] support 'batch' dimensions in sparse_tensor.print (#91411)
# 5c511655 | 07-May-2024 | Aart Bik <ajcbik@google.com>
[mlir][sparse] force a properly sized view on pos/crd/val under codegen (#91288)
Codegen "vectors" for pos/crd/val use the capacity as memref size, not
the actual used size. Although the sparsifier itself always uses just
the defined pos/crd/val parts, printing these and passing them back to a
runtime environment could benefit from wrapping the basic pos/crd/val
getters into a proper memref view that sets the right size.
Revision tags: llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2

# e8e8df4c | 15-Mar-2024 | Matthias Springer <me@m-sp.org>
[mlir][sparse] Add `has_runtime_library` test op (#85355)
This commit adds a new test-only op:
`sparse_tensor.has_runtime_library`. The op returns "1" if the sparse
compiler runs in runtime library mode.
This op is useful for writing test cases that require different IR
depending on whether the sparse compiler runs in runtime library or
codegen mode.
This commit fixes a memory leak in `sparse_pack_d.mlir`. This test case
uses `sparse_tensor.assemble` to create a sparse tensor SSA value from
existing buffers. The runtime library reallocates+copies the existing
buffers; the codegen path does not. Therefore, the test requires
additional deallocations when running in runtime library mode.
Alternatives considered:
- Make the codegen path allocate. "Codegen" is the "default" compilation
mode and it is handling `sparse_tensor.assemble` correctly. The issue is
with the runtime library path, which should not allocate. Therefore, it
is better to put a workaround in the runtime library path than to work
around the issue with a new flag in the codegen path.
- Add a `sparse_tensor.runtime_only` attribute to
`bufferization.dealloc_tensor`. Verifying that the attribute can only be
attached to `bufferization.dealloc_tensor` may introduce an unwanted
dependency of `MLIRSparseTensorDialect` on `MLIRBufferizationDialect`.
# 94e27c26 | 12-Mar-2024 | Peiming Liu <peiming@google.com>
[mlir][sparse] reuse tensor.insert operation to insert elements into a sparse tensor (#84987)
Revision tags: llvmorg-18.1.1

# fc9f1d49 | 06-Mar-2024 | Peiming Liu <peiming@google.com>
[mlir][sparse] use a consistent order between [dis]assembleOp and storage layout (#84079)
# 52b69aa3 | 04-Mar-2024 | Peiming Liu <36770114+PeimingLiu@users.noreply.github.com>
[mlir][sparse] support sparsifying batch levels (#83898)
# 6bc7c9df | 29-Feb-2024 | Peiming Liu <36770114+PeimingLiu@users.noreply.github.com>
[mlir][sparse] infer returned type for sparse_tensor.to_[buffer] ops (#83343)
The sparse structure buffers might not always be memrefs with rank == 1
in the presence of batch levels.
# 0d1f9576 | 27-Feb-2024 | Peiming Liu <36770114+PeimingLiu@users.noreply.github.com>
[mlir][sparse] support type conversion from batched sparse tensors to memrefs (#83163)
Revision tags: llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3

# 5248a987 | 21-Feb-2024 | Peiming Liu <36770114+PeimingLiu@users.noreply.github.com>
[mlir][sparse] support SoA COO in codegen path. (#82439)
*NOTE*: the `SoA` property only makes a difference on the codegen path, and
is ignored on the libgen path at the moment (only SoA COO is supported).
# e5924d64 | 08-Feb-2024 | Yinying Li <107574043+yinying-lisa-li@users.noreply.github.com>
[mlir][sparse] Implement parsing n out of m (#79935)
1. Add parsing methods for block[n, m].
2. Encode n and m with the newly extended 64-bit LevelType enum (a sketch follows below).
3. Update 2:4 method names/comments to n:m.
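A purely illustrative sketch of packing n and m into a 64-bit value; the bit layout below is invented and is not the actual `LevelType` encoding.
```c++
// Minimal sketch with an invented bit layout; the real LevelType encoding
// lives in the sparse tensor dialect and differs from this.
#include <cstdint>

constexpr uint64_t encodeNOutOfM(uint64_t base, uint64_t n, uint64_t m) {
  // Invented layout: n in bits [32, 40), m in bits [40, 48).
  return base | (n << 32) | (m << 40);
}

constexpr uint64_t decodeN(uint64_t levelType) { return (levelType >> 32) & 0xff; }
constexpr uint64_t decodeM(uint64_t levelType) { return (levelType >> 40) & 0xff; }

// A 2:4 level (two nonzeros out of every four elements) round-trips.
static_assert(decodeN(encodeNOutOfM(/*base=*/0, 2, 4)) == 2, "n round-trips");
static_assert(decodeM(encodeNOutOfM(/*base=*/0, 2, 4)) == 4, "m round-trips");
```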
Revision tags: llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init

# 365777ec | 12-Dec-2023 | Aart Bik <39774503+aartbik@users.noreply.github.com>
[mlir][sparse] refactor utilities into transform/utils dir (#75250)
Separates actual transformation files from supporting utility files in
the transforms directory. Includes a bazel overlay fix for the build (as
well as a bit of cleanup of that file to be less verbose and more
flexible).
# 5b729503 | 30-Nov-2023 | Aart Bik <39774503+aartbik@users.noreply.github.com>
[mlir][sparse] move all COO related methods into SparseTensorType (#73881)
This centralizes all COO methods, and provides a cleaner API. Note that
the "enc" only constructor is a temporary workaround the need for COO
methods inside the "enc" only storage specifier.
Revision tags: llvmorg-17.0.6

# 1944c4f7 | 27-Nov-2023 | Aart Bik <39774503+aartbik@users.noreply.github.com>
[mlir][sparse] rename DimLevelType to LevelType (#73561)
The "Dim" prefix is a legacy left-over that no longer makes sense, since
we have a very strict "Dimension" vs. "Level" definition for sparse
tensor types and their storage.
# 1dd387e1 | 22-Nov-2023 | Aart Bik <39774503+aartbik@users.noreply.github.com>
[mlir][sparse] change dim level type -> level type (#73058)
The "dimension" before "level" does not really make sense Note that
renaming the actual type DimLevelType to LevelType is still TBD, since
this is an externally visible change (e.g. visible to Python API).
# d213220a | 22-Nov-2023 | Aart Bik <39774503+aartbik@users.noreply.github.com>
[mlir][sparse] fixed naming consistency (#73053)
All DLT-related methods have DLT at the end; removed a stale TODO.
# 2cc4b3d0 | 20-Nov-2023 | Peiming Liu <36770114+PeimingLiu@users.noreply.github.com>
[mlir][sparse] code cleanup using the assumption that dim2lvl maps are simplified (#72894)
# 83cf0dc9 | 17-Nov-2023 | Aart Bik <39774503+aartbik@users.noreply.github.com>
[mlir][sparse] implement direct IR alloc/empty/new for non-permutations (#72585)
This change implements the correct *level* sizes set up for the direct
IR codegen fields in the sparse storage scheme. This brings libgen and
codegen together again.
This is step 3 out of 3 to make sparse_tensor.new work for BSR.
# 2323f48e | 16-Nov-2023 | Aart Bik <39774503+aartbik@users.noreply.github.com>
[mlir][sparse] refactor dim2lvl/lvl2dim lvlsizes setup (#72474)
This change provides access to the individual components of dim sizes
and lvl sizes after each codegenutil call.
This is step 2 out of 3 to make sparse_tensor.new work for BSR.
Revision tags: llvmorg-17.0.5

# af8428c0 | 13-Nov-2023 | Aart Bik <39774503+aartbik@users.noreply.github.com>
[mlir][sparse] unify support of (dis)assemble between direct IR/lib path (#71880)
Note that the (dis)assemble operations still make some simplifying
assumptions (e.g. trailing 2-D COO in AoS format), but now at least both
the direct IR and support library paths behave exactly the same.
Generalizing the ops is still TBD.
# 160d483b | 07-Nov-2023 | Aart Bik <39774503+aartbik@users.noreply.github.com>
[mlir][sparse] implement loose-compressed/2:4 on direct IR codegen path (#71461)
Fills in the missing cases for direct IR codegen.
Note that non-permutation handling is still TBD.
# bc61122a | 03-Nov-2023 | Aart Bik <39774503+aartbik@users.noreply.github.com>
[mlir][sparse] reformat SparseTensorCodegen file (#71231)