Revision tags: llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2

# b50ce4c8 | 02-Oct-2024 | Mateusz Sokół <8431159+mtsokol@users.noreply.github.com>
[MLIR][sparse] Add `soa` property to `sparse_tensor` Python bindings (#109135)

Revision tags: llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2

# 36c57532 | 30-Jul-2024 | Adrian Kuegel <akuegel@google.com>
[mlir] Apply ClangTidy performance finding.

Revision tags: llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5

# a10d67f9 | 24-Apr-2024 | Yinying Li <yinyingli@google.com>
[mlir][sparse] Enable explicit and implicit value in sparse encoding (#88975)
1. Explicit value means the non-zero value in a sparse tensor. If
explicitVal is set, then all the non-zero values in the tensor have the
same explicit value. The default value Attribute() indicates that it is
not set.
2. Implicit value means the "zero" value in a sparse tensor. If
implicitVal is set, then the "zero" value in the tensor is equal to the
implicit value. For now, we only support `0` as the implicit value but
it could be extended in the future. The default value Attribute()
indicates that the implicit value is `0` (same type as the tensor
element type).
Example:
```
#CSR = #sparse_tensor.encoding<{
map = (d0, d1) -> (d0 : dense, d1 : compressed),
posWidth = 64,
crdWidth = 64,
explicitVal = 1 : i64,
implicitVal = 0 : i64
}>
```
Note: this PR tests that implicitVal could be set to other values as
well. The following PR will add a verifier and reject any value that's not
zero for implicitVal.

Revision tags: llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3

# aaf91645 | 15-Feb-2024 | Peiming Liu <36770114+PeimingLiu@users.noreply.github.com>
Reapply "[mlir][sparse] remove LevelType enum, construct LevelType from LevelFormat and Properties" (#81923) (#81934)

# 513448d2 | 15-Feb-2024 | Mehdi Amini <joker.eph@gmail.com>
Revert "[mlir][sparse] remove LevelType enum, construct LevelType from LevelF…" (#81923)
Reverts llvm/llvm-project#81799; this broke the mlir gcc7 bot.

# 235ec0f7 | 15-Feb-2024 | Peiming Liu <36770114+PeimingLiu@users.noreply.github.com>
[mlir][sparse] remove LevelType enum, construct LevelType from LevelFormat and properties instead. (#81799)

# 429919e3 | 14-Feb-2024 | Peiming Liu <36770114+PeimingLiu@users.noreply.github.com>
[mlir][sparse][pybind][CAPI] remove LevelType enum from CAPI, construct LevelType from LevelFormat and properties instead. (#81682)
**Rationale**
We used to explicitly declare every possible combination between
`LevelFormat` and `LevelProperties`, and it now becomes difficult to
scale as more properties/level formats are going to be introduced.
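For illustration, here is a minimal sketch of how a level then reads in the encoding's surface syntax: a level format plus optional properties, rather than one entry from a fixed enum of every combination. The `#SortedCOO` name and the exact property spellings below are illustrative only:
```
#SortedCOO = #sparse_tensor.encoding<{
  // Each level is a format plus optional properties,
  // e.g. compressed(nonunique) followed by a singleton level.
  map = (i, j) -> (i : compressed(nonunique), j : singleton)
}>
```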

# 2a6b521b | 09-Feb-2024 | Yinying Li <107574043+yinying-lisa-li@users.noreply.github.com>
[mlir][sparse] Add more tests and verification for n:m (#81186)
1. Add python test for n out of m
2. Add more methods for python binding
3. Add verification for n:m and invalid encoding tests
4. Add e2e test for n:m
Previous PRs for n:m #80501 #79935
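As a sketch of what an n:m level looks like in the encoding surface syntax (assuming the `structured[n, m]` spelling; the `#NV_24` name is only illustrative):
```
#NV_24 = #sparse_tensor.encoding<{
  // 2:4 structured sparsity: at most 2 stored entries
  // in every group of 4 coordinates along the second level.
  map = (i, j) -> (i : dense, j : structured[2, 4])
}>
```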

# e5924d64 | 08-Feb-2024 | Yinying Li <107574043+yinying-lisa-li@users.noreply.github.com>
[mlir][sparse] Implement parsing n out of m (#79935)
1. Add parsing methods for block[n, m].
2. Encode n and m with the newly extended 64-bit LevelType enum.
3. Update 2:4 methods names/comments to n:m.

Revision tags: llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6

# 1944c4f7 | 27-Nov-2023 | Aart Bik <39774503+aartbik@users.noreply.github.com>
[mlir][sparse] rename DimLevelType to LevelType (#73561)
The "Dim" prefix is a legacy left-over that no longer makes sense, since
we have a very strict "Dimension" vs. "Level" definition for sparse
tensor types and their storage.

Revision tags: llvmorg-17.0.5, llvmorg-17.0.4

# d4088e7d | 17-Oct-2023 | Yinying Li <107574043+yinying-lisa-li@users.noreply.github.com>
[mlir][sparse] Populate lvlToDim (#68937)
Updates:
1. Infer lvlToDim from dimToLvl
2. Add more tests for block sparsity
3. Finish TODOs related to lvlToDim, including adding lvlToDim to python
binding
Verification of the lvlToDim that the user provides will be implemented in the
next PR.
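For illustration, a block-sparse encoding such as the sketch below only spells out dimToLvl; the lvlToDim inverse, here (ib, jb, ii, jj) -> (ib * 2 + ii, jb * 2 + jj), is what gets inferred. The `#BSR` name is a placeholder:
```
#BSR = #sparse_tensor.encoding<{
  // dimToLvl for 2x2 blocks: block coordinates first, then
  // the in-block coordinates; lvlToDim is inferred as its inverse.
  map = (i, j) -> (i floordiv 2 : dense,
                   j floordiv 2 : compressed,
                   i mod 2      : dense,
                   j mod 2      : dense)
}>
```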

Revision tags: llvmorg-17.0.3, llvmorg-17.0.2

# 836411b9 | 22-Sep-2023 | Aart Bik <39774503+aartbik@users.noreply.github.com>
[mlir][sparse] add lvlToDim field to sparse tensor encoding (#67194)
Note the new surface syntax allows for defining a dimToLvl and lvlToDim
map at once (where usually the latter can be inferred from the former,
but not always). This revision adds storage for the latter, together
with some initial boilerplate. The actual support (inference, validation,
printing, etc.) is still TBD of course.
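A rough sketch of the combined form this refers to: the level variables are introduced up front, the dimensions are defined in terms of them (lvlToDim), and the levels in terms of the dimensions (dimToLvl). The `#BSR_explicit` name and the exact punctuation are from memory and only illustrative:
```
#BSR_explicit = #sparse_tensor.encoding<{
  // Both directions written out explicitly, for 2x3 blocks.
  map = { ib, jb, ii, jj }
        ( i = ib * 2 + ii,
          j = jb * 3 + jj ) ->
        ( ib = i floordiv 2 : dense,
          jb = j floordiv 3 : compressed,
          ii = i mod 2 : dense,
          jj = j mod 3 : dense )
}>
```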

Revision tags: llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5

# 76647fce | 30-May-2023 | wren romano <2998727+wrengr@users.noreply.github.com>
[mlir][sparse] Combining `dimOrdering`+`higherOrdering` fields into `dimToLvl`
This is a major step along the way towards the new STEA design. While a great deal of this patch is simple renaming, there are several significant changes as well. I've done my best to ensure that this patch retains the previous behavior and error-conditions, even though those are at odds with the eventual intended semantics of the `dimToLvl` mapping. Since the majority of the compiler does not yet support non-permutations, I've also added explicit assertions in places that previously had implicitly assumed it was dealing with permutations.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D151505
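For illustration, after this change a column-major (CSC-style) layout is expressed through the single `dimToLvl` field; the sketch below uses roughly the attribute syntax of that era, so treat the field spellings as approximate:
```
#CSC = #sparse_tensor.encoding<{
  lvlTypes = [ "dense", "compressed" ],
  // One affine map now replaces the old dimOrdering/higherOrdering pair.
  dimToLvl = affine_map<(i, j) -> (j, i)>
}>
```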

# a0615d02 | 17-May-2023 | wren romano <2998727+wrengr@users.noreply.github.com>
[mlir][sparse] Renaming the STEA field `dimLevelType` to `lvlTypes`
This commit is part of the migration towards the new STEA syntax/design. In particular, this commit includes the following changes:
* Renaming compiler-internal functions/methods:
  * `SparseTensorEncodingAttr::{getDimLevelType => getLvlTypes}`
  * `Merger::{getDimLevelType => getLvlType}` (for consistency)
  * `sparse_tensor::{getDimLevelType => buildLevelType}` (to help reduce confusion vs actual getter methods)
* Renaming external facets to match:
  * the STEA parser and printer
  * the C and Python bindings
  * PyTACO
However, the actual renaming of the `DimLevelType` itself (along with all the "dlt" names) will be handled in a separate commit.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D150330

Revision tags: llvmorg-16.0.4

# 5550c821 | 08-May-2023 | Tres Popp <tpopp@google.com>
[mlir] Move casting calls from methods to function calls
The MLIR classes Type/Attribute/Operation/Op/Value support cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast functionality, in addition to defining methods with the same name. This change begins the migration of uses of the methods to the corresponding free function calls, which have been decided on as the more consistent style.
Note that there still exist classes that only define the methods directly, such as AffineExpr, and this change does not currently include work to support a functional cast/isa call for them.
Caveats include:
- This clang-tidy script probably has more problems.
- This only touches C++ code, so nothing that is being generated.
Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443
Implementation: This first patch was created with the following steps. The intention is to only do automated changes at first, so I waste less time if it's reverted, and so the first mass change is more clear as an example to other teams that will need to follow similar steps.
Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an additional check: https://github.com/llvm/llvm-project/compare/main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds them to a pure state.
4. Some changes have been deleted for the following reasons:
   - Some files had a variable also named cast
   - Some files had not included a header file that defines the cast functions
   - Some files are definitions of the classes that have the casting methods, so the code still refers to the method instead of the function without adding a prefix or removing the method declaration at the same time.
```
ninja -C $BUILD_DIR clang-tidy

run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions' \
  -header-filter=mlir/ mlir/* -fix

rm -rf $BUILD_DIR/tools/mlir/**/*.inc

git restore mlir/lib/IR mlir/lib/Dialect/DLTI/DLTI.cpp \
  mlir/lib/Dialect/Complex/IR/ComplexDialect.cpp \
  mlir/lib/**/IR/ \
  mlir/lib/Dialect/SparseTensor/Transforms/SparseVectorization.cpp \
  mlir/lib/Dialect/Vector/Transforms/LowerVectorMultiReduction.cpp \
  mlir/test/lib/Dialect/Test/TestTypes.cpp \
  mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.cpp \
  mlir/test/lib/Dialect/Test/TestAttributes.cpp \
  mlir/unittests/TableGen/EnumsGenTest.cpp \
  mlir/test/python/lib/PythonTestCAPI.cpp \
  mlir/include/mlir/IR/
```
Differential Revision: https://reviews.llvm.org/D150123

Revision tags: llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4

# 84cd51bb | 06-Mar-2023 | wren romano <2998727+wrengr@users.noreply.github.com>
[mlir][sparse] Renaming "pointer/index" to "position/coordinate"
The old "pointer/index" names often cause confusion since these names clash with names of unrelated things in MLIR; so this change re
[mlir][sparse] Renaming "pointer/index" to "position/coordinate"
The old "pointer/index" names often cause confusion since these names clash with names of unrelated things in MLIR; so this change rectifies this by changing everything to use "position/coordinate" terminology instead.
In addition to the basic terminology, there have also been various conventions for making certain distinctions like: (1) the overall storage for coordinates in the sparse-tensor, vs the particular collection of coordinates of a given element; and (2) particular coordinates given as a `Value` or `TypedValue<MemRefType>`, vs particular coordinates given as `ValueRange` or similar. I have striven to maintain these distinctions as follows:
* "p/c" are used for individual position/coordinate values, when there is no risk of confusion. (Just like we use "d/l" to abbreviate "dim/lvl".)
* "pos/crd" are used for individual position/coordinate values, when a longer name is helpful to avoid ambiguity or to form compound names (e.g., "parentPos"). (Just like we use "dim/lvl" when we need a longer form of "d/l".)
I have also used these forms for a handful of compound names where the old name had been using a three-letter form previously, even though a longer form would be more appropriate. I've avoided renaming these to use a longer form purely for expediency sake, since changing them would require a cascade of other renamings. They should be updated to follow the new naming scheme, but that can be done in future patches.
* "coords" is used for the complete collection of crd values associated with a single element. In the runtime library this includes both `std::vector` and raw pointer representations. In the compiler, this is used specifically for buffer variables with C++ type `Value`, `TypedValue<MemRefType>`, etc.
The bare form "coords" is discouraged, since it fails to make the dim/lvl distinction; so the compound names "dimCoords/lvlCoords" should be used instead. (Though there may exist a rare few cases where is is appropriate to be intentionally ambiguous about what coordinate-space the coords live in; in which case the bare "coords" is appropriate.)
There is seldom the need for the pos variant of this notion. In most circumstances we use the term "cursor", since the same buffer is reused for a 'moving' pos-collection.
* "dcvs/lcvs" is used in the compiler as the `ValueRange` analogue of "dimCoords/lvlCoords". (The "vs" stands for "`Value`s".) I haven't found the need for it, but "pvs" would be the obvious name for a pos-`ValueRange`.
The old "ind"-vs-"ivs" naming scheme does not seem to have been sustained in more recent code, which instead prefers other mnemonics (e.g., adding "Buf" to the end of the names for `TypeValue<MemRefType>`). I have cleaned up a lot of these to follow the "coords"-vs-"cvs" naming scheme, though haven't done an exhaustive cleanup.
* "positions/coordinates" are used for larger collections of pos/crd values; in particular, these are used when referring to the complete sparse-tensor storage components.
I also prefer to use these unabbreviated names in the documentation, unless there is some specific reason why using the abbreviated forms helps resolve ambiguity.
In addition to making this terminology change, this change also does some cleanup along the way:
* correcting the dim/lvl terminology in certain places.
* adding `const` when it requires no other code changes.
* miscellaneous cleanup that was entailed in order to make the proper distinctions. Most of these are in CodegenUtils.{h,cpp}
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D144773

Revision tags: llvmorg-16.0.0-rc3

# f708a549 | 15-Feb-2023 | wren romano <2998727+wrengr@users.noreply.github.com>
[mlir][sparse] Factoring out SparseTensorType class
This change adds a new `SparseTensorType` class for making the "dim" vs "lvl" distinction more overt, and for abstracting over the differences between sparse-tensors and dense-tensors. In addition, this change also adds new type aliases `Dimension`, `Level`, and `FieldIndex` to make code more self-documenting.
Although the diff is very large, the majority of the changes are mechanical in nature (e.g., changing types to use the new aliases, updating variable names to match, etc). Along the way I also made many variables `const` when they could be; the majority of which required only adding the keyword. A few places had conditional definitions of these variables, requiring actual code changes; however, that was only done when the overall change was extremely local and easy to extract. All these changes are included in the current patch only because it would be too onerous to split them off into a separate patch.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D143800

Revision tags: llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3

# 0e77b63b | 14-Oct-2022 | wren romano <2998727+wrengr@users.noreply.github.com>
[mlir][sparse] Use the runtime DimLevelType instead of a separate tablegen enum
This differential replaces all uses of SparseTensorEncodingAttr::DimLevelType with DimLevelType. The next differential will break out a separate library for the DimLevelType enum, so that the Dialect code doesn't need to depend on the rest of the runtime
Depends On D135995
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D135996

Revision tags: working

# c48e9087 | 04-Oct-2022 | Aart Bik <ajcbik@google.com>
[mlir][sparse] introduce a higher-order tensor mapping
This extension to the sparse tensor type system in MLIR opens up a whole new set of sparse storage schemes, such as block sparse storage (e.g. BCSR) and ELL (aka jagged diagonals).
This revision merely introduces the type extension and initial documentation. The actual interpretation of the type (reading in tensors, lowering to code, etc.) will follow.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D135206

Revision tags: llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0

# 1b434652 | 29-Aug-2022 | Aart Bik <ajcbik@google.com>
[mlir][sparse] add more dimension level types and properties
We recently removed the singleton dimension level type (see the revision https://reviews.llvm.org/D131002) since it was unimplemented but also incomplete (properties were missing). This revision adds singleton back as an extra dimension level type, together with properties ordered/not-ordered and unique/not-unique. Even though still not lowered to actual code, this provides a complete way of defining many more sparse storage schemes (in the long run, we want to support even more dimension level types and properties using the additional extensions proposed in [Chou]).
Note that the current solution of using suffixes for the properties is not ideal, but keeps the extension relatively simple with respect to parsing and printing. Furthermore, it is rather consistent with the TACO implementation which uses things like Compressed-Unique as well. Nevertheless, we probably want to separate dimension level types from properties when we add more types and properties.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D132897
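For illustration, with the suffix convention described above a sorted-COO-style layout would be written roughly as below; the exact suffix spellings of that era are given from memory and are only illustrative:
```
#SortedCOO = #sparse_tensor.encoding<{
  // "-nu" marks the compressed level as not-unique,
  // i.e. duplicate coordinates may appear at that level.
  dimLevelType = [ "compressed-nu", "singleton" ]
}>
```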

Revision tags: llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2

# 9921ef73 | 02-Aug-2022 | Aart Bik <ajcbik@google.com>
[mlir][sparse] remove singleton dimension level type (for now)
Although we have plans to support this, and many other, dimension level types, the tag is currently not supported. It will be easy to
NOTE: based on discussion in https://discourse.llvm.org/t/overcoming-sparsification-limitation-on-level-types/62585
https://github.com/llvm/llvm-project/issues/51658
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D131002

Revision tags: llvmorg-15.0.0-rc1, llvmorg-16-init, llvmorg-14.0.6, llvmorg-14.0.5, llvmorg-14.0.4, llvmorg-14.0.3, llvmorg-14.0.2, llvmorg-14.0.1, llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3, llvmorg-14.0.0-rc2, llvmorg-14.0.0-rc1, llvmorg-15-init, llvmorg-13.0.1, llvmorg-13.0.1-rc3, llvmorg-13.0.1-rc2, llvmorg-13.0.1-rc1, llvmorg-13.0.0, llvmorg-13.0.0-rc4, llvmorg-13.0.0-rc3, llvmorg-13.0.0-rc2, llvmorg-13.0.0-rc1, llvmorg-14-init, llvmorg-12.0.1, llvmorg-12.0.1-rc4, llvmorg-12.0.1-rc3, llvmorg-12.0.1-rc2, llvmorg-12.0.1-rc1

# bcfa7bae | 09-May-2021 | Stella Laurenzo <stellaraccident@gmail.com>
[mlir][CAPI] Add CAPI bindings for the sparse_tensor dialect.
* Adds dialect registration, hand coded 'encoding' attribute and test.
* An MLIR CAPI tablegen backend for attributes does not exist, and this is a relatively complicated case. I opted to hand code it in a canonical way for now, which will provide a reasonable blueprint for building out the tablegen version in the future.
* Also added a (local) CMake function for declaring new CAPI tests, since it was getting repetitive/buggy.
Differential Revision: https://reviews.llvm.org/D102141