Revision tags: llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2 |
|
#
206fad0e |
| 05-Oct-2024 |
Matthias Springer <me@m-sp.org> |
[mlir][NFC] Mark type converter in `populate...` functions as `const` (#111250)
This commit marks the type converter in `populate...` functions as
`const`. This is useful for debugging.
Patterns already take a `const` type converter. However, some
`populate...` functions do not only add new patterns, but also add
additional type conversion rules. That makes it difficult to find the
place where a type conversion was added in the code base. With this
change, all `populate...` functions that only populate patterns now have
a `const` type converter. Programmers can then conclude from the
function signature that these functions do not register any new type
conversion rules.
This change also includes some minor cleanups around the 1:N dialect conversion
infrastructure, which did not always pass the type converter as a
`const` object internally.
|
Revision tags: llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4 |
|
#
5122a2c2 |
| 11-Apr-2024 |
Aart Bik <ajcbik@google.com> |
[mlir][sparse] allow for direct-out passing of sparse tensor buffers (#88327)
In order to support various external frameworks (JAX vs PyTorch) we need
a bit more flexibility in [dis]assembling external buffers to and from
sparse tensors in MLIR land. This PR adds a direct-out option that
avoids the rigid pre-allocated-buffer copy-out semantics.
Note that over time, we expect the [dis]assemble operations to converge
into something that supports all sorts of external frameworks. Until
then, this option helps in experimenting with different designs.
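For reference, a hedged sketch of the disassemble direction this option affects, following the `sparse_tensor.disassemble` documentation; `#COO` and the output buffers `%op`, `%oc`, `%od` are assumed defined elsewhere, and the exact syntax has evolved across releases:

```mlir
// Disassemble a 3x4 COO tensor with 3 stored entries back into buffers;
// the direct-out option changes whether results may alias these buffers
// or must be copied into pre-allocated ones.
%p, %c, %v, %p_len, %c_len, %v_len =
    sparse_tensor.disassemble %sp : tensor<3x4xf64, #COO>
      out_lvls(%op, %oc : tensor<2xindex>, tensor<3x2xindex>)
      out_vals(%od : tensor<3xf64>)
      -> (tensor<2xindex>, tensor<3x2xindex>), tensor<3xf64>, (index, index), index
```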
|
Revision tags: llvmorg-18.1.3 |
|
#
dc4cfdbb |
| 29-Mar-2024 |
Aart Bik <ajcbik@google.com> |
[mlir][sparse] provide an AoS "view" into sparse runtime support lib (#87116)
Note that even though the sparse runtime support lib always uses SoA
storage for COO storage (and provides correct codegen by means of views
into this storage), in some rare cases we need the true physical AoS
storage as a coordinate buffer. This PR provides that functionality by
means of a (costly) coordinate buffer call.
Since this is currently only used for testing/debugging by means of the
sparse_tensor.print method, this solution is acceptable. If we ever want
a performant version of this, we should truly support AoS storage of COO
in addition to the SoA used right now.
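A hedged sketch of the (costly) coordinate buffer call on the IR side, assuming a `#COO` encoding and following the `sparse_tensor.coordinates_buffer` op documentation:

```mlir
// Returns the linear AoS coordinate storage of the COO tensor as one
// flat memref; on the lib path this is materialized by the runtime
// support library.
%buf = sparse_tensor.coordinates_buffer %coo
     : tensor<64x64xf64, #COO> to memref<?xindex>
```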
|
Revision tags: llvmorg-18.1.2 |
|
#
e8e8df4c |
| 15-Mar-2024 |
Matthias Springer <me@m-sp.org> |
[mlir][sparse] Add `has_runtime_library` test op (#85355)
This commit adds a new test-only op:
`sparse_tensor.has_runtime_library`. The op returns "1" if the sparse
compiler runs in runtime library mode.
This op is useful for writing test cases that require different IR
depending on whether the sparse compiler runs in runtime library or
codegen mode.
This commit fixes a memory leak in `sparse_pack_d.mlir`. This test case
uses `sparse_tensor.assemble` to create a sparse tensor SSA value from
existing buffers. The runtime library path reallocates and copies the existing
buffers; the codegen path does not. Therefore, the test requires
additional deallocations when running in runtime library mode.
Alternatives considered:
- Make the codegen path allocate. "Codegen" is the "default" compilation
mode and it handles `sparse_tensor.assemble` correctly. The issue is
with the runtime library path, which should not allocate. Therefore, it
is better to put a workaround in the runtime library path than to work
around the issue with a new flag in the codegen path.
- Add a `sparse_tensor.runtime_only` attribute to
`bufferization.dealloc_tensor`. Verifying that the attribute can only be
attached to `bufferization.dealloc_tensor` may introduce an unwanted
dependency of `MLIRSparseTensorDialect` on `MLIRBufferizationDialect`.
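A minimal sketch of how a test can branch on the compilation mode (assuming `%s` is a sparse tensor with a hypothetical `#SV` encoding, assembled from user-provided buffers):

```mlir
// Deallocate only on the runtime library path, which copied the input
// buffers during sparse_tensor.assemble; the codegen path did not.
%has_lib = sparse_tensor.has_runtime_library : i1
scf.if %has_lib {
  bufferization.dealloc_tensor %s : tensor<10xf64, #SV>
}
```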
|
#
94e27c26 |
| 12-Mar-2024 |
Peiming Liu <peiming@google.com> |
[mlir][sparse] reuse tensor.insert operation to insert elements into a sparse tensor (#84987)
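A minimal sketch of the unified insertion path (assuming a `#CSR` encoding and index values `%i`, `%j`):

```mlir
// tensor.insert now works on sparse destinations; pending inserts are
// finalized with sparse_tensor.load.
%t1 = tensor.insert %f into %t0[%i, %j] : tensor<8x8xf64, #CSR>
%t2 = sparse_tensor.load %t1 hasInserts : tensor<8x8xf64, #CSR>
```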
|
Revision tags: llvmorg-18.1.1 |
|
#
fc9f1d49 |
| 06-Mar-2024 |
Peiming Liu <peiming@google.com> |
[mlir][sparse] use a consistent order between [dis]assembleOp and storage layout (#84079)
|
Revision tags: llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init |
|
#
365777ec |
| 12-Dec-2023 |
Aart Bik <39774503+aartbik@users.noreply.github.com> |
[mlir][sparse] refactor utilities into transform/utils dir (#75250)
Separates actual transformation files from supporting utility files in
the transforms directory. Includes a bazel overlay fix for the build (as
well as a bit of cleanup of that file to be less verbose and more
flexible).
|
Revision tags: llvmorg-17.0.6 |
|
#
1944c4f7 |
| 27-Nov-2023 |
Aart Bik <39774503+aartbik@users.noreply.github.com> |
[mlir][sparse] rename DimLevelType to LevelType (#73561)
The "Dim" prefix is a legacy left-over that no longer makes sense, since
we have a very strict "Dimension" vs. "Level" definition for sparse
[mlir][sparse] rename DimLevelType to LevelType (#73561)
The "Dim" prefix is a legacy left-over that no longer makes sense, since
we have a very strict "Dimension" vs. "Level" definition for sparse
tensor types and their storage.
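For context, a sketch of where level types appear: dimensions index the logical tensor, levels index the stored format, and each level carries a level type (formerly a "dim level type"):

```mlir
// CSR: two dimensions map to two levels, each with a level type.
#CSR = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : dense, d1 : compressed)
}>
```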
|
#
1dd387e1 |
| 22-Nov-2023 |
Aart Bik <39774503+aartbik@users.noreply.github.com> |
[mlir][sparse] change dim level type -> level type (#73058)
The "dimension" before "level" does not really make sense Note that
renaming the actual type DimLevelType to LevelType is still TBD, sinc
[mlir][sparse] change dim level type -> level type (#73058)
The "dimension" before "level" does not really make sense Note that
renaming the actual type DimLevelType to LevelType is still TBD, since
this is an externally visible change (e.g. visible to Python API).
|
#
2323f48e |
| 16-Nov-2023 |
Aart Bik <39774503+aartbik@users.noreply.github.com> |
[mlir][sparse] refactor dim2lvl/lvl2dim lvlsizes setup (#72474)
This change provides access to the individual components of dim sizes
and lvl sizes after each codegen utility call.
This is step 2 out of 3 to make sparse_tensor.new work for BSR.
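A sketch of the dim2lvl map in question, using the standard 2x2 BSR encoding (the inverse lvl2dim map is inferred):

```mlir
// Two dimensions (i, j) map to four levels; the refactored utilities
// expose the individual dim-size and lvl-size components of such maps.
#BSR = #sparse_tensor.encoding<{
  map = (i, j) -> (i floordiv 2 : dense,
                   j floordiv 2 : compressed,
                   i mod 2      : dense,
                   j mod 2      : dense)
}>
```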
|
Revision tags: llvmorg-17.0.5 |
|
#
af8428c0 |
| 13-Nov-2023 |
Aart Bik <39774503+aartbik@users.noreply.github.com> |
[mlir][sparse] unify support of (dis)assemble between direct IR/lib path (#71880)
Note that the (dis)assemble operations still make some simplifying
assumptions (e.g. trailing 2-D COO in AoS format) but now at least both
the direct IR and support library path behave exactly the same.
Generalizing the ops is still TBD.
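A hedged sketch of the assemble direction, following the op documentation and the trailing 2-D `#COO`-in-AoS assumption noted above:

```mlir
// Assemble a 3x4 COO tensor from a positions buffer, an AoS coordinates
// buffer, and a values buffer.
%sp = sparse_tensor.assemble (%pos, %crd), %vals
    : (tensor<2xindex>, tensor<3x2xindex>), tensor<3xf64>
    to tensor<3x4xf64, #COO>
```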
|
#
160d483b |
| 07-Nov-2023 |
Aart Bik <39774503+aartbik@users.noreply.github.com> |
[mlir][sparse] implement loose-compressed/2:4 on direct IR codegen path (#71461)
Fills in the missing cases for direct IR codegen.
Note that non-permutation handling is still TBD.
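A sketch of the two level types now covered (encoding names are illustrative; `structured[2, 4]` is the current spelling for 2:4 sparsity and may have been spelled differently at the time of this commit):

```mlir
// Loose-compressed keeps a (lo, hi) position pair per compressed region.
#LCSR = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : dense, d1 : loose_compressed)
}>
// 2:4 structured sparsity: at most 2 nonzeros per block of 4.
#NV24 = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : dense, d1 : structured[2, 4])
}>
```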
|
#
22212ca7 |
| 02-Nov-2023 |
Aart Bik <39774503+aartbik@users.noreply.github.com> |
[mlir][sparse] simplify some header code (#70989)
This is a first revision in a small series of changes that removes
duplications between direct encoding methods and sparse tensor type
wrapper methods (in favor of the latter abstraction, since it provides
more safety). The goal is to simply end up with "just" SparseTensorType.
|
Revision tags: llvmorg-17.0.4 |
|
#
dcae289d |
| 31-Oct-2023 |
Christian Ulmann <christian.ulmann@nextsilicon.com> |
[MLIR][SparseTensor] Introduce opaque pointers in LLVM dialect lowering (#70570)
This commit changes the SparseTensor LLVM dialect lowering from using
`llvm.ptr<i8>` to `llvm.ptr`. This change ensures that the lowering now
properly relies on opaque pointers, instead of working with already
type-erased i8 pointers.
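Illustrative before/after for a runtime-call declaration (the two declarations would not coexist in one module, and the old typed-pointer form no longer parses in current MLIR):

```mlir
llvm.func @delSparseTensor(!llvm.ptr<i8>)  // before: type-erased i8 pointer
llvm.func @delSparseTensor(!llvm.ptr)      // after: opaque pointer
```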
|
#
7d608ee2 |
| 27-Oct-2023 |
Peiming Liu <36770114+PeimingLiu@users.noreply.github.com> |
[mlir][sparse] unify sparse_tensor.out rewriting rules (#70518)
|
#
ef222988 |
| 26-Oct-2023 |
Peiming Liu <36770114+PeimingLiu@users.noreply.github.com> |
[mlir][sparse] implements sparse_tensor.reinterpret_map (#70388)
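A hedged sketch, assuming the 2x2 `#BSR` encoding shown earlier and a corresponding 4-level encoding `#BSR_4D`: the op reinterprets the dim2lvl map without moving any data.

```mlir
// View a 2-D block-sparse tensor as its 4-D storage layout.
%lvl = sparse_tensor.reinterpret_map %bsr
     : tensor<6x6xi32, #BSR> to tensor<3x3x2x2xi32, #BSR_4D>
```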
|
#
c780352d |
| 24-Oct-2023 |
Peiming Liu <36770114+PeimingLiu@users.noreply.github.com> |
[mlir][sparse] implement sparse_tensor.lvl operation. (#69993)
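A minimal sketch (assuming `%sp` is a `#CSR` tensor): like `tensor.dim`, but querying a storage level size rather than a dimension size.

```mlir
%c1 = arith.constant 1 : index
%sz = sparse_tensor.lvl %sp, %c1 : tensor<?x?xf64, #CSR>
```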
|
#
43961264 |
| 24-Oct-2023 |
Peiming Liu <36770114+PeimingLiu@users.noreply.github.com> |
[mlir][sparse] hoists alloca outside the outermost loop. (#70085)
|
#
6243d7d2 |
| 20-Oct-2023 |
Peiming Liu <36770114+PeimingLiu@users.noreply.github.com> |
[mlir][sparse] fix stack overflow due to memref.alloca in loops (#69786)
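A sketch of the fix pattern in the two commits above (assuming `%lo`, `%hi`, `%c1` are defined `index` values): a `memref.alloca` inside a loop body grows the stack on every iteration, so the buffer is hoisted and reused.

```mlir
%buf = memref.alloca() : memref<4xindex>  // hoisted: one slot, reused
scf.for %i = %lo to %hi step %c1 {
  // ... fill and consume %buf; an alloca placed here instead would
  // overflow the stack for long-running loops ...
}
```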
|
Revision tags: llvmorg-17.0.3 |
|
#
d392073f |
| 16-Oct-2023 |
Aart Bik <39774503+aartbik@users.noreply.github.com> |
[mlir][sparse] simplify reader construction of new sparse tensor (#69036)
Making the materialize-from-reader method part of the Swiss army knife
suite again removes a lot of redundant boiler plate code and unifies the
parameter setup into a single centralized utility. Furthermore, we now
have minimized the number of entry points into the library that need a
non-permutation map setup, simplifying what comes next.
|
#
bbecd422 |
| 13-Oct-2023 |
Aart Bik <39774503+aartbik@users.noreply.github.com> |
[mlir][sparse] cleanup sparse tensor materialization parameter setup (#68956)
|
#
2045cca0 |
| 13-Oct-2023 |
Aart Bik <39774503+aartbik@users.noreply.github.com> |
[mlir][sparse] add a forwarding insertion to SparseTensorStorage (#68939)
|
#
fbe47bf5 |
| 13-Oct-2023 |
Aart Bik <39774503+aartbik@users.noreply.github.com> |
[mlir][sparse] remove dead code from utils (#68943)
|
#
f248d0b2 |
| 12-Oct-2023 |
Peiming Liu <36770114+PeimingLiu@users.noreply.github.com> |
[mlir][sparse] implement sparse_tensor.reorder_coo (#68916)
As a side effect of the change, it also unifies the convertOp
implementation between the lib and codegen paths.
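A hedged sketch (encoding names illustrative; `#COO_unordered` differs from `#COO` only in its ordering properties):

```mlir
// Sort an unordered COO tensor into the canonical coordinate order.
%ordered = sparse_tensor.reorder_coo quick_sort %unordered
         : tensor<?x?xf64, #COO_unordered> to tensor<?x?xf64, #COO>
```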
|
#
d5622dec |
| 09-Oct-2023 |
Aart Bik <39774503+aartbik@users.noreply.github.com> |
[mlir][sparse] rename map utility (#68611)
Rename util genReaderBuffers -> genMapBuffers since it is no longer
specific to the reader, but handles all MapRef data in general.
|