History log of /llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorPasses.cpp (Results 1 – 25 of 101)
Revision    Date    Author    Comments
Revision tags: llvmorg-21-init, llvmorg-19.1.7
# 2b5b3cf6 27-Dec-2024 Matthias Springer <me@m-sp.org>

[mlir][sparse_tensor] Migrate `SparseIterationToScf.cpp` to dialect conversion (#121054)

Use the regular dialect conversion driver instead of the 1:N dialect
conversion driver. The 1:N dialect conversion driver will be removed
soon.


# 09dfc571 20-Dec-2024 Jacques Pienaar <jpienaar@google.com>

[mlir] Enable decoupling two kinds of greedy behavior. (#104649)

The greedy rewriter is used in many different flows and it has a lot of
convenience (work list management, debugging actions, tracing, etc.). But
it combines two kinds of greedy behavior: 1) how ops are matched, and
2) folding wherever it can.

These are independent forms of greediness, and combining them leads to
inefficiency, e.g., cases where one needs to create different lowering
phases and apply patterns in a specific order split across different
passes. Using the driver, one ends up needlessly retrying folding and
running multiple rounds of folding attempts, where one final run would
have sufficed.

Of course folks can avoid this behavior locally by building their own
driver, but this is also a commonly requested feature that folks keep
working around locally in suboptimal ways.

For downstream users, there should be no behavioral change. Updating
from the deprecated API should just be a find-and-replace (e.g., of the
`find ./ -type f -exec sed -i
's|applyPatternsAndFoldGreedily|applyPatternsGreedily|g' {} \;` variety),
as the API arguments haven't changed between the two.
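
A minimal sketch of that rename, not taken from the commit itself (the wrapper function below is hypothetical; only the two driver entry points are real): the migration is a drop-in name change.

```cpp
#include "mlir/IR/PatternMatch.h"
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"

// Hypothetical helper: apply a pattern set to an op with the greedy driver.
static mlir::LogicalResult simplify(mlir::Operation *op,
                                    mlir::RewritePatternSet &&patterns) {
  // Before (deprecated):
  //   return mlir::applyPatternsAndFoldGreedily(op, std::move(patterns));
  // After: same arguments, new name.
  return mlir::applyPatternsGreedily(op, std::move(patterns));
}
```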



Revision tags: llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8
# c42bbda4 12-Jun-2024 Peiming Liu <peiming@google.com>

[mlir][sparse] implement lowering rules for ExtractIterSpaceOp. (#89143)

**DO NOT MERGE** until https://github.com/llvm/llvm-project/pull/89003


Revision tags: llvmorg-18.1.7
# 99835922 28-May-2024 Peiming Liu <peiming@google.com>

[mlir][sparse] remove sparse encoding propagation pass. (#93593)


Revision tags: llvmorg-18.1.6
# ad1083dc 14-May-2024 Peiming Liu <peiming@google.com>

[mlir][sparse] introduce new pass to propagate sparse encodings. (#92052)


Revision tags: llvmorg-18.1.5, llvmorg-18.1.4
# 5122a2c2 11-Apr-2024 Aart Bik <ajcbik@google.com>

[mlir][sparse] allow for direct-out passing of sparse tensor buffers (#88327)

In order to support various external frameworks (JAX vs PyTorch) we need
a bit more flexibility in [dis]assembling external buffers to and from
sparse tensors in MLIR land. This PR adds a direct-out option that
avoids the rigid pre-allocation-for-copy-out semantics.

Note that over time, we expect the [dis]assemble operations to converge
into something that supports all sorts of external frameworks. Until
then, this option helps in experimenting with different options.



# a4c47055 04-Apr-2024 Matthias Springer <me@m-sp.org>

[mlir][linalg] Fix builder API usage in `RegionBuilderHelper` (#87451)

Operations must be created with the supplied builder. Otherwise, the
dialect conversion / greedy pattern rewrite driver can break.

This commit fixes a crash in the dialect conversion:
```
within split at llvm-project/mlir/test/Conversion/TosaToLinalg/tosa-to-linalg-invalid.mlir:1 offset :8:8: error: failed to legalize operation 'tosa.add'
%0 = tosa.add %1, %arg2 : (tensor<10x10xf32>, tensor<*xf32>) -> tensor<*xf32>
^
within split at llvm-project/mlir/test/Conversion/TosaToLinalg/tosa-to-linalg-invalid.mlir:1 offset :8:8: note: see current operation: %9 = "tosa.add"(%8, %arg2) : (tensor<10x10xf32>, tensor<*xf32>) -> tensor<*xf32>
mlir-opt: llvm-project/mlir/include/mlir/IR/UseDefLists.h:198: mlir::IRObjectWithUseList<mlir::OpOperand>::~IRObjectWithUseList() [OperandType = mlir::OpOperand]: Assertion `use_empty() && "Cannot destroy a value that still has uses!"' failed.
```

This commit is the proper fix for #87297 (which was reverted).
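
A minimal sketch of the rule this fix enforces, with illustrative op and callback names (not the actual `RegionBuilderHelper` code): create operations only through the builder the framework supplies, so its listener sees them.

```cpp
#include "mlir/Dialect/Arith/IR/Arith.h"
#include "mlir/IR/Builders.h"

// Illustrative region-body callback.
static void buildBody(mlir::OpBuilder &b, mlir::Location loc,
                      mlir::ValueRange args) {
  // Correct: created via the supplied builder, so any attached listener
  // (dialect conversion, greedy pattern rewrite driver) is notified.
  b.create<mlir::arith::AddFOp>(loc, args[0], args[1]);

  // Broken: a detached builder bypasses the listener; this is the kind of
  // misuse that ends in the use-list assertion shown in the log above.
  // mlir::OpBuilder detached(b.getContext());
  // detached.create<mlir::arith::AddFOp>(loc, args[0], args[1]);
}
```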



Revision tags: llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2
# 4a653b4d 01-Feb-2024 Peiming Liu <36770114+PeimingLiu@users.noreply.github.com>

[mlir][sparse] Support pretty print to debug sparse iteration. (#80207)


# 33b463ad 01-Feb-2024 Aart Bik <39774503+aartbik@users.noreply.github.com>

[mlir][sparse] external entry method wrapper for sparse tensors (#80326)

Similar to the emit_c_interface, this pull request adds a pass that
converts public entry methods that use sparse tensors as input
parameters and/or output return values into wrapper functions that
[dis]assemble the individual tensors that constitute the actual storage
used externally into MLIR sparse tensors. This pass can be used to
prepare the public entry methods of a program that is compiled by the
MLIR sparsifier to interface with an external runtime, e.g., when
passing sparse tensors as numpy arrays from and to Python. Note that
eventual bufferization decisions (e.g. who [de]allocates the underlying
memory) should be resolved in agreement with the external runtime
(Python, PyTorch, JAX, etc.).



Revision tags: llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6
# 5f32bcfb 14-Nov-2023 Aart Bik <39774503+aartbik@users.noreply.github.com>

[mlir][sparse][gpu] re-enable all GPU libgen tests (#72185)

Previous change no longer properly used the GPU libgen pass (even though
most tests still passed falling back to CPU). This revision puts the
proper pass order into place. It also includes a bit of cleanup of the
CPU codegen vs. libgen setup.



Revision tags: llvmorg-17.0.5
# 26968554 13-Nov-2023 Peiming Liu <36770114+PeimingLiu@users.noreply.github.com>

[mlir][sparse] remove filter-loop based algorithm support to handle affine subscript expressions (#71840)


# af8428c0 13-Nov-2023 Aart Bik <39774503+aartbik@users.noreply.github.com>

[mlir][sparse] unify support of (dis)assemble between direct IR/lib path (#71880)

Note that the (dis)assemble operations still make some simplifying
assumptions (e.g. trailing 2-D COO in AoS format) but now at least both
the direct IR and support library path behave exactly the same.

Generalizing the ops is still TBD.



# 5ef44679 08-Nov-2023 Aart Bik <39774503+aartbik@users.noreply.github.com>

[mlir][sparse][gpu] cleanup GPUDataTransferStrategy (#71615)

The flag seems to be doing practically the same thing for zero-cost and
pinned DMA. In addition, the register-host approach is not truly the right
zero-cost mechanism according to Thomas. So we are simplifying the setup for
now, until we have a better definition for what to implement and test.

https://github.com/llvm/llvm-project/issues/64316



Revision tags: llvmorg-17.0.4
# f82bee13 30-Oct-2023 Peiming Liu <36770114+PeimingLiu@users.noreply.github.com>

[mlir][sparse] split post-sparsification-rewriting into two passes. (#70727)


# 6a93da99 27-Oct-2023 Peiming Liu <36770114+PeimingLiu@users.noreply.github.com>

[mlir][sparse] add ReinterpretMapScopeOption for the pass (#70486)


# 7cfac1be 27-Oct-2023 Aart Bik <39774503+aartbik@users.noreply.github.com>

[mlir][sparse] add boilerplate code for a new reinterpret map pass (#70393)

The interesting stuff is of course still coming ;-)


Revision tags: llvmorg-17.0.3
# f248d0b2 12-Oct-2023 Peiming Liu <36770114+PeimingLiu@users.noreply.github.com>

[mlir][sparse] implement sparse_tensor.reorder_coo (#68916)

As a side effect of the change, it also unifies the convertOp
implementation between the lib and codegen paths.


# 06374400 06-Oct-2023 Peiming Liu <36770114+PeimingLiu@users.noreply.github.com>

[mlir][sparse] introduce a pass to stage complex sparse operations into simple steps (#68436)


# 0083f833 03-Oct-2023 Peiming Liu <36770114+PeimingLiu@users.noreply.github.com>

[mlir][sparse] renaming sparse_tensor.sort_coo to sparse_tensor.sort (#68161)

Rationale: the operation does not always sort COO tensors (it is also
used by sparse_tensor.compress, for example).


Revision tags: llvmorg-17.0.2
# bfa3bc43 20-Sep-2023 Peiming Liu <36770114+PeimingLiu@users.noreply.github.com>

[mlir][sparse] unifies sparse_tensor.sort_coo/sort into one operation. (#66722)

The use cases of the two operations largely overlap, so let's simplify
and keep only one of them.


Revision tags: llvmorg-17.0.1, llvmorg-17.0.0
# 098f46dc 13-Sep-2023 Peiming Liu <36770114+PeimingLiu@users.noreply.github.com>

[sparse] allow unpack op to return 0-ranked tensor type. (#66269)

Many frontends canonicalize scalars into 0-ranked tensors; this change will
hopefully make the operation easier to use for those cases.


Revision tags: llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2
# cfa82f77 01-Aug-2023 K-Wu <kunww@google.com>

[mlir][sparse][gpu] introduce flag that controls host-to-device copy strategies (regular DMA default)

Differential Revision: https://reviews.llvm.org/D155352


Revision tags: llvmorg-17.0.0-rc1, llvmorg-18-init
# 4a6b31b8 17-Jul-2023 Alex Zinenko <zinenko@google.com>

[mlir] NFC: untangle SCF Patterns.h and Transforms.h

These two headers both contained a strange mix of definitions related to
both patterns and non-pattern transforms. Put patterns and "populate"
functions into Patterns.h and standalone transforms into Transforms.h.

Depends On: D155223

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D155454
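
A small sketch of where things end up after this split (header paths as named in the commit; which specific declaration lives where should be checked against the headers themselves):

```cpp
// Patterns and populate* entry points now live here:
#include "mlir/Dialect/SCF/Transforms/Patterns.h"
// Standalone (non-pattern) transform functions now live here:
#include "mlir/Dialect/SCF/Transforms/Transforms.h"
```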



Revision tags: llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4
# ee42e236 12-May-2023 Aart Bik <ajcbik@google.com>

[mlir][sparse][gpu] first implementation of the GPU libgen approach

The sparse compiler now has two prototype strategies for GPU acceleration:

* CUDA codegen: this converts sparsified code to CUDA threads
* CUDA libgen: this converts pre-sparsified code to cuSPARSE library calls

This revision introduces the first steps required for the second approach.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D150170



Revision tags: llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1
# 19466ebc 03-Apr-2023 Aart Bik <ajcbik@google.com>

[mlir][sparse][gpu] a first prototype sparse GPU code generator

This implements a proof-of-concept GPU code generator
to the sparse compiler pipeline, currently only capable
of generating CUDA threads for outermost parallel loops.

The objective, obviously, is to grow this concept
to a full blown GPU code generator, capable of the
right combination of code generation as well as exploiting
idiomatic kernels or vector specific libraries (think cuSparse).

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D147483


