History log of /llvm-project/mlir/lib/Dialect/Linalg/Transforms/DataLayoutPropagation.cpp (Results 1 – 25 of 35)
Revision Date Author Comments
Revision tags: llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3
# 165f4535 10-Aug-2024 Kazu Hirata <kazu@google.com>

[mlir] Use llvm::is_contained (NFC) (#102714)


# 536486fb 05-Aug-2024 Abhishek Varma <avarma094@gmail.com>

[MLIR][Linalg] Fix DataLayoutPropagation for tensor.unpack + linalg.generic (#101755)

-- While pushing down tensor.unpack through linalg.generic we should
take into account DPS. The current implementation was enforcing creating
a tensor.empty() for the final output value. This should've just been
the outs operand of the original linalg.generic.
-- This commit adds a fix for this issue.

Signed-off-by: Abhishek Varma <abhvarma@amd.com>



Revision tags: llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init
# 4ad96785 08-Jul-2024 Quinn Dawkins <quinn.dawkins@gmail.com>

[mlir][Linalg] Allow propagation of pack through multi use pad (#98039)

This allows bubbling `tensor.pack` through `tensor.pad` when the pad has
multiple uses. A new pad is created and a `tensor.unpack` is inserted to
connect the packed pad with the new users.

To keep the previous behavior, the layout propagation control function
can be modified to disallow multi-use propagation.



# 04fc471f 08-Jul-2024 Han-Chung Wang <hanhan0912@gmail.com>

[mlir][linalg] Switch to use OpOperand* in ControlPropagationFn. (#96697)

It's not easy to determine whether we want to propagate pack/unpack ops
because we don't know the (producer, consumer) information. This
revision switches the control function to `OpOperand*`, so it can
capture the (producer, consumer) pair. E.g.,

```cpp
Operation *producer = opOperand->get().getDefiningOp();
Operation *consumer = opOperand->getOwner();
```



# 002e8192 27-Jun-2024 yifeizh2 <yifei.zhang@intel.com>

[mlir][linalg] Fix empty outer dim case for packing reshape op (#96732)

This PR fixes the issue reported in
[comment](https://github.com/llvm/llvm-project/pull/93529#discussion_r1653311765).


# a945f55d 18-Jun-2024 Adam Siemieniuk <adam.siemieniuk@intel.com>

[mlir][linalg] Add pattern to bubble-up pack through expand shape op (#93529)

Extends bubble-up pack through reshape pattern to handle pack
propagation through expand shape ops.

---------

Co-authored-by: Prashant Kumar <pk5561@gmail.com>



Revision tags: llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5
# d2353695 30-Apr-2024 Peiming Liu <peiming@google.com>

[mlir][NFC] update code to use `mlir::dyn_cast/cast/isa` (#90633)

Fix compiler warning caused by using deprecated interface
(https://github.com/llvm/llvm-project/pull/90413)


# 97069a86 30-Apr-2024 Gaurav Shukla <gaurav@nod-labs.com>

[MLIR] Generalize expand_shape to take shape as explicit input (#90040)

This patch generalizes tensor.expand_shape and memref.expand_shape to
consume the output shape as a list of SSA values. This enables us to
implement generic reshape operations with dynamic shapes using
collapse_shape/expand_shape pairs.

The output_shape input to expand_shape follows the static/dynamic
representation that's also used in `tensor.extract_slice`.

Differential Revision: https://reviews.llvm.org/D140821

---------

Signed-off-by: Gaurav Shukla <gaurav.shukla@amd.com>
Co-authored-by: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>



# 8c0341df 21-Apr-2024 Mehdi Amini <joker.eph@gmail.com>

Revert "[MLIR] Generalize expand_shape to take shape as explicit input" (#89540)

Reverts llvm/llvm-project#69267

This broke some bots.


# e095d978 21-Apr-2024 Gaurav Shukla <gaurav@nod-labs.com>

[MLIR] Generalize expand_shape to take shape as explicit input (#69267)

This patch generalizes tensor.expand_shape and memref.expand_shape to
consume the output shape as a list of SSA values. This enables us to
implement generic reshape operations with dynamic shapes using
collapse_shape/expand_shape pairs.

The output_shape input to expand_shape follows the static/dynamic
representation that's also used in `tensor.extract_slice`.

Differential Revision: https://reviews.llvm.org/D140821

Co-authored-by: Ramiro Leal-Cavazos <ramiroleal050@gmail.com>



Revision tags: llvmorg-18.1.4, llvmorg-18.1.3
# 0c1c0d53 28-Mar-2024 Jerry Wu <cheyuw@google.com>

[MLIR] Add patterns to bubble-up pack and push-down unpack through collapse/expand shape ops (#85297)

Add DataLayoutPropagation patterns to bubble-up pack and push-down
unpack through collapse/expand shape ops.

---------

Co-authored-by: Quinn Dawkins <quinn.dawkins@gmail.com>



Revision tags: llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3
# 886294a2 18-Feb-2024 Quinn Dawkins <quinn.dawkins@gmail.com>

[mlir][linalg] Add pattern to propagate pack up through tensor.pad (#82035)

This mirrors the existing pattern for pushing unpack down through
padding, restricting to cases where the padded dimensions aren't tiled
by the pack.

Additionally reformats the propagation test to make it easier to read.



Revision tags: llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4
# 3b61f5a1 20-Oct-2023 Mehdi Amini <joker.eph@gmail.com>

Apply clang-tidy fixes for performance-unnecessary-value-param in DataLayoutPropagation.cpp (NFC)


# 1609f1c2 14-Nov-2023 long.chen <lipracer@gmail.com>

[mlir][affine][nfc] cleanup deprecated T.cast style functions (#71269)

For details, see the document: https://mlir.llvm.org/deprecation/

Not all changes were made manually; most were made through a clang
tool I wrote: https://github.com/lipracer/cpp-refactor.



Revision tags: llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0
# fd8349cd 11-Sep-2023 MaheshRavishankar <1663364+MaheshRavishankar@users.noreply.github.com>

[mlir][Linalg] Move `linalg.fill` -> `linalg.pack` pattern into `fill` canonicalization patterns. (#66002)

This pattern fits better with the other canonicalization patterns that
exist for `linalg.fill`.



Revision tags: llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4
# 5550c821 08-May-2023 Tres Popp <tpopp@google.com>

[mlir] Move casting calls from methods to function calls

The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
functionality in addition to defining methods with the same name.
This change begins the migration of uses of the method to the
corresponding function call as has been decided as more consistent.

Note that there still exist classes that only define methods directly,
such as AffineExpr, and this does not include work currently to support
a functional cast/isa call.

Caveats include:
- This clang-tidy script probably has more problems.
- This only touches C++ code, so nothing that is being generated.

Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443

Implementation:
This first patch was created with the following steps. The intention is
to only do automated changes at first, so I waste less time if it's
reverted, and so the first mass change is more clear as an example to
other teams that will need to follow similar steps.

Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
additional check:
https://github.com/llvm/llvm-project/compare/main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks
and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds
them to a pure state.
4. Some changes have been deleted for the following reasons:
- Some files had a variable also named cast
- Some files had not included a header file that defines the cast
functions
- Some files are definitions of the classes that have the casting
methods, so the code still refers to the method instead of the
function without adding a prefix or removing the method declaration
at the same time.

```shell
ninja -C $BUILD_DIR clang-tidy

run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
-header-filter=mlir/ mlir/* -fix

rm -rf $BUILD_DIR/tools/mlir/**/*.inc

git restore mlir/lib/IR mlir/lib/Dialect/DLTI/DLTI.cpp\
mlir/lib/Dialect/Complex/IR/ComplexDialect.cpp\
mlir/lib/**/IR/\
mlir/lib/Dialect/SparseTensor/Transforms/SparseVectorization.cpp\
mlir/lib/Dialect/Vector/Transforms/LowerVectorMultiReduction.cpp\
mlir/test/lib/Dialect/Test/TestTypes.cpp\
mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.cpp\
mlir/test/lib/Dialect/Test/TestAttributes.cpp\
mlir/unittests/TableGen/EnumsGenTest.cpp\
mlir/test/python/lib/PythonTestCAPI.cpp\
mlir/include/mlir/IR/
```

Differential Revision: https://reviews.llvm.org/D150123



# 9f242404 05-May-2023 Lorenzo Chelini <l.chelini@icloud.com>

[MLIR][Linalg] Rename `packElementWiseOp` to `packGenericOp` (NFC)

Commit b4563ee17ce45728a323c2708e549627b0a8ee9c enabled propagation for
pack and unpack through non-elementwise operations; update comments and
method names to reflect the changes made.

Rework some tests where the `linalg.generic` was reading from
`tensor.empty`, which is undefined behaviour.

Reviewed By: hanchung, qedawkins

Differential Revision: https://reviews.llvm.org/D149952



# 8eed9f38 05-May-2023 Hanhan Wang <hanchung@google.com>

[mlir][linalg] Add support for folding pack(fill) into fill.

Reviewed By: qedawkins

Differential Revision: https://reviews.llvm.org/D149801


Revision tags: llvmorg-16.0.3
# 61be9358 26-Apr-2023 Lorenzo Chelini <l.chelini@icloud.com>

[MLIR][Linalg] Change destination logic in `bubbleUpPackOpThroughGenericOp`.

In `bubbleUpPackOpThroughGenericOp`, we replaced the init operands with
a new `tensor.empty` if the operation was a pure element-wise op. This
behaviour is not wrong but not ideal because we "break" the original
use-def-chain of the output operand by materializing a new
`tensor.empty`. We should use `tensor.empty` as a destination *only* if the
initial init operand was already a `tensor.empty`, as we do in
`PushDownUnpack`.

Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D149250



Revision tags: llvmorg-16.0.2, llvmorg-16.0.1
# b4563ee1 01-Apr-2023 Quinn Dawkins <quinn@nod-labs.com>

[mlir][linalg] Enable propagation of pack/unpack ops through non-elementwise

Allows pack propagation through non-elementwise generics as long as all
tiled dimensions have parallel iterator types and are only indexed with
affine dim expressions by any of the operands.

This enables unpack propagation cases where the result type differs
from the current unpack destination tensor, and thus motivates a helper
similar to the one for pack for creating a destination tensor based on
pack information.

Outer dim permutations are allowed to permute reduction dims; however,
this remains unsupported for non-affine dim indexing map results.
Additionally ops with gather semantics now explicitly prohibit propagation.

Pack/unpack propagation through reductions may not always be beneficial
so user control over propagation decisions is made available through
a control function similar to the one for fusion.

Differential Revision: https://reviews.llvm.org/D147508



Revision tags: llvmorg-16.0.0, llvmorg-16.0.0-rc4
# 3cf42c3f 06-Mar-2023 Adrian Kuegel <akuegel@google.com>

[mlir] Apply ClangTidy readability finding (NFC)


# 5885c85f 28-Feb-2023 Lorenzo Chelini <l.chelini@icloud.com>

[MLIR][Linalg] Fix propagation for rank-zero tensor

`isScalar` only returns true if the operand is non-shaped,
but we also need to handle rank-zero tensors.

Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D144989



Revision tags: llvmorg-16.0.0-rc3
# 1c228026 17-Feb-2023 Lorenzo Chelini <l.chelini@icloud.com>

[MLIR][Linalg] Change insertion point for `bubbleUpPackOpThroughElemGenericOp`

Currently, the insertion point for `bubbleUpPackOpThroughElemGenericOp`
is after the tensor.pack, which means that the new generic will be
created right after the tensor.pack. This is inconvenient because we are
moving the position of the generic; the idea is to move pack/unpack
around, not linalg.generics. This PR changes the insertion point to
preserve the position of the generic.

Additionally, it restricts the pattern to fire if the generic has a
single user (`tensor.pack`) to avoid introducing recomputation.

Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D144246



# 5f2618fe 21-Feb-2023 Quinn Dawkins <quinn@nod-labs.com>

[mlir][linalg] Allow constant exprs in pack/unpack propagation through elementwise

The pack/unpack propagation patterns currently assume all map results
for non-scalar arguments are AffineDimExprs, leading to crashes when the
input operand being packed has constant expressions.

Differential Revision: https://reviews.llvm.org/D144443



# 21e6e70c 20-Feb-2023 Quinn Dawkins <quinn@nod-labs.com>

[mlir][linalg] Match element type of result when doing propagation of unpack through elementwise

When propagating tensor.unpack ops through elementwise generics, a new
output tensor is needed if the element type of the input differs from
that of the output in the elementwise op.

Differential Revision: https://reviews.llvm.org/D144438


