Searched full:shapes (Results 1 – 25 of 220) sorted by relevance

/llvm-project/flang/test/Semantics/
argshape01.f90 2 ! Detect incompatible argument shapes
88 …ble with dummy argument 's=': incompatible dummy argument #1: incompatible dummy data object shapes
90 …ble with dummy argument 's=': incompatible dummy argument #1: incompatible dummy data object shapes
92 …ble with dummy argument 's=': incompatible dummy argument #1: incompatible dummy data object shapes
94 …ble with dummy argument 's=': incompatible dummy argument #1: incompatible dummy data object shapes
98 …ble with dummy argument 's=': incompatible dummy argument #1: incompatible dummy data object shapes
103 …ble with dummy argument 's=': incompatible dummy argument #1: incompatible dummy data object shapes
105 …ble with dummy argument 's=': incompatible dummy argument #1: incompatible dummy data object shapes
107 …ble with dummy argument 's=': incompatible dummy argument #1: incompatible dummy data object shapes
109 …ble with dummy argument 's=': incompatible dummy argument #1: incompatible dummy data object shapes
[all …]
/llvm-project/mlir/include/mlir/Dialect/
Traits.h 31 /// given shapes if they are broadcast compatible. Returns false and clears
36 /// Zip together the dimensions in the two given shapes by prepending the shape
50 /// Returns true if a broadcast between n shapes is guaranteed to be
52 /// shapes are not broadcastable; it might guarantee that they are not
58 /// shapes, getBroadcastedShape would return true and have a result with unknown
60 /// both shapes to have a dimension greater than 1 and different which would
62 bool staticallyKnownBroadcastable(ArrayRef<SmallVector<int64_t, 6>> shapes);
81 /// dimension pair of the operands' shapes should either be the same or one
82 /// of them is one. Also, the results's shapes should have the corresponding
83 /// dimension equal to the larger one, if known. Shapes are checked partially if
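The right-aligned broadcasting rule these `Traits.h` comments describe can be sketched as follows (an illustrative Python model, not the MLIR implementation; the function name is my own, and `None` stands in for an unknown extent):

```python
def broadcast_shapes(a, b):
    """Compute the broadcasted shape of two shapes, right-aligned.

    Each extent is an int or None for "unknown". Returns the combined
    shape, or raises ValueError if the shapes cannot broadcast.
    """
    # Pad the shorter shape with leading 1s so both have the same rank.
    rank = max(len(a), len(b))
    a = [1] * (rank - len(a)) + list(a)
    b = [1] * (rank - len(b)) + list(b)
    result = []
    for x, y in zip(a, b):
        if x == 1:
            result.append(y)      # the other extent (possibly unknown) wins
        elif y == 1 or x == y:
            result.append(x)
        elif x is None or y is None:
            result.append(None)   # cannot decide statically; stays unknown
        else:
            raise ValueError(f"incompatible extents {x} and {y}")
    return result
```

Dimensions pair up from the trailing edge; an extent of 1 always yields, and an unknown extent leaves the result dimension unknown rather than failing.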
/llvm-project/mlir/lib/Conversion/ShapeToStandard/
ShapeToStandard.td 20 $_builder.getStringAttr("required broadcastable shapes")
23 def CstrBroadcastableToRequire : Pat<(Shape_CstrBroadcastableOp $shapes),
25 (Shape_IsBroadcastableOp $shapes),
29 $_builder.getStringAttr("required equal shapes")
32 def CstrEqToRequire : Pat<(Shape_CstrEqOp $shapes),
33 (Shape_CstrRequireOp (Shape_ShapeEqOp $shapes), (EqStringAttr))>;
/llvm-project/flang/include/flang/Lower/
BoxAnalyzer.h 105 llvm::SmallVectorImpl<int64_t> &&shapes) in LBoundsAndShape()
106 : lbounds{std::move(lbounds)}, shapes{std::move(shapes)} {} in LBoundsAndShape()
115 llvm::SmallVector<int64_t> shapes; member
122 llvm::SmallVectorImpl<int64_t> &&shapes) in StaticArray()
123 : ScalarSym{sym}, LBoundsAndShape{std::move(lbounds), std::move(shapes)} { in StaticArray()
162 llvm::SmallVectorImpl<int64_t> &&shapes) in StaticArrayStaticChar()
164 std::move(shapes)} {}
176 llvm::SmallVectorImpl<int64_t> &&shapes) in StaticArrayDynamicChar()
178 std::move(shapes)} {}
181 llvm::SmallVectorImpl<int64_t> &&shapes) in StaticArrayDynamicChar()
[all …]
/llvm-project/mlir/include/mlir/Dialect/Shape/IR/
ShapeOps.td 60 Returns the broadcasted shape for input shapes or extent tensors. The rest
66 If the two operand shapes are of different rank the smaller one is padded
81 let arguments = (ins Variadic<Shape_ShapeOrExtentTensorType>:$shapes,
88 $shapes attr-dict `:` type($shapes) `->` type($result)
180 let summary = "Returns whether the input shapes or extent tensors are equal";
183 they are equal. When extent tensors are compared to shapes they are
184 regarded as their equivalent non-error shapes. Error shapes can be tested
189 let arguments = (ins Variadic<Shape_ShapeOrExtentTensorType>:$shapes);
[all...]
/llvm-project/mlir/lib/Dialect/
Traits.cpp 26 ArrayRef<SmallVector<int64_t, 6>> shapes) { in staticallyKnownBroadcastable() argument
27 assert(!shapes.empty() && "Expected at least one shape"); in staticallyKnownBroadcastable()
28 size_t maxRank = shapes[0].size(); in staticallyKnownBroadcastable()
29 for (size_t i = 1; i != shapes.size(); ++i) in staticallyKnownBroadcastable()
30 maxRank = std::max(maxRank, shapes[i].size()); in staticallyKnownBroadcastable()
32 // We look backwards through every column of `shapes`. in staticallyKnownBroadcastable()
36 for (ArrayRef<int64_t> extent : shapes) { in staticallyKnownBroadcastable()
63 // To compute the result broadcasted shape, we compare operand shapes in getBroadcastedShape()
243 // If all operands are unranked, then all result shapes are possible. in verifyCompatibleOperandBroadcast()
256 return op->emitOpError("operands don't have broadcast-compatible shapes"); in verifyCompatibleOperandBroadcast()
[all...]
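The guarantee that the `staticallyKnownBroadcastable` snippets above document, walking the trailing dimension columns of all shapes, can be modelled like this (a simplified Python sketch, not the C++ code; `None` marks an unknown extent, and a `False` result is deliberately inconclusive, mirroring the documented contract):

```python
def statically_known_broadcastable(shapes):
    """Return True only when broadcasting the given shapes is guaranteed
    to succeed. False is inconclusive: the shapes may still broadcast
    at runtime, just not provably so at compile time."""
    assert shapes, "Expected at least one shape"
    max_rank = max(len(s) for s in shapes)
    # Walk the trailing dimensions column by column.
    for col in range(1, max_rank + 1):
        extents = [s[-col] for s in shapes if len(s) >= col]
        non_one = {e for e in extents if e != 1}
        if None in non_one:
            return False   # an unknown extent could clash at runtime
        if len(non_one) > 1:
            return False   # two different concrete extents > 1 conflict
    return True
```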
/llvm-project/mlir/lib/IR/
TypeUtilities.cpp 55 /// Returns success if the given two shapes are compatible. That is, they have
116 /// Returns success if all given types have compatible shapes. That is, they are
117 /// all scalars (not shaped), or they are all shaped types and any ranked shapes
143 // Remove all unranked shapes in verifyCompatibleShapes()
144 auto shapes = llvm::filter_to_vector<8>( in verifyCompatibleShapes() local
146 if (shapes.empty()) in verifyCompatibleShapes()
150 auto firstRank = shapes.front().getRank(); in verifyCompatibleShapes()
151 if (llvm::any_of(shapes, in verifyCompatibleShapes()
159 shapes, [&](auto shape) { return shape.getRank() >= i; }), in verifyCompatibleShapes()
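The compatibility check that `verifyCompatibleShapes` performs, where unranked shapes are ignored and ranked shapes must agree in rank and in every statically known extent, can be approximated in Python (a hand-written sketch, not the MLIR code; the `"unranked"` sentinel and the function name are my own modelling):

```python
def verify_compatible_shapes(shapes):
    """Check that a list of shapes is mutually compatible. A shape is a
    list of extents (None for a dynamic extent) or the literal string
    "unranked"; unranked shapes are compatible with everything."""
    ranked = [s for s in shapes if s != "unranked"]
    if not ranked:
        return True
    # All ranked shapes must have the same rank.
    rank = len(ranked[0])
    if any(len(s) != rank for s in ranked):
        return False
    # In each dimension, all statically known extents must agree.
    for dims in zip(*ranked):
        known = {d for d in dims if d is not None}
        if len(known) > 1:
            return False
    return True
```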
/llvm-project/llvm/test/Transforms/LowerMatrixIntrinsics/
shape-verification.ll 1 ; RUN: not --crash opt -passes='lower-matrix-intrinsics' -verify-matrix-shapes=true -S %s 2>&1 | Fi…
2 ; RUN: opt -passes='lower-matrix-intrinsics' -verify-matrix-shapes=false -S %s 2>&1 | FileCheck --c…
4 ; VERIFY: Conflicting shapes (6x1 vs 1x6)
5 ; NOVERIFY-NOT: Conflicting shapes
/llvm-project/mlir/docs/
ShapeInference.md 60 runtime and compiler (for constructions of ops/refinement of shapes, reification
68 or bounded shapes at a later point). This allows for decoupling of these:
92 have shapes that are not known statically (for example, `tensor<16x?xf32>`
121 if a compiler only wants to differentiate exact shapes vs dynamic
122 shapes, then it need not consider a more generic shape lattice even
174 especially as for statically known shapes, arbitrary arithmetic
222 1. All static functions are usable for dynamic/unknown shapes;
223 * More involved computations can be performed with statically known shapes
287 is `[n+n, m]` matrix), while some ops only have defined shapes under certain
294 arbitrary computations needed to specify output shapes.
/llvm-project/mlir/lib/Dialect/Shape/IR/
ShapeCanonicalization.td 26 def CstrBroadcastableEqOps : Pat<(Shape_CstrBroadcastableOp:$op $shapes),
28 [(AllInputShapesEq $shapes)]>;
30 def CstrEqEqOps : Pat<(Shape_CstrEqOp:$op $shapes),
32 [(AllInputShapesEq $shapes)]>;
Shape.cpp 223 // TODO: Canonicalization should be implemented for shapes that can be
483 // In this example if shapes [0, 1, 2] are broadcastable, then it means that
484 // shapes [0, 1] are broadcastable too, and can be removed from the list of
485 // constraints. If shapes [0, 1, 2] are not broadcastable, then it doesn't
486 // matter if shapes [0, 1] are broadcastable (same for shapes [3, 4, 5]).
508 // Collect shapes checked by `cstr_broadcastable` operands. in matchAndRewrite()
509 SmallVector<std::pair<CstrBroadcastableOp, DenseSet<Value>>> shapes; in matchAndRewrite() local
512 shapes.emplace_back(cstr, std::move(shapesSet)); in matchAndRewrite()
516 llvm::sort(shapes, [](aut in matchAndRewrite()
565 SmallVector<Value, 8> shapes; matchAndRewrite() local
[all...]
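The subset-pruning idea in the `Shape.cpp` comment above, where a `cstr_broadcastable` over shapes [0, 1] is implied by one over [0, 1, 2] and can be dropped, might be sketched as (an illustrative model only; constraints are reduced to frozensets of shape identifiers):

```python
def prune_broadcastable_constraints(constraints):
    """Drop redundant broadcastability constraints: if the shapes checked
    by one constraint are a subset of those checked by another, the
    smaller check is implied by the larger one."""
    # Visit larger sets first so each candidate is only compared against
    # potential supersets that were already kept.
    ordered = sorted(constraints, key=len, reverse=True)
    kept = []
    for c in ordered:
        if not any(c <= k for k in kept):
            kept.append(c)
    return kept
```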
/llvm-project/llvm/lib/Target/X86/
X86PreTileConfig.cpp 9 /// \file Pass to pre-config the shapes of AMX registers
10 /// AMX register needs to be configured before use. The shapes of AMX register
13 /// The instruction ldtilecfg is used to config the shapes. It must be reachable
14 /// for all variable shapes. ldtilecfg will be inserted more than once if we
144 unsigned Shapes = 0;
146 Shapes = 1;
148 Shapes = 2; in hoistShapesInBB() argument
149 if (!Shapes) in hoistShapesInBB()
152 collectShapeInfo(MI, Shapes); in hoistShapesInBB()
168 void collectShapeInfo(MachineInstr &MI, unsigned Shapes); in hoistShapesInBB()
[all...]
/llvm-project/flang/docs/
PolymorphicEntities.md 353 function get_all_area(shapes)
355 type(shape_array) :: shapes(:)
361 do i = 1, size(shapes)
362 get_all_area = get_all_area + shapes(i)%item%get_area()
389 type(shape_array), dimension(2) :: shapes
391 allocate (triangle::shapes(1)%item)
392 allocate (rectangle::shapes(2)%item)
394 do i = 1, size(shapes)
395 call shapes(i)%item%init(i)
398 call set_base_values(shapes(
[all...]
ArrayComposition.md 144 The shapes of array objects, results of elemental intrinsic functions,
146 But it is possible to determine the shapes of the results of many
190 our implementation of array expressions has decoupled calculation of shapes
/llvm-project/mlir/include/mlir/Dialect/Tosa/Transforms/
Passes.td 35 def TosaInferShapes : Pass<"tosa-infer-shapes", "func::FuncOp"> {
36 let summary = "Propagate shapes across TOSA operations";
38 Pass that uses operand types and propagates shapes to TOSA operations.
39 This includes legalizing rankless and dynamic shapes towards static.
/llvm-project/clang-tools-extra/test/clang-doc/Inputs/basic-project/include/
Shape.h 4 * @brief Abstract base class for shapes.
6 * Provides a common interface for different types of shapes.
/llvm-project/mlir/test/Dialect/
traits.mlir 71 // expected-error @+1 {{operands don't have broadcast-compatible shapes}}
81 … @+1 {{op result type '4x3x3' not broadcast compatible with broadcasted operands's shapes '4x3x2'}}
91 … {{op result type '8x7x6x1' not broadcast compatible with broadcasted operands's shapes '8x7x6x5'}}
151 …-error @+1 {{op result type '2' not broadcast compatible with broadcasted operands's shapes '3x2'}}
168 …ed-error @+1 {{op result type '5' not broadcast compatible with broadcasted operands's shapes '1'}}
/llvm-project/mlir/docs/Tutorials/Toy/
Ch-4.md 31 simply propagate the shapes through the computation until they are all known.
33 site could deduce different shapes. One possibility would be to perform symbolic
36 be function specialization, where every call site with new argument shapes
222 casts between two different shapes.
311 simple shape inference pass to propagate shapes intraprocedurally (within a
325 to have their result shapes inferred.
384 to infer their output shapes. The ShapeInferencePass will operate on functions:
427 // Ask the operation to infer its output shapes.
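The intraprocedural propagation the Toy tutorial describes, asking each op to infer its result shape once all its operand shapes are known and repeating until nothing changes, might be modelled as (a toy Python sketch, not the MLIR pass; the op tuples and `infer_fn` are my own modelling):

```python
def infer_shapes(ops, input_shapes):
    """Fixed-point shape propagation. Each op is a tuple
    (name, inputs, output, infer_fn); shapes maps value names to
    known shapes."""
    shapes = dict(input_shapes)
    worklist = list(ops)
    while worklist:
        progressed = False
        for op in list(worklist):
            name, inputs, output, infer_fn = op
            if all(i in shapes for i in inputs):
                # All operand shapes known: ask the op for its result shape.
                shapes[output] = infer_fn([shapes[i] for i in inputs])
                worklist.remove(op)
                progressed = True
        if not progressed:
            break  # remaining ops cannot be resolved (unknown inputs)
    return shapes
```

For example, a transpose followed by an elementwise multiply resolves in two worklist passes at most, with the transpose's inferred shape feeding the multiply.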
/llvm-project/mlir/lib/Dialect/Tensor/IR/
TensorInferTypeOpInterfaceImpl.cpp 102 SmallVector<OpFoldResult> shapes; in getExpandedOutputDimFromInputShape()
105 shapes.push_back(b.getIndexAttr(padOp.getResultType().getDimSize(dim))); in getExpandedOutputDimFromInputShape()
117 shapes.push_back(getValueOrCreateConstantIndexOp( in getExpandedOutputDimFromInputShape()
122 reifiedReturnShapes.emplace_back(std::move(shapes)); in getExpandedOutputShapeFromInputShape()
175 SmallVector<OpFoldResult> shapes; reifyResultShapes() local
/llvm-project/mlir/docs/Traits/
Broadcastable.md 15 …shape inference mechanism is able to compute the result shape solely based on input operand shapes.
17 - Input operands have broadcast-compatible shapes, according to the verification rules presented be…
67 Given the shapes of two ranked input operands, the result's shape is inferred by equalizing input r…
182 // tensor<1x3xi32>. Inferred and actual shapes differ in rank.
/llvm-project/mlir/include/mlir/IR/
TypeUtilities.h 46 /// Returns success if the given two shapes are compatible. That is, they have
63 /// Returns success if all given types have compatible shapes. That is, they are
64 /// all scalars (not shaped), or they are all shaped types and any ranked shapes
/llvm-project/mlir/test/Conversion/ShapeToStandard/
convert-shape-constraints.mlir 9 // CHECK: cf.assert %[[BROADCAST_IS_VALID]], "required broadcastable shapes"
22 // CHECK: cf.assert %[[EQUAL_IS_VALID]], "required equal shapes"
/llvm-project/mlir/include/mlir/Dialect/MemRef/Transforms/
Passes.h 62 /// `ReifyRankedShapedTypeOpInterface`, in terms of shapes of its input
69 /// in terms of shapes of its input operands.
/llvm-project/mlir/include/mlir/Interfaces/
InferTypeOpInterface.h 164 /// Range of values and shapes (corresponding effectively to Shapes dialect's
277 /// these always populates all element types and shapes or fails, but this
/llvm-project/mlir/examples/toy/Ch7/mlir/
ShapeInferencePass.cpp 10 // propagation of array shapes through function specialization.
83 // Ask the operation to infer its output shapes. in MLIR_DEFINE_EXPLICIT_INTERNAL_INLINE_TYPE_ID()
