
Searched full:ranked (Results 1 – 25 of 157) sorted by relevance


/llvm-project/mlir/include/mlir/Interfaces/
InferTypeOpInterface.h
59 /// Requires: shape is ranked.
63 /// Requires: shape is ranked.
67 /// Requires: shape is ranked.
71 /// Requires: shape is ranked.
80 /// Requires: shape is ranked.
101 /// - A ranked or unranked shape with the dimension specification matching those
115 : elementType(elementType), attr(nullptr), ranked(false) {} in ShapedTypeComponents()
117 ranked = shapedType.hasRank(); in ShapedTypeComponents()
119 if (ranked) in ShapedTypeComponents()
123 ranked in ShapedTypeComponents()
[all...]
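
The ShapedTypeComponents hits above describe a small value class used during shape inference: an optional shape (present only when ranked), an element type, and an attribute. A minimal sketch of populating inference results from an operand, assuming the constructor overloads suggested by the snippets (shape plus element type for the ranked case, element type only otherwise); the helper function itself is illustrative:

    #include "mlir/IR/BuiltinTypes.h"
    #include "mlir/IR/Value.h"
    #include "mlir/Interfaces/InferTypeOpInterface.h"

    using namespace mlir;

    // Record inferred components for one operand, or fail if it is not shaped.
    static LogicalResult
    inferLikeOperand(Value operand, SmallVectorImpl<ShapedTypeComponents> &out) {
      auto shaped = dyn_cast<ShapedType>(operand.getType());
      if (!shaped)
        return failure();
      if (shaped.hasRank())
        out.emplace_back(shaped.getShape(), shaped.getElementType()); // ranked: shape known
      else
        out.emplace_back(shaped.getElementType()); // unranked: element type only
      return success();
    }
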
/llvm-project/mlir/include/mlir/IR/
BuiltinTypeInterfaces.td
68 memref type indicating that this type can be used as element of ranked or
95 If the number of dimensions is known, the shape is "ranked". The sizes of the
106 If a shape is provided, the resulting type is always ranked, even if this
120 Returns if this type is ranked, i.e. it has a known number of dimensions.
125 Returns the shape of this type if it is ranked, otherwise asserts.
148 /// The returned type is ranked, even if this type is unranked.
154 /// is ranked, even if this type is unranked.
162 /// returned type is ranked if and only if this type is ranked. In that
174 /// If this is a ranked type
[all...]
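
These ShapedType interface methods (hasRank, getRank, getShape, getElementType, and the clone variants) are the ones most of the code below queries. A small, self-contained helper showing the usual ranked/unranked split; only the interface calls come from this header, the function itself is illustrative:

    #include <cstdint>
    #include "mlir/IR/BuiltinTypes.h"

    // Returns the static element count, or -1 if the type is unranked or has a
    // dynamic dimension. ShapedType::kDynamic marks a dynamic size in current
    // MLIR (older releases spelled it kDynamicSize).
    static int64_t staticElementCount(mlir::ShapedType type) {
      if (!type.hasRank())
        return -1; // getShape() asserts on unranked types
      int64_t count = 1;
      for (int64_t dim : type.getShape()) {
        if (dim == mlir::ShapedType::kDynamic)
          return -1;
        count *= dim;
      }
      return count;
    }
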
BuiltinTypes.h
62 /// Returns if this type is ranked, i.e. it has a known number of dimensions.
77 /// The returned type is ranked, even if this type is unranked.
81 /// is ranked, even if this type is unranked.
98 /// This class provides a shared interface for ranked and unranked memref types.
109 /// Returns if this type is ranked, i.e. it has a known number of dimensions.
124 /// The returned type is ranked, even if this type is unranked.
128 /// is ranked, even if this type is unranked.
CommonTypeConstraints.td
407 // Whether a shaped type is ranked.
691 // Ranked tensor type whose element type is from the given `allowedTypes` list,
694 string summary = "ranked tensor">
728 "non-0-ranked.tensor">;
736 "non-0-ranked or unranked tensor", "::mlir::TensorType">;
738 // Ranked tensor type with one of the specified types and ranks.
767 // Any ranked memref whose element type is from the given `allowedTypes` list.
774 "non-0-ranked." # MemRefOf<allowedTypes>.summary,
780 // Any memref (ranked or unranked) whose element type is from the given
786 string summary = "ranked o
[all...]
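
The constraints in this TableGen file expand to ordinary C++ predicates. Roughly what a "non-0-ranked tensor" of f32 amounts to at the C++ level; the helper name is made up for illustration, the real predicate is generated from the tensor/memref constraint classes in this file:

    #include "mlir/IR/BuiltinTypes.h"

    // Approximate C++ equivalent of a "non-0-ranked tensor of f32" constraint:
    // the type must be a ranked tensor with rank >= 1 and f32 elements.
    static bool isNon0RankedF32Tensor(mlir::Type type) {
      auto tensorTy = mlir::dyn_cast<mlir::RankedTensorType>(type);
      return tensorTy && tensorTy.getRank() != 0 &&
             tensorTy.getElementType().isF32();
    }
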
/llvm-project/mlir/lib/Bindings/Python/
IRInterfaces.cpp
320 : shape(std::move(shape)), elementType(elementType), ranked(true) {}
324 ranked(true) {} in PyShapedTypeComponents()
328 attribute(other.attribute), ranked(other.ranked) {}
350 "Create a ranked shaped type components object.") in bind()
358 "Create a ranked shaped type components object with attribute.") in bind()
361 [](PyShapedTypeComponents &self) -> bool { return self.ranked; }, in bind()
362 "Returns whether the given shaped type component is ranked.") in bind()
366 if (!self.ranked) { in bind()
371 "Returns the rank of the given ranked shape in bind()
398 bool ranked{false}; member in mlir::python::PyShapedTypeComponents
[all...]
IRTypes.cpp
524 "Returns whether the given shaped type is ranked."); in get()
531 "Returns the rank of the given ranked shaped type."); in get()
554 "Returns the dim-th dimension of the given ranked shaped type."); in bindDerived()
582 "Returns the shape of the ranked shaped type as a list of integers.");
675 /// Ranked Tensor Type subclass - RankedTensorType.
700 nb::arg("loc").none() = nb::none(), "Create a ranked tensor type"); in bindDerived()
737 /// Ranked MemRef Type subclass - MemRefType. in bindDerived()
/llvm-project/mlir/docs/Traits/
Broadcastable.md
67 Given the shapes of two ranked input operands, the result's shape is inferred by equalizing input r…
85 …ranked operand pair, and updating the inferred shape with each additional ranked operand. If the o…
90 # Filter ranked operands
106 … If, on the contrary, both the result and at least one input operand are ranked, verification cont…
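
The Broadcastable doc above describes the inference rule in prose: equalize the input ranks, then fold the shapes dimension by dimension. A rough standalone sketch of that rule, using plain vectors with -1 standing in for a dynamic size; this approximates the documented behavior and skips the trait's error reporting:

    #include <cstdint>
    #include <vector>

    // Left-pad the shorter shape with 1s, then merge dimension by dimension.
    static std::vector<int64_t> broadcastShapes(std::vector<int64_t> a,
                                                std::vector<int64_t> b) {
      if (a.size() < b.size())
        a.insert(a.begin(), b.size() - a.size(), 1);
      else if (b.size() < a.size())
        b.insert(b.begin(), a.size() - b.size(), 1);
      std::vector<int64_t> result(a.size());
      for (size_t i = 0; i < a.size(); ++i) {
        if (a[i] == b[i])
          result[i] = a[i];      // equal sizes (including both dynamic)
        else if (a[i] == 1)
          result[i] = b[i];      // 1 broadcasts to the other size
        else if (b[i] == 1)
          result[i] = a[i];
        else
          result[i] = -1;        // dynamic or mismatched; the verifier decides later
      }
      return result;
    }

Applied pairwise, starting from the first ranked operand pair and folding in each additional ranked operand, this matches the iteration order sketched at line 85 above.
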
/llvm-project/flang/include/flang/Lower/
VectorSubscripts.h
46 /// - an ExtendedValue for ranked base (x%a(i,j)%b)
48 /// scalar subscripts of the ranked array reference (1:foo():1, vector, k)
51 /// path at the right of the ranked array ref (%c%d(m)%e).
128 /// Lowered base of the ranked array ref.
132 /// Scalar subscripts and components at the right of the ranked
/llvm-project/mlir/test/lib/Dialect/Tosa/
TosaTestPasses.cpp
46 // skip if input is not ranked tensor type in matchAndRewrite()
50 // skip if it's not ranked tensor type. in matchAndRewrite()
117 // skip if input is not ranked tensor type in matchAndRewrite()
124 // skip if wt is not ranked tensor type in matchAndRewrite()
128 // skip if it's not ranked tensor type. in matchAndRewrite()
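
The "skip if ... not ranked tensor type" checks above are the standard early-out at the top of a rewrite pattern. A sketch of that guard in isolation; the operation and value arguments are placeholders rather than actual TOSA accessors:

    #include "mlir/IR/BuiltinTypes.h"
    #include "mlir/IR/PatternMatch.h"

    // Bail out of a pattern when a value is not a ranked tensor.
    static mlir::LogicalResult
    requireRankedTensor(mlir::PatternRewriter &rewriter, mlir::Operation *op,
                        mlir::Value value) {
      if (!mlir::isa<mlir::RankedTensorType>(value.getType()))
        return rewriter.notifyMatchFailure(op, "not a ranked tensor type");
      return mlir::success();
    }
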
/llvm-project/mlir/include/mlir/Dialect/Bufferization/IR/
Bufferization.h
44 /// Try to cast the given ranked MemRef-typed value to the given ranked MemRef
48 /// E.g., when casting from a ranked MemRef type with dynamic layout to a ranked
/llvm-project/mlir/docs/
TargetLLVMIR.md
98 #### Ranked MemRef Types
100 Ranked memref types are converted into an LLVM dialect literal structure type
161 2. a type-erased pointer (`!llvm.ptr`) to a ranked memref descriptor with
165 library functions. The pointer to the ranked memref descriptor points to some
186 - the structs corresponding to `memref` types, both ranked and unranked,
235 // For nD ranked memref descriptors:
363 #### Default Calling Convention for Ranked MemRef
367 [defined above](#ranked-memref-types) before unbundling them into
443 type-erased (`!llvm.ptr`) pointer to the ranked memref descriptor. Note that
446 the ranked memref
[all...]
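
The hits above (lines 100 and 235) describe the ranked memref descriptor that the LLVM conversion produces: an allocated pointer, an aligned pointer, an offset, and per-dimension sizes and strides. A C++ mirror of that nD layout; the struct name is illustrative, and MLIR ships an equivalent helper as StridedMemRefType in CRunnerUtils.h:

    #include <cstdint>

    template <typename T, int Rank>
    struct RankedMemRefDescriptor {
      T *allocated;          // pointer returned by the allocation function
      T *aligned;            // aligned pointer used for element access
      int64_t offset;        // offset, in elements, from the aligned pointer
      int64_t sizes[Rank];   // size of each dimension
      int64_t strides[Rank]; // stride, in elements, of each dimension
    };

    // Address of element (i0, ..., i_{Rank-1}):
    //   aligned + offset + i0*strides[0] + ... + i_{Rank-1}*strides[Rank-1]
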
/llvm-project/mlir/include/mlir/Dialect/Tensor/IR/
Tensor.h
62 /// Returns true if `target` is a ranked tensor type that preserves static
63 /// information available in the `source` ranked tensor type.
74 /// 1. source and result are ranked tensors with same element type and rank.
94 /// 1. source and result are ranked tensors with same element type and rank.
150 /// Tests if types are the same when ignoring encoding on ranked tensors.
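
The first two hits in Tensor.h describe a query: does a target ranked tensor type keep every piece of static shape information present in the source type? A rough restatement of that condition, written directly against the ShapedType API rather than the helper declared in this header:

    #include "mlir/IR/BuiltinTypes.h"

    // True if `target` has the same element type and rank as `source` and no
    // statically known source dimension becomes dynamic or changes size.
    // Rough restatement for illustration, not the declared helper itself.
    static bool keepsStaticInfo(mlir::RankedTensorType source,
                                mlir::RankedTensorType target) {
      if (source.getElementType() != target.getElementType() ||
          source.getRank() != target.getRank())
        return false;
      for (int64_t i = 0, rank = source.getRank(); i < rank; ++i) {
        if (source.isDynamicDim(i))
          continue; // nothing static to preserve in this dimension
        if (target.isDynamicDim(i) ||
            target.getDimSize(i) != source.getDimSize(i))
          return false;
      }
      return true;
    }
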
/llvm-project/mlir/lib/Dialect/Tosa/Transforms/
TosaMakeBroadcastable.cpp
47 return rewriter.notifyMatchFailure(loc, "input not a ranked tensor"); in reshapeLowerToHigher()
63 // Verify the rank agrees with the output type if the output type is ranked. in reshapeLowerToHigher()
70 loc, "the reshaped type doesn't agrees with the ranked output type"); in reshapeLowerToHigher()
179 return rewriter.notifyMatchFailure(tosaOp, "output not a ranked tensor"); in matchAndRewrite()
/llvm-project/mlir/include/mlir-c/
BuiltinTypes.h
283 /// Checks whether the given shaped type is ranked.
286 /// Returns the rank of the given ranked shaped type.
295 /// Returns the dim-th dimension of the given ranked shaped type.
367 // Ranked / Unranked Tensor type.
376 /// Checks whether the given type is a ranked tensor type.
400 /// Gets the 'encoding' attribute from the ranked tensor type, returning a null
414 // Ranked / Unranked MemRef type.
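
The C API entries above compose into a simple shape dump. A short usage sketch, assuming the usual C API names from this header (mlirShapedTypeHasRank, mlirShapedTypeGetRank, mlirShapedTypeGetDimSize); the printing helper itself is just for illustration:

    #include <cstdint>
    #include <cstdio>

    #include "mlir-c/BuiltinTypes.h"

    // Print the shape of a shaped type, or note that it is unranked.
    static void printShape(MlirType type) {
      if (!mlirShapedTypeHasRank(type)) {
        std::printf("unranked\n");
        return;
      }
      int64_t rank = mlirShapedTypeGetRank(type);
      for (intptr_t d = 0; d < rank; ++d)
        std::printf("dim %lld: %lld\n", (long long)d,
                    (long long)mlirShapedTypeGetDimSize(type, d));
    }
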
/llvm-project/mlir/include/mlir/Dialect/Mesh/Transforms/
Passes.td
39 A fully annotated IR requires that all ranked tensor operands, results and
43 `ShardingInterface` interface or all their ranked tensor operands and
/llvm-project/mlir/include/mlir/ExecutionEngine/
CRunnerUtils.h
263 /// Iterate over all elements in a 0-ranked strided memref.
284 // There are no indices for a 0-ranked memref, but this API is provided for
302 /// Pointer to the single element in the zero-ranked memref.
372 assert(rank > 0 && "can't make a subscript of a zero ranked array");
383 // order to access the underlying value in case of zero-ranked memref.
385 assert(rank == 0 && "not a zero-ranked memRef");
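
For the zero-ranked case referenced above there are no sizes or strides at all: the descriptor is just the two pointers plus an offset, and exactly one element. A tiny sketch of that layout; the name here is illustrative, the runner utilities' own type being the StridedMemRefType<T, 0> specialization:

    #include <cstdint>

    template <typename T>
    struct ZeroRankedMemRef {
      T *allocated;
      T *aligned;
      int64_t offset;
      // The single element lives at aligned + offset; there is nothing to subscript.
      T &value() { return aligned[offset]; }
    };
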
/llvm-project/mlir/include/mlir/Dialect/Mesh/Interfaces/
ShardingInterfaceImpl.h
43 // Inserts a clone of the operation that has all ranked tensor
53 // All ranked tensor argument and result dimensions have
/llvm-project/mlir/python/mlir/runtime/
np_to_memref.py
118 """Returns a ranked memref descriptor for the given numpy array."""
174 """Converts ranked memrefs to numpy arrays."""
/llvm-project/mlir/include/mlir/Dialect/Mesh/IR/
MeshOps.h
197 // If ranked tensor type return its sharded counterpart.
199 // If not ranked tensor type return `type`.
/llvm-project/mlir/include/mlir/Dialect/MemRef/IR/
MemRef.h
53 /// Return an unranked/ranked tensor type for the given unranked/ranked memref
/llvm-project/flang/docs/
AssumedRank.md
266 When the actual argument is ranked, the copy-in/copy-out can be performed on
267 the ranked actual argument where the dynamic type has been aligned with the
295 The difference with the ranked case is that more care is needed to create the
318 1. Actual does not have a descriptor (and is therefore ranked)
320 3. Actual has a ranked descriptor that cannot be forwarded for the dummy
336 For the third case, a new ranked descriptor with the dummy attribute/lower
608 descriptor temporaries is higher for assumed-ranked, it is discussed here.
/llvm-project/mlir/include/mlir/Conversion/LLVMCommon/
MemRefBuilder.h
10 // of LLVM dialect structure type that correspond to ranked or unranked memref.
170 /// Builds IR extracting ranked memref descriptor ptr
172 /// Builds IR setting ranked memref descriptor ptr
/llvm-project/flang/test/Lower/
shape-of-elemental-with-optional-arg.f90
18 ! The PRINT statement must be lowered into a ranked print:
/llvm-project/clang-tools-extra/include-cleaner/lib/
FindHeaders.cpp
43 llvm::SmallVector<Header> ranked(llvm::SmallVector<Hinted<Header>> Headers) { in ranked()
277 // are already ranked in the stdlib mapping. in headersForSymbol()
291 return ranked(std::move(Headers));
/llvm-project/mlir/test/mlir-cpu-runner/
unranked-memref.mlir
