
Searched refs:tensors (Results 1 – 25 of 128) sorted by relevance


/llvm-project/mlir/test/Interfaces/DestinationStyleOpInterface/
verify-destination-style-op-interface.mlir
11 …+1 {{op expected the number of tensor results (0) to be equal to the number of output tensors (1)}}
26 …+1 {{op expected the number of tensor results (0) to be equal to the number of output tensors (1)}}
34 …+1 {{op expected the number of tensor results (1) to be equal to the number of output tensors (0)}}
49 …+1 {{op expected the number of tensor results (1) to be equal to the number of output tensors (0)}}
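These diagnostics all exercise the same destination-passing-style invariant: an op implementing DestinationStyleOpInterface must produce exactly one tensor result per tensor init ("output") operand. A minimal sketch in generic MLIR syntax (the op name `test.dps_op` is hypothetical, used only for illustration):

```mlir
// Hypothetical destination-style op: one tensor init operand, so the
// verifier expects exactly one tensor result.
%ok = "test.dps_op"(%in, %init)
    : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>

// Dropping the tensor result while keeping the tensor init would trip
// the verifier: "expected the number of tensor results (0) to be equal
// to the number of output tensors (1)".
"test.dps_op"(%in, %init) : (tensor<4xf32>, tensor<4xf32>) -> ()
```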
/llvm-project/mlir/include/mlir/Dialect/SparseTensor/Transforms/
Passes.td
15 let summary = "Add [dis]assemble operations on external sparse tensors";
17 Unlike dense tensors, MLIR does **not** provide a direct `_mlir_ciface_`
18 ABI for passing sparse tensors as arguments from and to external methods
19 (within MLIR-generated methods, sparse tensors can be freely passed
24 to obtain a stable `_mlir_ciface_` API for passing sparse tensors
27 The pass converts public entry methods that use sparse tensors as
29 that [dis]assemble the individual tensors that constitute the actual
30 storage used externally into MLIR sparse tensors. This pass can be used
33 sparse tensors as numpy arrays from and to Python. Note that eventual
38 sparse tensors
[all...]
/llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/Utils/
LoopEmitter.h
29 // SparseTensorLoopEmitter class, manages sparse tensors and helps to
30 // generate loop structure to (co)-iterate sparse tensors.
57 /// Optional callback function to setup dense output tensors when
68 // subscript expressions on sparse tensors.
82 /// Takes an array of input tensors, which the generated loops will
89 initialize(ValueRange tensors, StringAttr loopTag = nullptr,
95 ValueRange tensors, StringAttr loopTag = nullptr, bool hasOutput = false,
101 /// for iterating over the tensors.
138 // Still need a way to specify the lvl for non-annotated tensors though,
141 /// Emits a co-iteration loop over a set of tensors
392 std::vector<Value> tensors; global() variable
[all...]
LoopEmitter.cpp
116 LoopEmitter::LoopEmitter(ValueRange tensors, StringAttr loopTag, bool hasOutput, in LoopEmitter() argument
120 initialize(tensors, loopTag, hasOutput, isSparseOut, numLoops, dimGetter); in LoopEmitter()
136 // tensors array (len == numManifestTensor). in initialize()
137 this->tensors.assign(ts.begin(), ts.end()); in initialize()
165 const Value t = tensors[tid]; in initialize()
166 // a scalar or 0-dimension tensor in initialize()
204 Value tensor = tensors[t]; in makeLevelIterator()
240 const Value tensor = tryFoldTensors(tensors[t]); in initializeLoopEmit()
251 // input tensors. Sparse inputs use sparse primitives to obtain the values. in initializeLoopEmit()
256 // Non-annotated dense tensors in initializeLoopEmit()
[all...]
CodegenEnv.cpp
77 SmallVector<Value> tensors; // input tensors passed to loop emitter in startEmit() local
79 tensors.push_back(t.get()); in startEmit()
89 tensors, in startEmit()
/llvm-project/mlir/include/mlir/Dialect/Tosa/IR/
TosaTypesBase.td
83 // For weight tensors from tosa::Conv2DOp, tosa::Conv3DOp,
142 // We include unranked tensors as a supported type for all possible tosa
143 // Tensors as unranked does not guarantee invalid. If unranked tensors exist
145 // to not include any remaining unranked tensors.
154 // Ranked tensors up to given rank.
/llvm-project/mlir/test/Integration/Dialect/SparseTensor/CPU/
reshape_dot.mlir
51 // Note: tensor.collapse_shape is a metadata-only operation on dense tensors
52 // but requires reallocation on sparse tensors.
67 // Note: tensor.collapse_shape is a metadata-only operation on dense tensors
68 // but requires reallocation on sparse tensors.
95 // Note: tensor.collapse_shape is a metadata-only operation on dense tensors
96 // but requires reallocation on sparse tensors.
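The note repeated across these tests can be made concrete: on a dense tensor, `tensor.collapse_shape` only rewrites the type (metadata), while a sparse encoding forces the lowering to rebuild the underlying storage. A sketch (`#CSR` and `#SV` are assumed `sparse_tensor.encoding` aliases, not taken from the test):

```mlir
// Dense: metadata-only reshape, no data movement.
%d = tensor.collapse_shape %a [[0, 1]]
    : tensor<2x3xf32> into tensor<6xf32>

// Sparse (assumed #CSR input, #SV result encoding): the same op must
// reallocate and repopulate the sparse storage.
%s = tensor.collapse_shape %b [[0, 1]]
    : tensor<2x3xf32, #CSR> into tensor<6xf32, #SV>
```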
sparse_conversion_sparse2sparse.mlir
92 // Convert dense tensor directly to various sparse tensors.
110 // Check round-trip equality. And release dense tensors.
147 // Convert dense tensor directly to various sparse tensors.
165 // Check round-trip equality. And release dense tensors.
sparse_conversion_element.mlir
73 // Convert dense tensor directly to various sparse tensors.
94 // Check round-trip equality. And release dense tensors.
102 // Release sparse tensors.
sparse_conversion_sparse2dense.mlir
59 // Integration test that tests conversions from sparse to dense tensors.
125 // Convert dense tensor directly to various sparse tensors.
184 // Check round-trip equality. And release dense tensors.
217 // Release sparse tensors.
/llvm-project/mlir/docs/Dialects/
TOSA.md
94 tensors to construct the quantization attributes that sit within the operator.
96 the tensors are no longer necessary for code generation.
98 This enables the tensors to be subsequently interpreted simply as contiguous
105 type information within the tensors; this leaves the choice of how to handle
/llvm-project/mlir/docs/Dialects/Linalg/
OpDSL.md
93 tensors. While scalars are inputs only, a tensor may be marked as an output.
110 `ScalarDef`, which specifies the type of the scalar operand. The tensors are
151 at the end of the parameter list after the output tensors.
156 and output tensors. Certain operations need shape-only tensors that are not
159 iteration space of the reduction. As shape-only tensors have no uses, the
320 operands are either scalars or rank zero tensors that are accessed using the
323 `fill` with arbitrary ranked output tensors:
/llvm-project/mlir/test/Dialect/SCF/
one-shot-bufferize-tensor-copy-insertion.mlir
37 // Yield tensors in different order.
61 // Yield tensors in different order.
92 // Yield tensors in different order.
/llvm-project/mlir/test/Integration/data/
test.tns
6 # see http://frostt.io/tensors/file-formats.html
mttkrp_b.tns
6 # see http://frostt.io/tensors/file-formats.html
/llvm-project/mlir/include/mlir/Dialect/SparseTensor/IR/
SparseTensorBase.td
22 tensors types and lower-level operations on the actual sparse storage
42 (MLIR's tensor index notation) where the sparsity of tensors is
49 that all tensors are visited in natural level-coordinate order.
SparseTensorAttrDefs.td
119 An attribute to encode information on sparsity properties of tensors, inspired
120 by the TACO formalization of sparse tensors. This encoding is eventually used
124 loops operate on sparse storage formats rather than tensors with a sparsity
205 is useful for binary-valued sparse tensors whose values can either
451 /// `LevelType::Dense` for the null encoding, since dense-tensors
464 /// the null encoding (since dense-tensors are always all-dense).
468 /// the null encoding (since dense-tensors are always all-ordered).
496 /// Also returns true for the null encoding (since dense-tensors
501 /// Also returns true for the null encoding (since dense-tensors
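The encoding attribute these excerpts describe attaches a level format to each dimension of a tensor type; the null (absent) encoding is the all-dense, all-ordered default mentioned above. A minimal CSR sketch using the standard `sparse_tensor` map syntax:

```mlir
// CSR: dense outer level, compressed inner level.
#CSR = #sparse_tensor.encoding<{
  map = (i, j) -> (i : dense, j : compressed)
}>

// A tensor type carrying the encoding: tensor<8x8xf64, #CSR>.
// The same type without an encoding, tensor<8x8xf64>, gets the
// null encoding: all-dense and all-ordered.
```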
/llvm-project/mlir/test/Dialect/SparseTensor/
pack_copy.mlir
36 // Pack the buffers into a sparse tensor.
76 // Pack the buffers into a sparse tensor.
unsparsifiable_dense_op.mlir
23 // operands are loaded from dense tensors.
61 // operands are loaded from sparse tensors.
dense.mlir
4 // Test to demonstrate the difference between non-annotated dense tensors
5 // and all-dense-annotated "sparse" tensors. The former class remains as
6 // two-dimensional tensors that are bufferized by subsequent passes. The
/llvm-project/mlir/docs/Tutorials/Toy/
Ch-1.md
11 Given that we want to keep things simple, the codegen will be limited to tensors
25 # variables is the way to reshape tensors (element count must match).
37 tensors, but we don't know their dimensions). They are specialized for every
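The reshape-by-declaration these excerpts mention can be illustrated with a short Toy snippet (assumed for illustration, not copied from the chapter):

```toy
# Shape <2, 3> is inferred from the literal.
var a = [[1, 2, 3], [4, 5, 6]];
# Declaring an explicit shape reshapes the value;
# element counts must match (6 == 2 * 3).
var b<6> = a;
```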
/llvm-project/mlir/include/mlir/Dialect/Tosa/Transforms/
Passes.td
20 let summary = "Fold layerwise operations on constant tensors";
22 Pass that enables folding of full-layer operations on constant tensors.
/llvm-project/mlir/test/Dialect/Arith/
one-shot-bufferize-memory-space-invalid.mlir
4 // Selecting tensors with different memory spaces. Such IR cannot be
/llvm-project/mlir/include/mlir/Dialect/MLProgram/Transforms/
Passes.td
19 tensors to not be re-read when the value is already known in IR.
/llvm-project/mlir/include/mlir/Dialect/Linalg/
Passes.td
19 This pass only converts ops that operate on ranked tensors. It can be
62 let summary = "Remove unit-extent dimension in Linalg ops on tensors";
74 let summary = "Fuse elementwise operations on tensors";
