
Searched full:tensors (Results 1 – 25 of 248) sorted by relevance


/llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/Utils/
LoopEmitter.h
29 // SparseTensorLoopEmitter class, manages sparse tensors and helps to
30 // generate loop structure to (co)-iterate sparse tensors.
57 /// Optional callback function to setup dense output tensors when
68 // subscript expressions on sparse tensors.
82 /// Takes an array of input tensors, which the generated loops will
89 initialize(ValueRange tensors, StringAttr loopTag = nullptr,
95 ValueRange tensors, StringAttr loopTag = nullptr, bool hasOutput = false,
101 /// for iterating over the tensors.
138 // Still need a way to specify the lvl for non-annotated tensors though,
141 /// Emits a co-iteration loop over a set of tensors
392 std::vector<Value> tensors; global() variable
[all...]
LoopEmitter.cpp
116 LoopEmitter::LoopEmitter(ValueRange tensors, StringAttr loopTag, bool hasOutput, in LoopEmitter() argument
120 initialize(tensors, loopTag, hasOutput, isSparseOut, numLoops, dimGetter); in LoopEmitter()
136 // tensors array (len == numManifestTensor). in initialize()
137 this->tensors.assign(ts.begin(), ts.end()); in initialize()
165 const Value t = tensors[tid]; in initialize()
166 // a scalar or 0-dimension tensor in initialize()
204 Value tensor = tensors[t]; in makeLevelIterator()
240 const Value tensor = tryFoldTensors(tensors[t]); in initializeLoopEmit()
251 // input tensors. Sparse inputs use sparse primitives to obtain the values. in initializeLoopEmit()
256 // Non-annotated dense tensors in initializeLoopEmit()
[all...]
/llvm-project/mlir/include/mlir/Dialect/SparseTensor/Transforms/
Passes.td
15 let summary = "Add [dis]assemble operations on external sparse tensors";
17 Unlike dense tensors, MLIR does **not** provide a direct `_mlir_ciface_`
18 ABI for passing sparse tensors as arguments from and to external methods
19 (within MLIR-generated methods, sparse tensors can be freely passed
24 to obtain a stable `_mlir_ciface_` API for passing sparse tensors
27 The pass converts public entry methods that use sparse tensors as
29 that [dis]assemble the individual tensors that constitute the actual
30 storage used externally into MLIR sparse tensors. This pass can be used
33 sparse tensors as numpy arrays from and to Python. Note that eventual
38 sparse tensors
[all...]
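The [dis]assemble mechanism described above can be sketched generically: an external caller hands over the constituent arrays of, say, a CSR tensor, and the entry method reassembles them into one logical tensor. The helper below is a hypothetical illustration of that idea, not the pass itself:

```python
def assemble_csr(positions, coordinates, values, shape):
    """Rebuild a dense matrix from its CSR constituents (illustration only).

    positions[i]..positions[i+1] delimits the nonzeros of row i;
    coordinates[k] is the column of the k-th nonzero; values[k] its value.
    """
    rows, cols = shape
    dense = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for k in range(positions[i], positions[i + 1]):
            dense[i][coordinates[k]] = values[k]
    return dense

# 2x3 matrix with nonzeros at (0, 1) = 5 and (1, 2) = 7.
dense = assemble_csr([0, 1, 2], [1, 2], [5, 7], (2, 3))
assert dense == [[0, 5, 0], [0, 0, 7]]
```

Disassembly is the inverse direction: the individual positions/coordinates/values arrays are handed back to the external runtime (e.g. as numpy arrays).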
/llvm-project/llvm/include/llvm/Analysis/Utils/
TFUtils.h
26 /// for input tensors. The user is responsible for correctly dimensioning the
27 /// input tensors and setting their values before calling evaluate().
30 /// - initialize the input tensors using initInput. Indices must correspond to
34 /// setting internal scalars, for all dimensions (tensors are row-major:
36 /// - call evaluate. The input tensors' values are not consumed after this, and
45 /// tensors, which means that their values need to be used before
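The row-major convention mentioned in the snippet above can be sketched in a few lines. This is a generic illustration of dimensioning a flat buffer and setting its scalars, not the actual TFUtils API; `row_major_index` is a hypothetical helper:

```python
def row_major_index(i, j, cols):
    """Offset of element (i, j) in a flat row-major buffer of width `cols`."""
    return i * cols + j

# Dimension a 2x3 "input tensor" as a flat buffer, then set each scalar,
# as the evaluator comment above prescribes.
rows, cols = 2, 3
buffer = [0.0] * (rows * cols)
for i in range(rows):
    for j in range(cols):
        buffer[row_major_index(i, j, cols)] = float(10 * i + j)

assert buffer == [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]
```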
/llvm-project/mlir/include/mlir/Dialect/SparseTensor/IR/
SparseTensorType.h
40 /// this means that dense-tensors should always return the same answers
41 /// as sparse-tensors with a default encoding. But it additionally means
178 /// Returns true for tensors which have an encoding, and false for
179 /// those which do not. Therefore tensors with an all-dense encoding
183 /// Returns true for tensors where every level is dense.
184 /// (This is always true for dense-tensors.)
187 /// Returns true for tensors where every level is ordered.
188 /// (This is always true for dense-tensors.)
198 /// (This is always true for dense-tensors.)
202 /// (This is always true for dense-tensors
[all...]
SparseTensorBase.td
22 tensors types and lower-level operations on the actual sparse storage
42 (MLIR's tensor index notation) where the sparsity of tensors is
49 that all tensors are visited in natural level-coordinate order.
SparseTensorAttrDefs.td
119 An attribute to encode information on sparsity properties of tensors, inspired
120 by the TACO formalization of sparse tensors. This encoding is eventually used
124 loops operate on sparse storage formats rather than tensors with a sparsity
205 is useful for binary-valued sparse tensors whose values can either
451 /// `LevelType::Dense` for the null encoding, since dense-tensors
464 /// the null encoding (since dense-tensors are always all-dense).
468 /// the null encoding (since dense-tensors are always all-ordered).
496 /// Also returns true for the null encoding (since dense-tensors
501 /// Also returns true for the null encoding (since dense-tensors
/llvm-project/mlir/docs/Dialects/
TOSA.md
91 ### Quantization Parameters in Ops vs Tensors
94 tensors to construct the quantization attributes that sit within the operator.
96 the tensors are no longer necessary for code generation.
98 This enables the tensors to be subsequently interpreted simply as contiguous
105 type information within the tensors; this leaves the choice of how to handle
/llvm-project/mlir/docs/Dialects/Linalg/
OpDSL.md
93 tensors. While scalars are inputs only, a tensor may be marked as an output.
110 `ScalarDef`, which specifies the type of the scalar operand. The tensors are
151 at the end of the parameter list after the output tensors.
153 ## Shape-Only Tensors
156 and output tensors. Certain operations need shape-only tensors that are not
159 iteration space of the reduction. As shape-only tensors have no uses, the
320 operands are either scalars or rank zero tensors that are accessed using the
323 `fill` with arbitrary ranked output tensors:
/llvm-project/mlir/test/Interfaces/DestinationStyleOpInterface/
verify-destination-style-op-interface.mlir
11 …+1 {{op expected the number of tensor results (0) to be equal to the number of output tensors (1)}}
26 …+1 {{op expected the number of tensor results (0) to be equal to the number of output tensors (1)}}
34 …+1 {{op expected the number of tensor results (1) to be equal to the number of output tensors (0)}}
49 …+1 {{op expected the number of tensor results (1) to be equal to the number of output tensors (0)}}
/llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/
SparseAssembler.cpp
26 // Convert type range to new types range, with sparse tensors externalized.
60 // Convert input and output values to [dis]assemble ops for sparse tensors. in convVals()
135 // A rewriting rule that converts public entry methods that use sparse tensors
137 // [dis]assemble the individual tensors that constitute the actual storage used
138 // externally into MLIR sparse tensors before calling the original method.
/llvm-project/mlir/lib/Dialect/Linalg/Transforms/
BufferizableOpInterfaceImpl.cpp
26 /// Generic conversion for any DestinationStyleOpInterface on tensors.
39 // Ensure op has only tensors. Allow mixed tensor-buffer mode on a per-need in bufferizeDestinationStyleOpInterface()
128 // All index maps of tensors must be identity maps. in bufferizesToElementwiseAccess()
134 // Non-tensors do not participate in bufferization, so they can be in bufferizesToElementwiseAccess()
/llvm-project/mlir/include/mlir/Dialect/Tosa/IR/
TosaTypesBase.td
83 // For weight tensors from tosa::Conv2DOp, tosa::Conv3DOp,
142 // We include unranked tensors as a supported type for all possible tosa
143 // Tensors as unranked does not guarantee invalid. If unranked tensors exist
145 // to not include any remaining unranked tensors.
154 // Ranked tensors up to given rank.
/llvm-project/mlir/test/Integration/Dialect/SparseTensor/CPU/
reshape_dot.mlir
51 // Note: tensor.collapse_shape is a metadata-only operation on dense tensors
52 // but requires reallocation on sparse tensors.
67 // Note: tensor.collapse_shape is a metadata-only operation on dense tensors
68 // but requires reallocation on sparse tensors.
95 // Note: tensor.collapse_shape is a metadata-only operation on dense tensors
96 // but requires reallocation on sparse tensors.
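The note repeated in the snippets above (collapse_shape is metadata-only on dense tensors but requires reallocation on sparse ones) can be illustrated with a hypothetical sketch: collapsing a contiguous dense buffer only changes the index map, while a coordinate-list sparse tensor must rewrite every stored coordinate:

```python
def collapse_dense_index(i, j, cols):
    # Dense, row-major: collapsing (i, j) -> k is pure index arithmetic;
    # the underlying buffer is untouched (metadata-only).
    return i * cols + j

def collapse_sparse_coo(coords, cols):
    # Sparse (COO): every stored coordinate pair must be rewritten and
    # re-sorted, i.e. the storage is effectively reallocated.
    return sorted(i * cols + j for (i, j) in coords)

# 2x3 sparse tensor with nonzeros at (0, 2) and (1, 0).
assert collapse_sparse_coo([(0, 2), (1, 0)], 3) == [2, 3]
assert collapse_dense_index(1, 0, 3) == 3
```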
sparse_conversion_sparse2sparse.mlir
92 // Convert dense tensor directly to various sparse tensors.
110 // Check round-trip equality. And release dense tensors.
147 // Convert dense tensor directly to various sparse tensors.
165 // Check round-trip equality. And release dense tensors.
sparse_conversion_element.mlir
73 // Convert dense tensor directly to various sparse tensors.
94 // Check round-trip equality. And release dense tensors.
102 // Release sparse tensors.
/llvm-project/mlir/docs/Tutorials/transform/
Ch0.md
154 ## Generic Operation on Tensors
156 …bility of unranked tensors, tensor layouts, and vectors being usable as elemental types of tensors
158 The `linalg.generic` operation from above can be lifted to operate on tensors instead of buffers:
236 …the operations do not match. Given a high-level structured operation on tensors, such as `linalg.…
305 This process may result in some elements in the operand tensors being (re)computed on every iterati…
/llvm-project/mlir/lib/ExecutionEngine/
SparseTensorRuntime.cpp
10 // manipulating sparse tensors from MLIR. More specifically, it provides
14 // on sparse tensors. However, the provided functionality is **not**
31 // (2) Formidable Repository of Open Sparse Tensors and Tools (FROSTT): *.tns
32 // http://frostt.io/tensors/file-formats.html
37 // tensors. These methods should be used exclusively by MLIR
41 // tensors. These methods can be used by any external runtime that wants
110 // with sparse tensors (which are only visible as opaque pointers externally).
452 // with sparse tensors (which are only visible as opaque pointers externally). in MLIR_SPARSETENSOR_FOREVERY_V()
/llvm-project/mlir/include/mlir/Conversion/TensorToSPIRV/
TensorToSPIRV.h
24 /// Note: Normally tensors will be stored in buffers before converting to
26 /// However, SPIR-V supports converting from tensors directly too. This is
/llvm-project/mlir/unittests/Dialect/SparseTensor/
MergerTest.cpp
128 tensors.reserve(numTensors); in MergerTestBase()
130 tensors.push_back(merger.addTensorExp(tid(t))); in MergerTestBase()
140 assert(t < tensors.size()); in tensor()
141 return tensors[t]; in tensor()
211 /// - Leaf tensors are equal if they refer to the same tensor.
305 SmallVector<ExprId> tensors; member in __anon93a878f70111::MergerTestBase
312 /// Three tensors (two inputs, one output); and a single loop.
330 /// Four tensors (three inputs, one output); and a single loop.
354 /// Three tensors (two inputs, one output); and a single loop.
376 /// Three tensors (three inputs, one output); and a single loop.
[all …]
/llvm-project/mlir/lib/Interfaces/
DestinationStyleOpInterface.cpp
44 // Verify the number of tensor results matches the number of output tensors. in verifyDestinationStyleOpInterface()
48 << ") to be equal to the number of output tensors (" in verifyDestinationStyleOpInterface()
/llvm-project/mlir/docs/Tutorials/Toy/
Ch-1.md
11 Given that we want to keep things simple, the codegen will be limited to tensors
25 # variables is the way to reshape tensors (element count must match).
37 tensors, but we don't know their dimensions). They are specialized for every
/llvm-project/mlir/include/mlir/Dialect/Linalg/
Passes.td
19 This pass only converts ops that operate on ranked tensors. It can be
62 let summary = "Remove unit-extent dimension in Linalg ops on tensors";
74 let summary = "Fuse elementwise operations on tensors";
/llvm-project/mlir/include/mlir/ExecutionEngine/
SparseTensorRuntime.h
32 // with sparse tensors (which are only visible as opaque pointers externally).
39 /// tensors into the computation. The types of the `ptr` argument and
147 // with sparse tensors (which are only visible as opaque pointers externally).
/llvm-project/llvm/lib/Analysis/
TFLiteUtils.cpp
80 /// The input tensors. We set up the tensors once and just mutate their
81 /// scalars before each evaluation. The input tensors keep their value after
