//===- SparseTensorBase.td - Sparse tensor dialect base ----*- tablegen -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//

#ifndef SPARSETENSOR_BASE
#define SPARSETENSOR_BASE

include "mlir/IR/OpBase.td"

def SparseTensor_Dialect : Dialect {
  let name = "sparse_tensor";
  let cppNamespace = "::mlir::sparse_tensor";
  let description = [{
    The `SparseTensor` dialect supports all the attributes, types,
    operations, and passes that are required to make sparse tensor
    types first class citizens within the MLIR compiler infrastructure.
    The dialect forms a bridge between high-level operations on sparse
    tensor types and lower-level operations on the actual sparse storage
    schemes consisting of positions, coordinates, and values. Lower-level
    support may consist of fully generated code or may be provided by
    means of a small sparse runtime support library.

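    As a minimal sketch, assuming the dimension-to-level `map` syntax of
    the `#sparse_tensor.encoding` attribute (the `#CSR` alias and the
    `@noop` function below are purely illustrative), a CSR-like matrix
    type is obtained by attaching an encoding to an ordinary ranked
    tensor type:

    ```mlir
    // CSR-like storage: dense rows; the column level is compressed and
    // stored through positions, coordinates, and values.
    #CSR = #sparse_tensor.encoding<{
      map = (d0, d1) -> (d0 : dense, d1 : compressed)
    }>

    // The encoding only annotates the tensor type; operations keep
    // working on the tensor as a whole.
    func.func @noop(%arg0: tensor<32x64xf64, #CSR>)
        -> tensor<32x64xf64, #CSR> {
      return %arg0 : tensor<32x64xf64, #CSR>
    }
    ```
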
    The concept of **treating sparsity as a property, not a tedious
    implementation detail**, by letting a **sparsifier** generate
    sparse code automatically was pioneered for linear algebra by [Bik96]
    in MT1 (see https://www.aartbik.com/sparse.php) and formalized
    for tensor algebra by [Kjolstad17,Kjolstad20] in the Sparse Tensor
    Algebra Compiler (TACO) project (see http://tensor-compiler.org).
    Note that we prefer the term "sparsifier" over the also commonly
    used "sparse compiler" for such a pass, to make it clear that the
    sparsifier is not a separate compiler, but an integral part of any
    compiler pipeline built with the MLIR compiler infrastructure.

    The MLIR implementation [Biketal22] closely follows the "sparse
    iteration theory" that forms the foundation of TACO.  A rewriting
    rule is applied to each tensor expression in the Linalg dialect
    (MLIR's tensor index notation) where the sparsity of tensors is
    indicated using the per-level level-types (e.g., dense, compressed,
    singleton) together with a specification of the order on the levels
    (see [Chou18] for an in-depth discussion and possible extensions
    to these level-types).  Subsequently, a topologically sorted
    iteration graph, reflecting the required order on coordinates with
    respect to the levels of each tensor, is constructed to ensure
    that all tensors are visited in natural level-coordinate order.
    Next, iteration lattices are constructed for the tensor expression
    for every index in topological order.  Each iteration lattice point
    consists of a conjunction of tensor coordinates together with a tensor
    (sub)expression that needs to be evaluated for that conjunction.
    Within the lattice, iteration points are ordered according to
    the way coordinates are exhausted.  As such, these iteration
    lattices drive actual sparse code generation, which consists of
    a relatively straightforward one-to-one mapping from iteration
    lattices to combinations of for-loops, while-loops, and if-statements.
    Sparse tensor outputs that materialize uninitialized are handled with
    direct insertions if all parallel loops are outermost, or with
    insertions that go indirectly through a 1-dimensional access pattern
    expansion (a.k.a. workspace) where feasible [Gustavson72,Bik96,Kjolstad19].
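
    A minimal sketch of the input to this rewriting, with purely
    illustrative `#CSR`, `#spmv_trait`, and `@spmv` names: a Linalg
    tensor expression computing y(i) += A(i,j) * x(j) over a sparse
    matrix A, which the sparsifier lowers to loops that visit only the
    stored elements of A.

    ```mlir
    #CSR = #sparse_tensor.encoding<{
      map = (d0, d1) -> (d0 : dense, d1 : compressed)
    }>

    #spmv_trait = {
      indexing_maps = [
        affine_map<(i, j) -> (i, j)>,  // A
        affine_map<(i, j) -> (j)>,     // x
        affine_map<(i, j) -> (i)>      // y (out)
      ],
      iterator_types = ["parallel", "reduction"]
    }

    // y(i) += A(i, j) * x(j), with A stored in the CSR-like format above.
    func.func @spmv(%A: tensor<32x64xf64, #CSR>,
                    %x: tensor<64xf64>,
                    %y: tensor<32xf64>) -> tensor<32xf64> {
      %0 = linalg.generic #spmv_trait
          ins(%A, %x : tensor<32x64xf64, #CSR>, tensor<64xf64>)
          outs(%y : tensor<32xf64>) {
        ^bb0(%a: f64, %b: f64, %c: f64):
          %p = arith.mulf %a, %b : f64
          %s = arith.addf %c, %p : f64
          linalg.yield %s : f64
      } -> tensor<32xf64>
      return %0 : tensor<32xf64>
    }
    ```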

    * [Bik96] Aart J.C. Bik. Compiler Support for Sparse Matrix Computations.
    PhD thesis, Leiden University, May 1996.
    * [Biketal22] Aart J.C. Bik, Penporn Koanantakool, Tatiana Shpeisman,
    Nicolas Vasilache, Bixia Zheng, and Fredrik Kjolstad. Compiler Support
    for Sparse Tensor Computations in MLIR. ACM Transactions on Architecture
    and Code Optimization, June 2022. See: https://dl.acm.org/doi/10.1145/3544559
    * [Chou18] Stephen Chou, Fredrik Berg Kjolstad, and Saman Amarasinghe.
    Format Abstraction for Sparse Tensor Algebra Compilers. Proceedings of
    the ACM on Programming Languages, October 2018.
    * [Chou20] Stephen Chou, Fredrik Berg Kjolstad, and Saman Amarasinghe.
    Automatic Generation of Efficient Sparse Tensor Format Conversion Routines.
    Proceedings of the 41st ACM SIGPLAN Conference on Programming Language
    Design and Implementation, June 2020.
    * [Gustavson72] Fred G. Gustavson. Some basic techniques for solving
    sparse systems of linear equations. In Sparse Matrices and Their
    Applications, pages 41–52. Plenum Press, New York, 1972.
    * [Kjolstad17] Fredrik Berg Kjolstad, Shoaib Ashraf Kamil, Stephen Chou,
    David Lugato, and Saman Amarasinghe. The Tensor Algebra Compiler.
    Proceedings of the ACM on Programming Languages, October 2017.
    * [Kjolstad19] Fredrik Berg Kjolstad, Peter Ahrens, Shoaib Ashraf Kamil,
    and Saman Amarasinghe. Tensor Algebra Compilation with Workspaces.
    Proceedings of the IEEE/ACM International Symposium on Code Generation
    and Optimization, 2019.
    * [Kjolstad20] Fredrik Berg Kjolstad. Sparse Tensor Algebra Compilation.
    PhD thesis, MIT, February 2020.
  }];

  let useDefaultAttributePrinterParser = 1;
  let useDefaultTypePrinterParser = 1;
  let hasConstantMaterializer = 1;
}

#endif // SPARSETENSOR_BASE