Lines Matching full:that

10 This document explains that adoption of MLIR to solve graph-based problems
13 some points of confusion that keep coming up.
15 One note: even though a major advantage of MLIR is that it can span the full
19 integrating state-of-the-art polyhedral techniques), but issues that touch on
23 This document uses TensorFlow as the example given that it is the focus of our
24 immediate work, but we believe that the same viewpoint could be useful for
25 people working in the context of other ML frameworks that may consider adopting
44 MLIR infrastructure, but that isn't a focus of this doc.))
46 A key observation that MLIR makes is that these subsystems often have two things
55 that describe the set of operations that are legal and supported for a given
56 application. This means that the actual translations between data structures are
73 high, and often very specific to that subsystem. That said, there are several
74 subsystems that are about to get rewritten or substantially revised anyway, so
75 we use those as examples to concretely describe the benefits that MLIR provides
83 1. Grappler is another subsystem that is likely to get substantial revisions in
85 are no known plans to do that work at this point, so we don't discuss it
95 that we can start by putting in a no-op translation to MLIR and back into the
96 pipeline, and verify that nothing breaks. Then we can work on replacing the
98 algorithms that we're planning).
100 This is a development plan; we wouldn't actually ship a TensorFlow that just
111 good sense that we are building towards an improved future that will make
113 of the benefits that MLIR provides, in no particular order:
118 well as [a specification](../LangRef.md) for that format - built just like any
119 other programming language. Important properties of this format are that it is
123 If you haven't worked with a system that works this way, it is hard to overstate
124 how big of a deal this is in practice: it means that you can call `foo->dump()` on
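As an illustration of the round-trippable textual form the fragment describes, here is a minimal sketch (the function name, op, and types are made up for illustration; the notation follows MLIR's standard textual syntax):

```mlir
// A tiny function in MLIR's textual form: what `foo->dump()` prints is
// re-parseable by the same parser the tools use. Names are illustrative.
func @double(%arg0: tensor<4xf32>) -> tensor<4xf32> {
  %0 = "tf.AddV2"(%arg0, %arg0) : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
  return %0 : tensor<4xf32>
}
```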
131 and implementation for a "verifier" which checks that the IR is well formed. The
132 MLIR verifier is a simple framework that makes it easy to provide a single
136 A verifier pass is sort of like a 'super assertion' that catches mistakes in
143 has simple checks for existing TensorFlow operations, there is a lot that should
153 transformations, for example, this is a simple test that shows "x-x" is being
171 that the output matches the CHECK lines. See the `test/Transforms` directory for
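A test of the shape the fragment describes looks roughly like the following sketch (the exact function name, pass spelling, and constant naming are assumptions, but the RUN/CHECK structure is standard FileCheck usage):

```mlir
// RUN: mlir-opt %s -canonicalize | FileCheck %s

// CHECK-LABEL: func @test_subi_zero
func @test_subi_zero(%arg0: i32) -> i32 {
  // "x - x" should fold away to the constant zero.
  // CHECK-NEXT: %c0_i32 = constant 0 : i32
  // CHECK-NEXT: return %c0_i32
  %y = subi %arg0, %arg0 : i32
  return %y : i32
}
```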
179 allowing us to put in place a culture that expects every behavior changing
190 experience (and fixing mistakes in LLVM), MLIR requires that operations and
191 functions carry abstract location information, that transformations propagate
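Concretely, locations appear as trailing attributes in the textual form, so they are visible in dumps and testable; a sketch (the file name and coordinates are made up):

```mlir
// Every operation carries a location; transformations must propagate or
// merge it rather than silently drop it. Values here are illustrative.
%0 = "tf.Identity"(%arg0) : (tensor<f32>) -> tensor<f32> loc("model.py":42:10)
```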
203 that it is easy to write good tests for this: the testing tools for MLIR capture
205 check that they match the expected diagnostics in the testcase. For example, to
207 simple pass that checks dependencies and emits them as "notes", allowing him to
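Such diagnostic tests are written in-line against the expected messages; a hedged sketch of the style (the exact diagnostic text is an assumption):

```mlir
// RUN: mlir-opt %s -split-input-file -verify-diagnostics

// The testing tools capture diagnostics produced by the compiler and
// match them against in-line expectations like the one below.
func @undefined_block() {
  // expected-error@+1 {{reference to an undefined block}}
  br ^other
}
```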
224 Note that a major limitation of this is that MLIR suffers from a problem of
226 there is nothing that it can do to recover them. There is work underway in
265 written against them, and new tools can be built that inspect and manipulate the
272 One of the challenging things about working with TensorFlow is that there are
273 many invariants and behaviors that need to be preserved and known about when
276 nodes that execute even when passed a dead value, multiple device program
277 representation, and so on, all add complexities that can make it challenging to
283 situations and upgrade existing TensorFlow graphs to semantics that are easier
289 decisions of any given dialect are up to it to decide), but each one that works
292 effort moves beyond TF Lite / TOCO support. The discussions that are happening
297 A minor-in-theory but important-in-practice point is that MLIR is designed to
304 in the graph that is used by the executor, but isn't necessary for program
308 lots of ideas about further improvements in the future, we are happy that MLIR
318 programs. There are other reasons to believe that the MLIR implementations of
323 That said, this is very much a theory at this point. When the new implementation
333 We've heard that at least some people are concerned that MLIR is a "big"
339 1. Like LLVM, MLIR is designed as a set of libraries that clients can link in
343 system. Clients that don't care about XLA don't link in that code, whether
344 they are a TF-Lite system or a client that is completely unrelated to
349 …[memory efficient data structures that the STL does not](http://llvm.org/docs/ProgrammersManual.ht…
351 own subproject in LLVM that the LLVM IR project depends on. This would be
358 MLIR provides a dialect that is an isomorphic 1-1 mapping between TensorFlow
360 only known gap is that a few TF_DataType enums aren't handled yet). MLIR is a
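For example, a single GraphDef node corresponds to a single op in the `tf` dialect; a sketch of what that looks like (attributes elided, types illustrative):

```mlir
// One TensorFlow node <-> one "tf" dialect op; the isomorphic mapping
// preserves op names, attributes, and types (details elided here).
%sum = "tf.AddV2"(%a, %b) : (tensor<8x16xf32>, tensor<8x16xf32>) -> tensor<8x16xf32>
```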
386 * When is it safe and beneficial to perform optimizations that might reduce
390 providing exactly the same abstractions that TensorFlow always has. That said,
397 It is important to point out things that MLIR does not aim to do. For example,
401 Another non-goal is that MLIR currently doesn't support a stable binary