Lines Matching full:are
6 concern for others who are more concerned about the "here and now" - why does it
20 MLIR's relationship to XLA, Eigen, etc, are out of scope for this particular
34 graph rewriting, and others - which are independent of the representational
40 TensorFlow has numerous subsystems (some of which are proprietary, e.g.
47 going on: they are both particular data structures and encodings (e.g. HLO
55 that describe the set of operations that are legal and supported for a given
56 application. This means that the actual translations between data structures are
57 kept as simple as possible - and are thus relatively easy to make "correct".
73 high, and often very specific to that subsystem. That said, there are several
74 subsystems that are about to get rewritten or substantially revised anyway, so
76 in these cases and what it will take. The subsystems discussed are:
85 are no known plans to do that work at this point, so we don't discuss it
109 immediate term - we have two implementations of the same functionality, we are
111 good sense that we are building towards an improved future that will make
119 other programming language. Important properties of this format are that it is
148 There are many aspects of this in MLIR, but we'll focus on compiler
149 transformations since they are the easiest to understand. Compiler
150 transformations are modeled as subclasses of the `Pass` C++ class, which are
167 The "CHECK" comments are interpreted by the
181 reliable over time, since they are testing exactly what they are supposed to.
182 End to end integration tests are still super useful for some things of course!
225 "garbage in, garbage out": if the input locations to MLIR are imprecise, then
263 Declarative pattern rules are preferable to imperative C++ code for a number of
264 reasons: they are more compact, easier to reason about, can have checkers
272 One of the challenging things about working with TensorFlow is that there are
283 situations and upgrade existing TensorFlow graphs to semantics that are easier
284 to reason about. The solutions to these problems are all still being debated,
287 using futures/async semantics etc. None of these particular battles are critical
289 decisions of any given dialect are up for it to decide), but each one that works
292 effort moves beyond TF Lite / TOCO support. The discussions that are happening
293 now are super valuable and making progress.
307 TensorFlow has made a lot of progress in this area over the years, and there are
308 lots of ideas about further improvements in the future, we are happy that MLIR
310 transformations) today, and are committed to pushing hard to make it better.
318 programs. There are other reasons to believe that the MLIR implementations of
324 of various subsystems are available, we will see what happens in practice: there
333 We've heard that at least some people are concerned that MLIR is a "big"
334 dependency to take on, and could result in large code size. Here are some key
344 they are a TF-Lite system or a client that is completely unrelated to
363 backwards compatible way (even if there are some historical warts!) is critical.
365 In *addition* to the isomorphic mapping, we are actively working on efforts to
373 These discussions occasionally cause confusion because there are several issues
376 * What are the current semantics of TensorFlow graphs, and what invariants can
384 * How should MLIR represent async vs sync operations, what invariants are
391 we are trying hard to level-up the representation (taking advantage of the