History log of /llvm-project/llvm/unittests/Analysis/TFUtilsTest.cpp (Results 1 – 25 of 25)
Revision    Date    Author    Comments
Revision tags: llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7
# 1ee3bb17 30-Nov-2022 Mircea Trofin <mtrofin@google.com>

[mlgo][nfc] Make `LoggedFeatureSpec` an implementation detail

It's an artifact very specific to using TFAgents during training, so it
belongs with ModelUnderTrainingRunner.

Differential Revision: https://reviews.llvm.org/D139031


Revision tags: llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, working, llvmorg-15.0.2, llvmorg-15.0.1
# 1cd45630 19-Sep-2022 Kazu Hirata <kazu@google.com>

[llvm] Use has_value instead of hasValue (NFC)


# ec83c7e3 07-Sep-2022 Aiden Grossman <agrossman154@yahoo.com>

[MLGO] Make TFLiteUtils throw an error if some features haven't been passed to the model

In the TensorFlow C library utilities, an error gets thrown if some features
haven't been passed into the model (due to differences in ordering,
which no longer exist with the transition to TFLite). However, this is
not currently the case when using TFLiteUtils. This patch makes some
minor changes to throw an error when not all inputs of the model have
been passed, which, if left unhandled, would result in a segfault within
TFLite.

Reviewed By: mtrofin

Differential Revision: https://reviews.llvm.org/D133451


Revision tags: llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2
# 5ce4c9aa 06-Aug-2022 Mircea Trofin <mtrofin@google.com>

[mlgo] Use TFLite for 'development' mode.

TFLite is a lightweight, statically linkable[1] model evaluator, supporting a
subset of what the full tensorflow library does, sufficient for the
types of scenarios we envision having. It is also faster.

We still use saved models as "source of truth" - 'release' mode's AOT
starts from a saved model; and the ML training side operates in terms of
saved models.

Using TFLite solves the following problems compared to using the full TF
C API:

- a compiler-friendly implementation for runtime-loadable (as opposed
to AOT-embedded) models: it's statically linked; it can be built via
cmake;
- solves an issue we had when building the compiler with both AOT and
full TF C API support, whereby, due to a packaging issue on the TF
side, we needed to have the pip package and the TF C API library at
the same version. We have no such constraints now.

The main liability is it supporting a subset of what the full TF
framework does. We do not expect that to cause an issue, but should that
be the case, we can always revert back to using the full framework
(after also figuring out a way to address the problems that motivated
the move to TFLite).

Details:

This change switches the development mode to TFLite. Models are still
expected to be placed in a directory - i.e. the parameters to clang
don't change; what changes is the directory content: we still need
an `output_spec.json` file; but instead of the saved_model protobuf and
the `variables` directory, we now just have one file, `model.tflite`.

The change includes a utility showing how to take a saved model and
convert it to TFLite, which it uses for testing.

The full TF implementation can still be built (not side-by-side). We
intend to remove it shortly, after patching downstream dependencies. The
build behavior, however, prioritizes TFLite - i.e. trying to enable both
full TF C API and TFLite will just pick TFLite.

[1] thanks to @petrhosek's changes to TFLite's cmake support and its deps!


# 0cb9746a 03-Aug-2022 Mircea Trofin <mtrofin@google.com>

[nfc][mlgo] Separate logger and training-mode model evaluator

This just shuffles implementations and declarations around. Now the
logger and the TF C API-based model evaluator are separate.

Differential Revision: https://reviews.llvm.org/D131116


Revision tags: llvmorg-15.0.0-rc1, llvmorg-16-init, llvmorg-14.0.6, llvmorg-14.0.5, llvmorg-14.0.4, llvmorg-14.0.3
# c35ad9ee 27-Apr-2022 Mircea Trofin <mtrofin@google.com>

[mlgo] Support exposing more features than those supported by models

This allows the compiler to support more features than those supported by a
model. The only requirement (development mode only) is that the new
features must be appended at the end of the list of features requested
from the model. The support is transparent to compiler code: for
unsupported features, we provide a valid buffer to copy their values;
it's just that this buffer is disconnected from the model, so insofar
as the model is concerned (AOT or development mode), these features don't
exist. The buffers are allocated at setup - meaning, at steady state,
there is no extra allocation (maintaining the current invariant). These
buffers have two roles: first, to keep the compiler code simple; second, to
allow logging their values in development mode. The latter allows retraining
a model supporting the larger feature set starting from traces produced
with the old model.

For release mode (AOT-ed models), this decouples compiler evolution from
model evolution, which we want in scenarios where the toolchain is
frequently rebuilt and redeployed: we can first deploy the new features,
and continue working with the older model, until a new model is made
available, which can then be picked up the next time the compiler is built.

Differential Revision: https://reviews.llvm.org/D124565


Revision tags: llvmorg-14.0.2
# b1fa5ac3 25-Apr-2022 Mircea Trofin <mtrofin@google.com>

[mlgo] Factor out TensorSpec

This is a simple datatype with a few JSON utilities, and is independent
of the underlying executor. The main motivation is to allow taking a
dependency on it on the AOT side, and allow us to build a correctly-sized
buffer in the cases when the requested feature isn't supported by the
model. This, in turn, allows us to grow the feature set supported by the
compiler in a backward-compatible way; and also collect traces exposing
the new features, but starting from the older model, and continue
training from those new traces.

Differential Revision: https://reviews.llvm.org/D124417
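
For illustration, a minimal sketch of the "correctly-sized buffer" use case described above, assuming the TensorSpec API this test file exercises (createSpec, getElementCount, getElementByteSize) and the llvm/Analysis/TensorSpec.h header introduced by this factoring; the feature name and shape are made up:

    #include "llvm/Analysis/TensorSpec.h"
    #include <vector>

    // Size a buffer purely from a TensorSpec, so the compiler can keep a
    // correctly-sized staging area even for features the model does not consume.
    std::vector<char> makeBufferFor(const llvm::TensorSpec &Spec) {
      // Total bytes = element count * size of one element.
      return std::vector<char>(Spec.getElementCount() * Spec.getElementByteSize());
    }

    void bufferExample() {
      // "hypothetical_feature" and the 2x3 shape are illustrative only.
      auto Spec =
          llvm::TensorSpec::createSpec<int64_t>("hypothetical_feature", {2, 3});
      auto Buffer = makeBufferFor(Spec); // 6 elements * 8 bytes = 48 bytes
      (void)Buffer;
    }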


Revision tags: llvmorg-14.0.1, llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3, llvmorg-14.0.0-rc2, llvmorg-14.0.0-rc1, llvmorg-15-init, llvmorg-13.0.1, llvmorg-13.0.1-rc3, llvmorg-13.0.1-rc2
# 1f5dceb1 11-Jan-2022 Mircea Trofin <mtrofin@google.com>

[MLGO] Add support for multiple training traces per module

This happens in e.g. regalloc, where we trace decisions per function,
but wouldn't want to spew N log files (i.e. one per function). So we
output a key-value association, where the key is an ID for the
sub-module object, and the value is the tensorflow::SequenceExample.

The current relation with protobuf is tenuous, so we're avoiding a
custom message type in favor of using the `Struct` message, but that
requires the values be wire-able strings, hence base64 encoding.

We plan on resolving the protobuf situation shortly, and improving the
encoding of such logs, but this is sufficient for now for setting up
regalloc training.

Differential Revision: https://reviews.llvm.org/D116985
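
A conceptual sketch of the encoding scheme described above, not the logger's actual interface: each sub-module key maps to the base64 encoding of an already-serialized tensorflow::SequenceExample, so the values stay wire-able as Struct string fields. llvm::encodeBase64 from llvm/Support/Base64.h is real; the map type and function name here are illustrative:

    #include "llvm/Support/Base64.h"
    #include <map>
    #include <string>

    // Keys identify sub-module objects; values are serialized protobufs
    // (the serialization step itself is not shown).
    std::map<std::string, std::string>
    encodeTraces(const std::map<std::string, std::string> &SerializedExamples) {
      std::map<std::string, std::string> Encoded;
      for (const auto &KV : SerializedExamples)
        Encoded[KV.first] = llvm::encodeBase64(KV.second); // wire-able string
      return Encoded;
    }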


# b7f298f1 12-Jan-2022 Mircea Trofin <mtrofin@google.com>

[NFC][MLGO] Use ASSERT_TRUE in TFUtilsTest, where appropriate.


Revision tags: llvmorg-13.0.1-rc1, llvmorg-13.0.0, llvmorg-13.0.0-rc4, llvmorg-13.0.0-rc3, llvmorg-13.0.0-rc2
# ae1a2a09 05-Aug-2021 Mircea Trofin <mtrofin@google.com>

[NFC][MLGO] Make logging more robust

1) add some self-diagnosis (when asserts are enabled) to check that all
features have the same number of entries

2) avoid storing pointers to mutable fields because the proto API
contract doesn't actually guarantee those stay fixed even if no further
mutation of the object occurs.

Differential Revision: https://reviews.llvm.org/D107594


Revision tags: llvmorg-13.0.0-rc1, llvmorg-14-init
# 55e12f70 22-Jul-2021 Mircea Trofin <mtrofin@google.com>

[NFC][MLGO] Just use the underlying protobuf object for logging

Avoid buffering just to copy the buffered data, in 'development
mode', when logging. Instead, just populate the underlying protobuf.

Differential Revision: https://reviews.llvm.org/D106592


# 55e2d206 14-Jul-2021 Mircea Trofin <mtrofin@google.com>

[MLGO] Use binary protobufs for improved training performance.

It turns out that during training, the time required to parse the
textual protobuf of a training log is about the same as the time it
takes to compile the module generating that log. Using binary protobufs
instead elides that cost almost completely.

Differential Revision: https://reviews.llvm.org/D106157


Revision tags: llvmorg-12.0.1, llvmorg-12.0.1-rc4, llvmorg-12.0.1-rc3, llvmorg-12.0.1-rc2, llvmorg-12.0.1-rc1, llvmorg-12.0.0, llvmorg-12.0.0-rc5, llvmorg-12.0.0-rc4, llvmorg-12.0.0-rc3, llvmorg-12.0.0-rc2, llvmorg-11.1.0, llvmorg-11.1.0-rc3, llvmorg-12.0.0-rc1, llvmorg-13-init, llvmorg-11.1.0-rc2, llvmorg-11.1.0-rc1, llvmorg-11.0.1, llvmorg-11.0.1-rc2, llvmorg-11.0.1-rc1
# b51e844f 19-Nov-2020 Mircea Trofin <mtrofin@google.com>

[NFC][TFUtils] Extract out the output spec loader

It's generic for the 'development mode', not specific to the inliner
case.

Differential Revision: https://reviews.llvm.org/D91751


# d454328e 17-Oct-2020 Mircea Trofin <mtrofin@google.com>

[ML] Add final reward logging facility.

Allow logging final rewards. A final reward is logged only once, and is
serialized as all-zero values, except for the last one.

Differential Revision: https://reviews.llvm.org/D89626


# 57d3e9cd 17-Oct-2020 Mircea Trofin <mtrofin@google.com>

[NFC][ML] Avoid source of some signed/unsigned warnings in TFUtilsTest


Revision tags: llvmorg-11.0.0, llvmorg-11.0.0-rc6
# 36bb1fb1 03-Oct-2020 Mircea Trofin <mtrofin@google.com>

[MLInliner] Factor out logging

Factored out the logging facility, to allow its reuse outside the
inliner.

Differential Revision: https://reviews.llvm.org/D88770


Revision tags: llvmorg-11.0.0-rc5, llvmorg-11.0.0-rc4, llvmorg-11.0.0-rc3
# 7cfcecec 25-Aug-2020 Mircea Trofin <mtrofin@google.com>

[MLInliner] Simplify TFUTILS_SUPPORTED_TYPES

We only need the C++ type and the corresponding TF Enum. The other
parameter was used for the output spec json file, but we can just
standardize on the C++ type name there.

Differential Revision: https://reviews.llvm.org/D86549
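
For illustration, a sketch of the simplified X-macro shape described above; the macro name comes from the commit title, but the entry list below is an assumed subset rather than the exact contents of TFUtils.h, and the expansion shown is just one way such a macro gets consumed:

    // Each entry now carries only the C++ type and the TF enum name.
    #define TFUTILS_SUPPORTED_TYPES(M)                                         \
      M(float, TF_FLOAT)                                                       \
      M(double, TF_DOUBLE)                                                     \
      M(int32_t, TF_INT32)                                                     \
      M(int64_t, TF_INT64)

    // Example expansion: stringize both arguments per entry and rely on
    // literal concatenation to build a readable "type -> enum" table.
    #define TFUTILS_TYPE_TABLE_ENTRY(T, E) #T " -> " #E "\n"
    constexpr const char *SupportedTypeTable =
        TFUTILS_SUPPORTED_TYPES(TFUTILS_TYPE_TABLE_ENTRY);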


Revision tags: llvmorg-11.0.0-rc2
# ca7973cf 06-Aug-2020 Mircea Trofin <mtrofin@google.com>

[NFC][MLInliner] Point out the tests' model dependencies


# b18c41c6 05-Aug-2020 Mircea Trofin <mtrofin@google.com>

[TFUtils] Expose untyped accessor to evaluation result tensors

These were implementation details, but became necessary for generic data
copying.

Also added const variations to them, and move assignment, since we had a
move ctor (and the move assignment helps in a subsequent patch).

Differential Revision: https://reviews.llvm.org/D85262
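
A minimal sketch of the generic data copying this accessor enables, assuming a TF-enabled build and the TFModelEvaluator::EvaluationResult and TensorSpec APIs referenced above (getUntypedTensorValue, getElementCount, getElementByteSize); obtaining a valid EvaluationResult requires a loaded model and is not shown:

    #include "llvm/Analysis/Utils/TFUtils.h"
    #include <cstring>
    #include <vector>

    // Copy an output tensor without knowing its element type - exactly the
    // kind of generic copying the untyped accessor is meant for.
    std::vector<char>
    copyRawOutput(const llvm::TFModelEvaluator::EvaluationResult &ER,
                  const llvm::TensorSpec &Spec, size_t OutputIndex) {
      std::vector<char> Raw(Spec.getElementCount() * Spec.getElementByteSize());
      std::memcpy(Raw.data(), ER.getUntypedTensorValue(OutputIndex), Raw.size());
      return Raw;
    }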


# 90b9c49c 04-Aug-2020 Mircea Trofin <mtrofin@google.com>

[llvm] Expose type and element count-related APIs on TensorSpec

Added a mechanism to check the element type, get the total element
count, and the size of an element.

Differential Revision: https://reviews.llvm.org/D85250


# 4b1b109c 30-Jul-2020 Mircea Trofin <mtrofin@google.com>

[llvm] Add a parser from JSON to TensorSpec

A JSON->TensorSpec utility we will use subsequently to specify
additional outputs needed for certain training scenarios.

Differential Revision: https://reviews.llvm.org/D84976
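
A small sketch of the parsing path this utility adds, mirroring the shape of the JSONParsing test in this file; the field values are illustrative, and the header used is TFUtils.h as it was at the time of this commit (TensorSpec later moved to its own header):

    #include "llvm/Analysis/Utils/TFUtils.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/Support/Error.h"
    #include "llvm/Support/JSON.h"
    #include <cstdint>

    // Parse a JSON tensor description into a TensorSpec and sanity-check it.
    bool parseSpecExample() {
      llvm::LLVMContext Ctx;
      auto Value = llvm::json::parse(
          R"({"name": "tensor_name", "port": 2, "type": "int32_t", "shape": [1, 4]})");
      if (!Value) {
        llvm::consumeError(Value.takeError()); // malformed JSON
        return false;
      }
      auto Spec = llvm::getTensorSpecFromJSON(Ctx, *Value);
      return Spec && Spec->isElementType<int32_t>();
    }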


# 71059257 29-Jul-2020 Mircea Trofin <mtrofin@google.com>

[llvm][NFC] TensorSpec abstraction for ML evaluator

Further abstracting the specification of a tensor, to more easily
support different types and shapes of tensor, and also to perform
initialization up-front, at TFModelEvaluator construction time.

Differential Revision: https://reviews.llvm.org/D84685


Revision tags: llvmorg-11.0.0-rc1, llvmorg-12-init
# 4f763b21 15-Jul-2020 Mircea Trofin <mtrofin@google.com>

[llvm][NFC] Hide the tensorflow dependency from headers.

Summary:
This change avoids exposing tensorflow types when including TFUtils.h.
They are just an implementation detail, and don't need to be used
directly when implementing an analysis requiring ML model evaluation.

The TFUtils APIs, while generically typed, are still not exposed unless
the tensorflow C library is present, as they currently have no use
otherwise.

Reviewers: mehdi_amini, davidxl

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D83843


# caf395ee 13-Jul-2020 Mircea Trofin <mtrofin@google.com>

Reapply "[llvm] Native size estimator for training -Oz inliner"

This reverts commit 9908a3b9f521c954cbf6adcec35b14b2f6c8da49.

The fix was to exclude the content of TFUtils.h (automatically
included

Reapply "[llvm] Native size estimator for training -Oz inliner"

This reverts commit 9908a3b9f521c954cbf6adcec35b14b2f6c8da49.

The fix was to exclude the content of TFUtils.h (automatically
included in the LLVM_Analysis module, when LLVM_ENABLE_MODULES is enabled).

Differential Revision: https://reviews.llvm.org/D82817


Revision tags: llvmorg-10.0.1, llvmorg-10.0.1-rc4, llvmorg-10.0.1-rc3
# 83080a29 29-Jun-2020 Mircea Trofin <mtrofin@google.com>

[llvm] Native size estimator for training -Oz inliner

Summary:
This is an experimental ML-based native size estimator, necessary for
computing partial rewards during -Oz inliner policy training. Data
extraction for model training will be provided in a separate patch.

RFC: http://lists.llvm.org/pipermail/llvm-dev/2020-April/140763.html

Reviewers: davidxl, jdoerfert

Subscribers: mgorny, hiraditya, mgrang, arphaman, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D82817
