Revision tags: llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0 |
|
#
11efd1cb |
| 14-Mar-2023 |
Kazu Hirata <kazu@google.com> |
[Analysis] Use *{Set,Map}::contains (NFC)
|
Revision tags: llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2 |
|
#
1b80ccba |
| 06-Feb-2023 |
Mircea Trofin <mtrofin@google.com> |
[mlgo][regalloc] Handle training case when no regalloc happens.
There's an early-exit case for regalloc where we don't even get a chance to ask for an advisor (priority or eviction) and switch the context. Then, when we want to log the reward for that function (i.e. the one with the early-exit case), we hit the error case where the function's name doesn't match the last-seen context.
There are a few possible fixes; one would be to just switch context when outputting the reward, which would be correct. This patch opts for the alternative where we check whether any logging happened in the first place - just to re-validate that no function would have been regalloc-ed without first logging its reward.
Differential Revision: https://reviews.llvm.org/D143359
|
#
35aa7374 |
| 01-Feb-2023 |
Mircea Trofin <mtrofin@google.com> |
[mlgo] Allow logging the spec for the "advice", if needed
This is for the interactive model runner, so it can confirm the tensor spec of the advice with its host.
|
Revision tags: llvmorg-16.0.0-rc1 |
|
#
5b8dc7c8 |
| 26-Jan-2023 |
Mircea Trofin <mtrofin@google.com> |
[mlgo] Introduce an "InteractiveModelRunner"
This is a model runner for ML researchers using environments like CompilerGym. In such environments, researchers host the compiler and want to be able to observe the problem space (features) at each decision step of some optimization pass, at which point the compiler is stopped, waiting for the host to make a decision and provide advice back to the compiler, which then continues its normal operation, and so on.
The InteractiveModelRunner supports this scenario for the feature set exposed by the compiler at a given time. It uses 2 files - ideally FIFO pipes - one to pass data to the host, the other to get advice back from the host. This means this scenario is supported with no special dependencies. The file creation and deletion is the responsibility of the host. Hooking up this model evaluator to an MLGO-ed pass is the responsibility of the pass author; subsequent patches will do so for the current set of mlgo passes, and offer an API to easily "just opt in" by default when mlgo-ing a new pass.
The data protocol is that of the training logger: the host sees a training log doled out observation by observation by reading from one of the files, and passes back its advice as a serialized tensor (i.e. a tensor value memory dump) via the other file.
There are some differences with respect to the log seen during training: the interactive model doesn't currently include the outcome (because it should be identical to the decision, and it's also not present in "release" mode), and partial rewards aren't currently communicated back.
The assumption - just like with the training logger - is that the host is co-located, thus avoiding any endianness concerns. In a distributed environment, it is up to the hosting infrastructure to intermediate that.
Differential Revision: https://reviews.llvm.org/D142642
|
Revision tags: llvmorg-17-init |
|
#
6d11baf0 |
| 20-Jan-2023 |
Mircea Trofin <mtrofin@google.com> |
[mlgo] Stream the training data
This leverages the new logging format: we don't need to buffer the training data; we can just write it out.
Differential Revision: https://reviews.llvm.org/D142168
|
#
9bd69ae8 |
| 17-Jan-2023 |
Mircea Trofin <mtrofin@google.com> |
[nfc][mlgo] Remove abstraction layers for training logger
This follows from D141720
Differential Revision: https://reviews.llvm.org/D141967
|
#
5898be19 |
| 13-Jan-2023 |
Mircea Trofin <mtrofin@google.com> |
[mlgo] Remove the protobuf dependency
The dependency was due to the log format. This change switches to the previously-introduced (D139370) "dependency-free" logger instead of the protobuf-based one.
A subsequent change will clean out the unnecessary abstraction left behind.
This change drops the logger unittest; we have sufficient test coverage via lit tests, and a unit test would unnecessarily require adding a log reader (the reader is expected to be Python, for the ML side, and there is a reader for that under Analysis/models, used for tests).
Differential Revision: https://reviews.llvm.org/D141720
|
Revision tags: llvmorg-15.0.7 |
|
#
edc83a15 |
| 12-Dec-2022 |
Kazu Hirata <kazu@google.com> |
[mlgo] Use LLVM_HAVE_TFLITE instead of LLVM_HAVE_TF_API in C++ code (NFC)
We use LLVM_HAVE_TFLITE as the key to enable the mlgo work these days, and LLVM_HAVE_TF_API is defined whenever LLVM_HAVE_TFLITE is defined.
I'm posting this patch because it's purely mechanical.
I'll post a follow-up patch to remove LLVM_HAVE_TF_API in non-C++ files; that one will not be as mechanical as this one.
Differential Revision: https://reviews.llvm.org/D139863
|
#
1ee3bb17 |
| 30-Nov-2022 |
Mircea Trofin <mtrofin@google.com> |
[mlgo][nfc] Make `LoggedFeatureSpec` an implementation detail
It's an artifact very specific to using TFAgents during training, so it belongs with ModelUnderTrainingRunner.
Differential Revision: https://reviews.llvm.org/D139031
|
Revision tags: llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, working, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2 |
|
#
0cb9746a |
| 03-Aug-2022 |
Mircea Trofin <mtrofin@google.com> |
[nfc][mlgo] Separate logger and training-mode model evaluator
This just shuffles implementations and declarations around. Now the logger and the TF C API-based model evaluator are separate.
Differential Revision: https://reviews.llvm.org/D131116
|