Revision tags: llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4 |
|
#
89e6a288 |
| 30-Aug-2024 |
Daniil Fukalov <dfukalov@gmail.com> |
[NFC] Add explicit #include llvm-config.h where its macros are used. (#106621)
Without these explicit includes, removing other headers that implicitly
include llvm-config.h may have non-trivial side effects.
|
Revision tags: llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init |
|
#
13be6ee7 |
| 02-Jul-2024 |
Simon Pilgrim <llvm-dev@redking.me.uk> |
Fix MSVC discarded return value warnings. NFC.
"C4858 This function constructs an object wrapped by a smart pointer and has no other effects; it is not useful to call this function and discard the return value."
|
#
3a462d89 |
| 26-Jun-2024 |
Mircea Trofin <mtrofin@google.com> |
[mlgo] drop the prefix `_` in `_model_selector`
`_` upsets the saved model freezer (assumptions about python naming).
|
#
313b1a82 |
| 24-Jun-2024 |
Mircea Trofin <mtrofin@google.com> |
[mlgo] Support composite AOT-ed models (#96276)
This applies to the AOT case where we embed models in the compiler. The
change adds support for multiple models for the same agent, and allows
the user to select one via a command line flag. "agent" refers to e.g. the
inline advisor or the register allocator eviction advisor.
To avoid build setup complexity, the support is delegated to the saved
model. Since saved models define computational graphs, we can generate a
composite model (this happens prior to building and embedding it in LLVM
and is not shown in this change) that exposes an extra feature with a
predefined name: `_model_selector`. The model then delegates
internally to contained models based on that feature value.
Model selection is expected to happen at model instantiation; there is
no current scenario for switching models afterwards.
If the model doesn't expose such a feature but the user passes one, we
report an error.
If the model exposes such a feature but the user doesn't pass one, we
also report an error.
Invalid model selector values are expected to be handled by the saved
model.
Internally, the model uses a pair of uint64 values - the high and low of
the MD5 hash of the name.
A tool composing models would then need to:
- expose the extra feature, `_model_selector`, of shape (2,) and uint64 data
type
- test its value (`tf.cond` or `tf.case` in Tensorflow) against the MD5
hash, in the [high, low] order, of contained models based on a
user-specified name (which the user will then use as the flag value to the
compiler)
Agents just need to add a flag to capture the name of a model and pass
it to `ReleaseModeModelRunner` at construction. This can be passed in
all cases without checking; in the case where the model is not composite,
we pass an empty name and everything works as before.
This change also factors out the string flags we pass to the
`ReleaseModeModelRunner` for better maintainability (we risk confusing
parameters that are strings otherwise).
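The [high, low] MD5 pair described above can be sketched as follows. Note the byte order, and which digest half counts as "high", are assumptions here; the real convention is fixed by the tool composing the models, not by this sketch:

```python
import hashlib
import struct

def model_selector_value(model_name: str):
    """Return a [high, low] uint64 pair from the MD5 hash of a model name.

    Assumption: big-endian interpretation, with the first 8 digest bytes
    taken as 'high'. The actual convention is set by the model-composing
    tool; this only illustrates the shape-(2,) uint64 feature value.
    """
    digest = hashlib.md5(model_name.encode("utf-8")).digest()  # 16 bytes
    high, low = struct.unpack(">QQ", digest)
    return [high, low]
```

A composed model would compare this pair against the stored hashes of its contained models and dispatch accordingly.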
|
Revision tags: llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2 |
|
#
795910c2 |
| 02-Feb-2023 |
Mircea Trofin <mtrofin@google.com> |
Fix Windows bot breakages due to D143110
|
#
83051c5a |
| 01-Feb-2023 |
Mircea Trofin <mtrofin@google.com> |
[mlgo] Make InteractiveModelRunner actually work with named pipes
Turns out raw_fd_stream doesn't work with named pipes, so we just need to lower the abstraction. Updated the unittest accordingly. Because mkfifo's path argument requires a certain naming pattern on Windows (IIUC), we restricted the test to Linux only.
Differential Revision: https://reviews.llvm.org/D143110
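The lower-level named-pipe I/O can be sketched in Python (Linux only, mirroring the test restriction above). The single-int64 payload and the function name are illustrative, not the actual MLGO wire format:

```python
import os
import struct
import tempfile
import threading

def advice_roundtrip(advice: int) -> int:
    """Send one int64 'advice' through a fresh named pipe and read it back.

    A minimal sketch of raw FIFO I/O; the single-int64 payload is an
    illustrative choice, not MLGO's actual protocol. Linux only, since
    os.mkfifo is unavailable (or path-restricted) on Windows.
    """
    pipe_path = os.path.join(tempfile.mkdtemp(), "advice.pipe")
    os.mkfifo(pipe_path)

    def writer():
        # Opening a FIFO for writing blocks until a reader opens it,
        # so the write side runs on its own thread.
        with open(pipe_path, "wb") as f:
            f.write(struct.pack("<q", advice))

    t = threading.Thread(target=writer)
    t.start()
    with open(pipe_path, "rb") as f:
        (value,) = struct.unpack("<q", f.read(8))
    t.join()
    os.unlink(pipe_path)
    return value
```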
|
#
35aa7374 |
| 01-Feb-2023 |
Mircea Trofin <mtrofin@google.com> |
[mlgo] Allow logging the spec for the "advice", if needed
This is for the interactive model runner, so it can confirm the tensor spec of the advice with its host.
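Confirming a spec amounts to exchanging a small structured description of the advice tensor. A hypothetical sketch, with illustrative JSON field names (the real serialization is defined by LLVM's TensorSpec, not by this snippet):

```python
import json

def advice_spec_json(name, shape, dtype):
    """Serialize a hypothetical advice tensor spec for the host to confirm.

    Field names here ('name', 'shape', 'type') are illustrative; the
    actual format is whatever LLVM's TensorSpec serialization emits.
    """
    return json.dumps({"name": name, "shape": list(shape), "type": dtype})
```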
|
Revision tags: llvmorg-16.0.0-rc1 |
|
#
5b8dc7c8 |
| 26-Jan-2023 |
Mircea Trofin <mtrofin@google.com> |
[mlgo] Introduce an "InteractiveModelRunner"
This is a model runner for ML researchers using environments like CompilerGym. In such environments, researchers host the compiler and want to be able to observe the problem space (features) at each decision step of some optimization pass, at which point the compiler is stopped, waiting for the host to make a decision and provide advice back to the compiler, which then continues its normal operation, and so on.
The InteractiveModelRunner supports this scenario for the feature set exposed by the compiler at a given time. It uses 2 files - ideally FIFO pipes - one to pass data to the host, the other to get advice back from the host. This means this scenario is supported with no special dependencies. The file creation and deletion is the responsibility of the host. Hooking up this model evaluator to a MLGO-ed pass is the responsibility of the pass author, and subsequent patches will do so for the current set of mlgo passes, and offer an API to easily "just opt in" by default when mlgo-ing a new pass.
The data protocol is that of the training logger: the host sees a training log doled out observation by observation by reading from one of the files, and passes back its advice as a serialized tensor (i.e. a tensor value memory dump) via the other file.
There are some differences wrt the log seen during training: the interactive model doesn't currently include the outcome (because it should be identical to the decision, and it's also not present in the "release" mode), and partial rewards aren't currently communicated back.
The assumption - just like with the training logger - is that the host is co-located, thus avoiding any endianness concerns. In a distributed environment, it is up to the hosting infrastructure to intermediate that.
Differential Revision: https://reviews.llvm.org/D142642
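The "serialized tensor (tensor value memory dump)" advice format can be illustrated with a raw byte roundtrip. The little-endian float32 element type is an assumption for the example; the co-location assumption above is exactly what makes a bare memory dump like this safe:

```python
import struct

def serialize_advice(values, dtype_fmt="<f"):
    """Serialize a tensor value as a raw row-major memory dump.

    dtype_fmt defaults to little-endian float32 as an illustrative
    assumption; the real element type comes from the advice TensorSpec.
    """
    return b"".join(struct.pack(dtype_fmt, v) for v in values)

def deserialize_advice(buf, dtype_fmt="<f"):
    """Recover the flat value list from a raw memory dump."""
    size = struct.calcsize(dtype_fmt)
    return [struct.unpack(dtype_fmt, buf[i:i + size])[0]
            for i in range(0, len(buf), size)]
```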
|
Revision tags: llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, working, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init, llvmorg-14.0.6, llvmorg-14.0.5, llvmorg-14.0.4 |
|
#
345ed58e |
| 13-May-2022 |
Simon Pilgrim <llvm-dev@redking.me.uk> |
Fix implicit double -> float truncation warnings. NFCI.
|
Revision tags: llvmorg-14.0.3 |
|
#
c35ad9ee |
| 27-Apr-2022 |
Mircea Trofin <mtrofin@google.com> |
[mlgo] Support exposing more features than those supported by models
This allows the compiler to support more features than those supported by a model. The only requirement (development mode only) is that the new features must be appended at the end of the list of features requested from the model. The support is transparent to compiler code: for unsupported features, we provide a valid buffer to copy their values; it's just that this buffer is disconnected from the model, so insofar as the model is concerned (AOT or development mode), these features don't exist. The buffers are allocated at setup - meaning, at steady state, there is no extra allocation (maintaining the current invariant). These buffers have two roles: first, they keep the compiler code simple; second, they allow logging their values in development mode. The latter allows retraining a model supporting the larger feature set starting from traces produced with the old model.
For release mode (AOT-ed models), this decouples compiler evolution from model evolution, which we want in scenarios where the toolchain is frequently rebuilt and redeployed: we can first deploy the new features, and continue working with the older model, until a new model is made available, which can then be picked up the next time the compiler is built.
Differential Revision: https://reviews.llvm.org/D124565
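The disconnected-buffer idea can be sketched as a setup-time allocation: model-known features alias the model's own storage, while extra features get scratch buffers the model never reads. All names here are illustrative, not the actual LLVM API:

```python
def allocate_buffers(model_features, compiler_features):
    """Sketch of setup-time feature buffer allocation.

    model_features: dict of name -> buffer owned by the model.
    compiler_features: list of (name, size_in_bytes) the compiler writes.
    Returns name -> writable buffer. Features the model doesn't know get
    fresh 'disconnected' buffers, so compiler code can write them
    unconditionally; at steady state no further allocation happens.
    """
    buffers = {}
    for name, size in compiler_features:
        if name in model_features:
            buffers[name] = model_features[name]  # aliases model storage
        else:
            buffers[name] = bytearray(size)  # disconnected scratch buffer
    return buffers
```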
|
Revision tags: llvmorg-14.0.2, llvmorg-14.0.1, llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3, llvmorg-14.0.0-rc2, llvmorg-14.0.0-rc1, llvmorg-15-init, llvmorg-13.0.1, llvmorg-13.0.1-rc3, llvmorg-13.0.1-rc2 |
|
#
059e0347 |
| 07-Dec-2021 |
Mircea Trofin <mtrofin@google.com> |
[NFC][mlgo] Generalize model runner interface
This prepares it for the regalloc work. Part of it is making model evaluation across 'development' and 'release' scenarios more reusable. This patch:
- extends support to tensors of any shape (not just scalars, like we had in the inliner -Oz case). While the tensor shape can be anything, we assume row-major layout and expose the tensor as a buffer.
- exposes the NoInferenceModelRunner, which we use in the 'development' mode to keep the evaluation code path consistent and simplify logging, as we'll want to reuse it in the regalloc case.
Differential Revision: https://reviews.llvm.org/D115306
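The row-major layout assumption means an element's flat offset in the exposed buffer is determined by its multi-dimensional index. A small sketch (the function name is hypothetical, not LLVM's API):

```python
def row_major_index(shape, indices):
    """Flat element offset in a row-major tensor buffer.

    The last index varies fastest: for shape [2, 3], element (1, 2)
    lands at offset 1*3 + 2 = 5. Multiply by element size to get a
    byte offset into the buffer.
    """
    assert len(shape) == len(indices)
    offset = 0
    for dim, idx in zip(shape, indices):
        assert 0 <= idx < dim
        offset = offset * dim + idx
    return offset
```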
|