History log of /llvm-project/llvm/lib/Analysis/TensorSpec.cpp (Results 1 – 9 of 9)
Revision Date Author Comments
Revision tags: llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1
# 7d31d3b0 28-Jan-2023 Mircea Trofin <mtrofin@google.com>

Fix "not all control paths return a value" introduced by D142642


# 5b8dc7c8 26-Jan-2023 Mircea Trofin <mtrofin@google.com>

[mlgo] Introduce an "InteractiveModelRunner"

This is a model runner for ML researchers using environments like
CompilerGym. In such environments, researchers host the compiler and
want to be able to observe the problem space (features) at each decision
step of some optimization pass. At that point the compiler is stopped,
waiting for the host to make a decision and provide advice back to
the compiler, which then continues its normal operation, and so on.

The InteractiveModelRunner supports this scenario for the feature set
exposed by the compiler at a given time. It uses two files - ideally FIFO
pipes - one to pass data to the host, the other to get advice back from
the host. This means the scenario is supported with no special
dependencies. Creating and deleting the files is the responsibility of
the host. Hooking up this model evaluator to an MLGO-ed pass is the
responsibility of the pass author; subsequent patches will do so for
the current set of MLGO passes, and offer an API to easily "just opt in"
by default when adding MLGO support to a new pass.

The data protocol is that of the training logger: the host sees a training
log doled out observation by observation by reading from one of the
files, and passes back its advice as a serialized tensor (i.e. tensor value
memory dump) via the other file.

There are some differences with respect to the log seen during training: the
interactive model doesn't currently include the outcome (because it should be
identical to the decision, and it's also not present in "release"
mode); and partial rewards aren't currently communicated back.

The assumption - just like with the training logger - is that the host
is co-located, thus avoiding any endianness concerns. In a distributed
environment, it is up to the hosting infrastructure to mediate
that.

Differential Revision: https://reviews.llvm.org/D142642
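
As a rough host-side illustration of the two-pipe setup described above, here is a minimal C++ sketch. The pipe names, the per-observation framing (a size header line followed by raw feature bytes), and the single-int64 advice are assumptions made for this example only; the actual wire format is defined by the training logger and the InteractiveModelRunner, not by this snippet.

// Hypothetical host-side loop for the two-FIFO setup. The framing here
// (one size line, then raw feature bytes, answered with one raw int64)
// is illustrative and not LLVM's actual protocol.
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

int main() {
  // The compiler writes observations here; the host reads them.
  std::ifstream FromCompiler("compiler_to_host.fifo", std::ios::binary);
  // The host writes its advice here; the compiler reads it.
  std::ofstream ToCompiler("host_to_compiler.fifo", std::ios::binary);

  std::string Header;
  while (std::getline(FromCompiler, Header)) {
    // Assumed header: the payload size in bytes. A real host would parse
    // the JSON tensor specs instead.
    std::vector<char> Features(std::stoul(Header));
    FromCompiler.read(Features.data(),
                      static_cast<std::streamsize>(Features.size()));

    // Placeholder policy: always advise 0. A researcher would run their
    // model on the decoded features here.
    const int64_t Advice = 0;
    ToCompiler.write(reinterpret_cast<const char *>(&Advice), sizeof(Advice));
    ToCompiler.flush();
  }
  return 0;
}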


Revision tags: llvmorg-17-init, llvmorg-15.0.7
# d4b6fcb3 14-Dec-2022 Fangrui Song <i@maskray.me>

[Analysis] llvm::Optional => std::optional


# 4c97745b 06-Dec-2022 Mircea Trofin <mtrofin@google.com>

Reapply "[mlgo] Dependency-free training mode logger"

This reverts commit 8abe7b11f74bea63d3134c144137b72146da0c7b.

Added the cast whose absence was causing a build problem on certain compilers.


# 8abe7b11 06-Dec-2022 Florian Hahn <flo@fhahn.com>

Revert "[mlgo] Dependency-free training mode logger"

This reverts commit c5ff6f72342e0a4b0ba2ec9f603bedca86721e80.

This breaks building on macOS:

FAILED: lib/Analysis/CMakeFiles/LLVMAnalysis.dir/T

Revert "[mlgo] Dependency-free training mode logger"

This reverts commit c5ff6f72342e0a4b0ba2ec9f603bedca86721e80.

This breaks building on macOS:

FAILED: lib/Analysis/CMakeFiles/LLVMAnalysis.dir/TensorSpec.cpp.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -DBUILD_EXAMPLES -DGTEST_HAS_RTTI=0 -D_DEBUG -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS -I/Users/buildslave/jenkins/workspace/clang-stage1-cmake-RA-incremental/clang-build/lib/Analysis -I/Users/buildslave/jenkins/workspace/clang-stage1-cmake-RA-incremental/llvm-project/llvm/lib/Analysis -I/Users/buildslave/jenkins/workspace/clang-stage1-cmake-RA-incremental/clang-build/include -I/Users/buildslave/jenkins/workspace/clang-stage1-cmake-RA-incremental/llvm-project/llvm/include -fPIC -fvisibility-inlines-hidden -Werror=date-time -Werror=unguarded-availability-new -Wall -Wextra -Wno-unused-parameter -Wwrite-strings -Wcast-qual -Wmissing-field-initializers -pedantic -Wno-long-long -Wc++98-compat-extra-semi -Wimplicit-fallthrough -Wcovered-switch-default -Wno-noexcept-type -Wnon-virtual-dtor -Wdelete-non-virtual-dtor -Wstring-conversion -Wmisleading-indentation -Wctad-maybe-unsupported -fdiagnostics-color -O3 -DNDEBUG -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk -mmacosx-version-min=10.14 -fno-exceptions -fno-rtti -UNDEBUG -std=c++17 -MD -MT lib/Analysis/CMakeFiles/LLVMAnalysis.dir/TensorSpec.cpp.o -MF lib/Analysis/CMakeFiles/LLVMAnalysis.dir/TensorSpec.cpp.o.d -o lib/Analysis/CMakeFiles/LLVMAnalysis.dir/TensorSpec.cpp.o -c /Users/buildslave/jenkins/workspace/clang-stage1-cmake-RA-incremental/llvm-project/llvm/lib/Analysis/TensorSpec.cpp
In file included from /Users/buildslave/jenkins/workspace/clang-stage1-cmake-RA-incremental/llvm-project/llvm/lib/Analysis/TensorSpec.cpp:16:
In file included from /Users/buildslave/jenkins/workspace/clang-stage1-cmake-RA-incremental/llvm-project/llvm/include/llvm/Analysis/TensorSpec.h:16:
/Users/buildslave/jenkins/workspace/clang-stage1-cmake-RA-incremental/llvm-project/llvm/include/llvm/Support/JSON.h:354:29: error: non-constant-expression cannot be narrowed from type 'unsigned long' to 'int64_t' (aka 'long long') in initializer list [-Wc++11-narrowing]
create<int64_t>(int64_t{I});
^
/Users/buildslave/jenkins/workspace/clang-stage1-cmake-RA-incremental/llvm-project/llvm/lib/Analysis/TensorSpec.cpp:55:18: note: in instantiation of function template specialization 'llvm::json::Value::Value<unsigned long, void, void, void>' requested here
OS.value(D);
^
/Users/buildslave/jenkins/workspace/clang-stage1-cmake-RA-incremental/llvm-project/llvm/include/llvm/Support/JSON.h:354:29: note: insert an explicit cast to silence this issue
create<int64_t>(int64_t{I});
^
static_cast<int64_t>( )
1 error generated.

https://green.lab.llvm.org/green/job/clang-stage1-cmake-RA-incremental/33120/consoleFull#-145995569149ba4694-19c4-4d7e-bec5-911270d8a58c
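
For context on the failure above: brace-initializing an int64_t from a non-constant unsigned long is a narrowing conversion, which Clang rejects under -Wc++11-narrowing on 64-bit macOS, where unsigned long does not fit entirely in long long. A standalone reproduction, with a fix along the lines the compiler note suggests (presumably what the cast added in 4c97745b did), looks like this:

// Minimal reproduction of the -Wc++11-narrowing error quoted above.
#include <cstdint>

int64_t toSigned(unsigned long I) {
  // Ill-formed: non-constant-expression cannot be narrowed from
  // 'unsigned long' to 'int64_t' in an initializer list.
  // return int64_t{I};

  // Making the conversion explicit silences the diagnostic.
  return static_cast<int64_t>(I);
}

int main() { return toSigned(42UL) == 42 ? 0 : 1; }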


# c5ff6f72 05-Dec-2022 Mircea Trofin <mtrofin@google.com>

[mlgo] Dependency-free training mode logger

This is the next step in dropping the dependency on protobuf.

The simple logger produces an output consisting of lines of JSON
strings. Tensor values - which should constitute the bulk of the data -
are serialized as raw byte buffers. This allows for lightweight reading
of the values.

The next step is to switch the training logic to the new logging format,
following which the protobuf-based logger will be dropped, together with
the training dependency on protobuf.

Subsequent changes will also stop buffering and stream instead; the
buffering model is just a convenient point-in-time solution.

Differential Revision: https://reviews.llvm.org/D139370
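
To make the "lines of JSON plus raw tensor buffers" idea concrete, here is an illustrative C++ writer for a simplified format of that general shape. The field names and framing are assumptions for this example; the real logger's schema is defined by the LLVM sources and the training tooling, not here.

// Illustrative only: a logger in the same spirit - a JSON text line
// describing a tensor, followed by the tensor's raw bytes.
#include <fstream>
#include <string>
#include <vector>

// Hypothetical helper: emit one observation for a float feature tensor.
void logObservation(std::ofstream &OS, const std::string &Name,
                    const std::vector<float> &Values) {
  OS << "{\"feature\":\"" << Name << "\",\"type\":\"float32\",\"shape\":["
     << Values.size() << "]}\n";
  // Raw byte dump of the values - cheap to write and cheap to read back.
  OS.write(reinterpret_cast<const char *>(Values.data()),
           static_cast<std::streamsize>(Values.size() * sizeof(float)));
  OS << "\n";
}

int main() {
  std::ofstream Log("training_log.bin", std::ios::binary);
  logObservation(Log, "edge_count", {1.0f, 2.0f, 3.0f});
  return 0;
}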


# 19aff0f3 03-Dec-2022 Kazu Hirata <kazu@google.com>

[Analysis] Use std::nullopt instead of None (NFC)

This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
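
A one-line illustration of the mechanical rewrite described above (the function and variable names are invented for this example):

#include <optional>

// Before:  Optional<int> Width = None;
// After:   std::optional<int> Width = std::nullopt;
std::optional<int> getWidth(bool Known) {
  if (Known)
    return 64;
  return std::nullopt;
}

int main() { return getWidth(true).value_or(0) == 64 ? 0 : 1; }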


# 1ee3bb17 30-Nov-2022 Mircea Trofin <mtrofin@google.com>

[mlgo][nfc] Make `LoggedFeatureSpec` an implementation detail

It's an artifact very specific to using TFAgents during training, so it
belongs with ModelUnderTrainingRunner.

Differential Revision: https://reviews.llvm.org/D139031


Revision tags: llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, working, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init, llvmorg-14.0.6, llvmorg-14.0.5, llvmorg-14.0.4, llvmorg-14.0.3, llvmorg-14.0.2
# b1fa5ac3 25-Apr-2022 Mircea Trofin <mtrofin@google.com>

[mlgo] Factor out TensorSpec

This is a simple datatype with a few JSON utilities, and is independent
of the underlying executor. The main motivation is to allow taking a
dependency on it on the AOT side, and to allow us to build a correctly-sized
buffer in cases where the requested feature isn't supported by the
model. This, in turn, allows us to grow the feature set supported by the
compiler in a backward-compatible way, and also to collect traces exposing
the new features while starting off the older model, and to continue
training from those new traces.

Differential Revision: https://reviews.llvm.org/D124417
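
As a sketch of the "correctly-sized buffer" idea - computing a buffer's size purely from an element type and a shape, so a zero-filled placeholder can be handed to an older model when it doesn't know a newer feature - consider the following. The struct and its fields are invented for this example and are not LLVM's TensorSpec API.

// Illustrative sketch only; not LLVM's TensorSpec.
#include <cstddef>
#include <cstdint>
#include <numeric>
#include <vector>

struct SimpleTensorSpec {
  size_t ElementSize;          // e.g. sizeof(float) for a float32 tensor
  std::vector<int64_t> Shape;  // e.g. {1, 9} for a 1x9 feature vector

  size_t totalBytes() const {
    const int64_t Elements =
        std::accumulate(Shape.begin(), Shape.end(), int64_t{1},
                        [](int64_t A, int64_t B) { return A * B; });
    return static_cast<size_t>(Elements) * ElementSize;
  }
};

int main() {
  SimpleTensorSpec Spec{sizeof(float), {1, 9}};
  // A zero-filled buffer of exactly the size the model expects.
  std::vector<char> Placeholder(Spec.totalBytes(), 0);
  return Placeholder.size() == 36 ? 0 : 1;
}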
