d010ec6a | 22-Aug-2024 | Chris Apple <cja-private@pm.me>
[clang][rtsan] Introduce realtime sanitizer codegen and driver (#102622)
Introduce the `-fsanitize=realtime` flag in clang driver
Plug in the RealtimeSanitizer PassManager pass in Codegen, and attribute
a function based on whether it has the `[[clang::nonblocking]]` function
effect.
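
A minimal sketch of how the two pieces fit together (hedged: the function
and the exact runtime diagnostic are illustrative, not taken from the patch):
```
// Build with: clang++ -fsanitize=realtime example.cpp
#include <cstdlib>

// The [[clang::nonblocking]] effect marks this function for RTSan checking.
void process(float *buf, int n) [[clang::nonblocking]] {
  for (int i = 0; i < n; ++i)
    buf[i] *= 0.5f;            // real-time-safe arithmetic is fine
  void *p = std::malloc(64);   // an allocation here is flagged at runtime
  std::free(p);
}
```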

Revision tags: llvmorg-19.1.0-rc3

2c8bd4a7 | 13-Aug-2024 | Helena Kotas <hekotas@microsoft.com>
[HLSL] Mark exported functions with "hlsl.export" attribute (#102275)
Marks exported functions with the `"hlsl.export"` attribute. This
information will later be used by the DXILFinalizeLinkage pass (coming soon)
to determine which functions should have internal linkage in the final
DXIL code.
Related to #llvm/llvm-project#92071

80525dfc | 13-Aug-2024 | Johannes Doerfert <johannes@jdoerfert.de>
[Offload][CUDA] Allow CUDA kernels to use LLVM/Offload (#94549)
Through the new `-foffload-via-llvm` flag, CUDA kernels can now be
lowered to the LLVM/Offload API. On the Clang side, this is simply done
by using the OpenMP offload toolchain and emitting calls to `llvm*`
functions to orchestrate the kernel launch rather than `cuda*`
functions. These `llvm*` functions are implemented on top of the
existing LLVM/Offload API.
As we are about to redefine the Offload API, this will help us in the
design process as a second offload language.
We do not support any CUDA APIs yet; however, we could (see
https://www.osti.gov/servlets/purl/1892137).
For proper host execution we need to resurrect/rebase
https://tianshilei.me/wp-content/uploads/2021/12/llpp-2021.pdf
(which was designed for debugging).
```
❯❯❯ cat test.cu
#include <cstdio>  // printf
#include <cstddef> // size_t
extern "C" {
void *llvm_omp_target_alloc_shared(size_t Size, int DeviceNum);
void llvm_omp_target_free_shared(void *DevicePtr, int DeviceNum);
}
__global__ void square(int *A) { *A = 42; }
int main(int argc, char **argv) {
int DevNo = 0;
int *Ptr = reinterpret_cast<int *>(llvm_omp_target_alloc_shared(4, DevNo));
*Ptr = 7;
printf("Ptr %p, *Ptr %i\n", Ptr, *Ptr);
square<<<1, 1>>>(Ptr);
printf("Ptr %p, *Ptr %i\n", Ptr, *Ptr);
llvm_omp_target_free_shared(Ptr, DevNo);
}
❯❯❯ clang++ test.cu -O3 -o test123 -foffload-via-llvm --offload-arch=native
❯❯❯ llvm-objdump --offloading test123
test123: file format elf64-x86-64
OFFLOADING IMAGE [0]:
kind elf
arch gfx90a
triple amdgcn-amd-amdhsa
producer openmp
❯❯❯ LIBOMPTARGET_INFO=16 ./test123
Ptr 0x155448ac8000, *Ptr 7
Ptr 0x155448ac8000, *Ptr 42
```

d179acd0 | 09-Aug-2024 | Ahmed Bougacha <ahmed@bougacha.org>
[clang] Implement -fptrauth-auth-traps. (#102417)
This provides -fptrauth-auth-traps, which at the frontend level only
controls the addition of the "ptrauth-auth-traps" function attribute.
The attribute in turn controls various aspects of backend codegen, by
providing the guarantee that every "auth" operation generated will trap
on failure.
This can either be delegated to the hardware (if AArch64 FPAC is known
to be available), in which case this attribute doesn't change codegen.
Otherwise, if FPAC isn't available, this asks the backend to emit
additional instructions to check and trap on auth failure.

2eb6e30f | 09-Aug-2024 | Ahmed Bougacha <ahmed@bougacha.org>
[clang] Wire -fptrauth-returns to "ptrauth-returns" fn attribute. (#102416)
We already ended up with -fptrauth-returns, the feature macro, the lang
opt, and the actual backend lowering.
The only part left is threading it all through PointerAuthOptions, to
drive the addition of the "ptrauth-returns" attribute to generated
functions.
While there, do minor cleanup on ptrauth-function-attributes.c.
This also adds ptrauth_key_return_address to ptrauth.h.

Revision tags: llvmorg-19.1.0-rc2

ea98dc8b | 27-Jul-2024 | Jacek Caban <jacek@codeweavers.com>
[clang][ARM64EC] Add support for hybrid_patchable attribute. (#99478)
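
A hedged sketch of its use (the GNU attribute spelling below is an
assumption about what this change accepts):
```
// ARM64EC: mark a function so its native code can be patched at runtime.
__attribute__((hybrid_patchable)) int compute(int x) { return x + 1; }
```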

Revision tags: llvmorg-19.1.0-rc1, llvmorg-20-init

b8721fa0 | 23-Jul-2024 | Ahmed Bougacha <ahmed@bougacha.org>
[AArch64][PAC] Sign block addresses used in indirectbr. (#97647)
Enabled in clang using:
-fptrauth-indirect-gotos
and at the IR level using function attribute:
"ptrauth-indirect-gotos"
Signing uses IA and a per-function integer discriminator. The
discriminator isn't ABI-visible, and is currently:
ptrauth_string_discriminator("<function_name> blockaddress")
A sufficiently sophisticated frontend could benefit from per-indirectbr
discrimination, which would need additional machinery, such as allowing
"ptrauth" bundles on indirectbr. For our purposes, the simple scheme
above is sufficient.
This approach doesn't support subtracting label addresses and using
the result as offsets, because each label address is signed.
Pointer arithmetic on signed pointers corrupts the signature bits,
and because label address expressions aren't typed beyond void*,
we can't do anything reliably intelligent on the arithmetic exprs.
Not signing addresses when used to form offsets would allow
easily hijacking control flow by overwriting the offset.
This diagnoses the basic cases (`&&lbl2 - &&lbl1`) in the frontend,
while we evaluate either alternative implementations (e.g., lowering
blockaddress to a bb number, and indirectbr to a checked jump-table),
or better diagnostics (both at the frontend level and on unencodable
IR constants).
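
A small sketch of the constructs involved (illustrative names; assumes
compilation with -fptrauth-indirect-gotos):
```
// Each &&label value stored below is signed with IA and the per-function
// discriminator; the computed goto authenticates the target before jumping.
void dispatch(int i) {
  static void *targets[] = { &&handle_a, &&handle_b };
  goto *targets[i];   // lowered to indirectbr
handle_a:
  return;
handle_b:
  return;
}
// Diagnosed in the frontend: `&&handle_b - &&handle_a`, since arithmetic
// on signed label addresses would corrupt the signature bits.
```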

c719d7b3 | 19-Jul-2024 | Alexandros Lamprineas <alexandros.lamprineas@arm.com>
[FMV][AArch64] Do not optimize away runtime checks for implied features (#99522)
When generating the body of the ifunc resolver, clang skips runtime
checks for features that are implied from the command line. We bend this
rule for certain features (memtag, bti, dgh), but this happens quite
arbitrarily in my opinion. The reasoning is that some features are in
the HINT instruction space, meaning they operate as NOPs if the hardware
does not support them. Still the user wants to detect their presence
with runtime checks. See #90928 for details.
I think we should always perform runtime checks regardless of the
feature and then try to statically resolve calls whenever a function is
compiled with a sufficiently high set of architecture features (so
including target/target_version/target_clones attributes, and command
line options). This is what GCC does. We have an open PR in LLVM
GlobalOpt since it was suggested not to perform such codegen
optimizations in clang anyway. See #87939.
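
For illustration, a hedged FMV example (names hypothetical): under the
proposed rule the resolver would still check `mte` at runtime even when the
command line implies it, leaving static resolution to later IPO passes:
```
__attribute__((target_version("mte"))) int impl(void) { return 1; }
__attribute__((target_version("default"))) int impl(void) { return 0; }

int call(void) { return impl(); }  // dispatched through the ifunc resolver
```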

f6b06b42 | 18-Jul-2024 | Akira Hatanaka <ahatanak@gmail.com>
[PAC] Implement function pointer re-signing (#98847)
Re-signing occurs when function type discrimination is enabled and a
function pointer is converted to another function pointer type that
requires signing using a different discriminator. A function pointer is
re-signed using discriminator zero when it's converted to a pointer to a
non-function type such as `void*`.
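
A hedged sketch of the two conversion kinds described above (illustrative
code, not from the patch):
```
typedef int (*IntFn)(int);
typedef void (*VoidFn)(void);

void convert(IntFn f) {
  // Different function type => different type discriminator: f is
  // authenticated with the old discriminator, re-signed with the new one.
  VoidFn g = (VoidFn)f;
  // Non-function pointer type => re-signed with discriminator zero.
  void *p = (void *)f;
  (void)g;
  (void)p;
}
```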
---------
Co-authored-by: Ahmed Bougacha <ahmed@bougacha.org>
Co-authored-by: John McCall <rjmccall@apple.com>

9ad72df5 | 15-Jul-2024 | Mariya Podchishchaeva <mariya.podchishchaeva@intel.com>
[clang] Use different memory layout type for _BitInt(N) in LLVM IR (#91364)
There are two problems with _BitInt prior to this patch:
1. For at least some values of N, we cannot use LLVM's iN for the type
of struct elements, array elements, allocas, global variables, and so
on, because the LLVM layout for that type does not match the high-level
layout of _BitInt(N).
Example: currently, for i128:128 targets, a correct implementation is
possible either for __int128 or for _BitInt(129+) with lowering to iN,
but not both, since we now have a correct implementation of __int128 in
place after a21abc7.
When this happens, opaque [M x i8] types are used, where M =
sizeof(_BitInt(N)).
2. LLVM doesn't guarantee any particular extension behavior for integer
types whose width isn't a multiple of 8. For this reason, all _BitInt
types now have an in-memory representation that is a whole number of
bytes; for example, _BitInt(17) now has memory layout type i32.
This patch also introduces the concept of a load/store type and adds an
API to CodeGenTypes that returns the IR type that should be used for
load and store operations. This is particularly useful when a _BitInt
ends up with an array of bytes as its memory layout type. For
_BitInt(N), let M = sizeof(_BitInt(N)) and BITS = M * 8; loads and
stores of iBITS both (1) produce far better code from the backends and
(2) are far more optimizable by IR passes than loads and stores of [M
x i8].
Fixes https://github.com/llvm/llvm-project/issues/85139
Fixes https://github.com/llvm/llvm-project/issues/83419
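
For illustration, a small program observing the byte-rounded storage (the
concrete sizes are an assumption for a typical 64-bit target; exact values
are ABI-dependent, and _BitInt in C++ is a Clang extension):
```
#include <cstdio>

struct S { _BitInt(17) x; };

int main() {
  // With this patch, _BitInt(17) has memory layout type i32, so its
  // storage is a whole number of bytes (e.g. 4 on x86-64).
  std::printf("%zu %zu\n", sizeof(_BitInt(17)), sizeof(S));
}
```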
---------
Co-authored-by: John McCall <rjmccall@gmail.com>

1b8ab2f0 | 27-Jun-2024 | Oliver Hunt <oliver@apple.com>
[clang] Implement pointer authentication for C++ virtual functions, v-tables, and VTTs (#94056)
Virtual function pointer entries in v-tables are signed with address
discrimination in addition to declaration-based discrimination, where the
integer discriminator is the string hash (see
`ptrauth_string_discriminator`) of the mangled name of the overridden
method. This notably provides diversity based on the full signature of
the overridden method, including the method name and parameter types.
This patch introduces ItaniumVTableContext logic to find the original
declaration of the overridden method.
On AArch64, these pointers are signed using the `IA` key (the
process-independent code key.)
V-table pointers can be signed with either no discrimination, or a
similar scheme using address and decl-based discrimination. In this
case, the integer discriminator is the string hash of the mangled
v-table identifier of the class that originally introduced the vtable
pointer.
On AArch64, these pointers are signed using the `DA` key (the
process-independent data key.)
Not using discrimination allows attackers to simply copy valid v-table
pointers from one object to another. However, using a uniform
discriminator of 0 does have positive performance and code-size
implications on AArch64, and diversity for the most important v-table
access pattern (virtual dispatch) is already better assured by the
signing schemas used on the virtual functions. It is also known that
some code in practice copies objects containing v-tables with `memcpy`,
and while this is not permitted formally, it is something that may be
invasive to eliminate.
This is controlled by:
```
-fptrauth-vtable-pointer-type-discrimination
-fptrauth-vtable-pointer-address-discrimination
```
In addition, this provides fine-grained controls in the
ptrauth_vtable_pointer attribute, which allows overriding the default
ptrauth schema for vtable pointers on a given class hierarchy, e.g.:
```
[[clang::ptrauth_vtable_pointer(no_authentication, no_address_discrimination,
no_extra_discrimination)]]
[[clang::ptrauth_vtable_pointer(default_key, default_address_discrimination,
custom_discrimination, 0xf00d)]]
```
The override is then mangled as a parametrized vendor extension:
```
"__vtptrauth" I
<key>
<addressDiscriminated>
<extraDiscriminator>
E
```
To support this attribute, this patch adds a small extension to the
attribute-emitter tablegen backend.
Note that there are known areas where signing is either missing
altogether or can be strengthened. Some will be addressed in later
changes (e.g., member function pointers, some RTTI).
`dynamic_cast` in particular is handled by emitting an artificial
v-table pointer load (in a way that always authenticates it) before the
runtime call itself, as the runtime doesn't have enough information
today to properly authenticate it. Instead, the runtime is currently
expected to strip the v-table pointer.
---------
Co-authored-by: John McCall <rjmccall@apple.com>
Co-authored-by: Ahmed Bougacha <ahmed@bougacha.org>

d75f9dd1 | 24-Jun-2024 | Stephen Tozer <stephen.tozer@sony.com>
Revert "[IR][NFC] Update IRBuilder to use InsertPosition (#96497)"
Reverts the above commit, as it updates a common header function and did not update all callsites:
https://lab.llvm.org/buildbot
Revert "[IR][NFC] Update IRBuilder to use InsertPosition (#96497)"
Reverts the above commit, as it updates a common header function and did not update all callsites:
https://lab.llvm.org/buildbot/#/builders/29/builds/382
This reverts commit 6481dc57612671ebe77fe9c34214fba94e1b3b27.

6481dc57 | 24-Jun-2024 | Stephen Tozer <stephen.tozer@sony.com>
[IR][NFC] Update IRBuilder to use InsertPosition (#96497)
Uses the new InsertPosition class (added in #94226) to simplify some of
the IRBuilder interface, and removes the need to pass a BasicBlock
alongside a BasicBlock::iterator, using the fact that we can now get the
parent basic block from the iterator even if it points to the sentinel.
This patch removes the BasicBlock argument from each constructor or call
to setInsertPoint.
This has no functional effect, but later on as we look to remove the
`Instruction *InsertBefore` argument from instruction-creation
(discussed
[here](https://discourse.llvm.org/t/psa-instruction-constructors-changing-to-iterator-only-insertion/77845)),
this will simplify the process by allowing us to deprecate the
InsertPosition constructor directly and catch all the cases where we use
instructions rather than iterators.

e23250ec | 21-Jun-2024 | Ahmed Bougacha <ahmed@bougacha.org>
[clang] Implement function pointer signing and authenticated function calls (#93906)
The functions are currently always signed/authenticated with zero
discriminator.
Co-Authored-By: John McCall <rjmccall@apple.com>

80f88148 | 20-Jun-2024 | Stephen Tozer <stephen.tozer@sony.com>
[LLVM] Add InsertPosition union-type to remove overloads of Instruction-creation (#94226)
This patch simplifies instruction creation by replacing all overloads of
instruction constructors/Create methods that are identical other than
the Instruction *InsertBefore/BasicBlock *InsertAtEnd/BasicBlock::iterator
InsertBefore argument with a single version that takes an InsertPosition
argument. The InsertPosition class can be implicitly constructed from
any of the above, internally converting them to the appropriate
BasicBlock::iterator value which can then be used to insert the
instruction (or to not insert it if an invalid iterator is passed).
The upshot of this is that code will be deduplicated, and all callsites
will switch to calling the new unified version without any changes
needed to make the compiler happy. There is at least one exception to
this; the construction of InsertPosition is a user-defined conversion,
so any caller that was already relying on a different user-defined
conversion won't work. In all of LLVM and Clang this happens exactly
once: at clang/lib/CodeGen/CGExpr.cpp:123 we try to construct an alloca
with an AssertingVH<Instruction> argument, which must now be cast to an
Instruction* by using `&*`. If this is more common elsewhere, it could
be fixed by adding an appropriate constructor to InsertPosition.
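
A greatly simplified sketch of the idea (not the actual LLVM class; it only
illustrates the single implicit-conversion hub that replaces the
per-constructor overloads):
```
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

class InsertPositionSketch {
  BasicBlock::iterator InsertAt;

public:
  // One implicit conversion per old overload parameter type.
  InsertPositionSketch(Instruction *InsertBefore)
      : InsertAt(InsertBefore ? InsertBefore->getIterator()
                              : BasicBlock::iterator()) {}
  InsertPositionSketch(BasicBlock *InsertAtEnd)
      : InsertAt(InsertAtEnd ? InsertAtEnd->end() : BasicBlock::iterator()) {}
  InsertPositionSketch(BasicBlock::iterator InsertBefore)
      : InsertAt(InsertBefore) {}

  operator BasicBlock::iterator() const { return InsertAt; }
};
```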

Revision tags: llvmorg-18.1.8

48f8130a | 11-Jun-2024 | Alexander Shaposhnikov <6532716+alexander-shaposhnikov@users.noreply.github.com>
[Clang][Sanitizers] Add numerical sanitizer (#93783)
Add plumbing for the numerical sanitizer on Clang's side.

69e9e779 | 11-Jun-2024 | Pavel Samolysov <samolisov@gmail.com>
[clang] Replace X && isa<Y>(X) with isa_and_nonnull<Y>(X). NFC (#94987)
This addresses a clang-tidy suggestion.
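
The transformation in a nutshell (illustrative function):
```
#include "llvm/IR/Instructions.h"
#include "llvm/Support/Casting.h"
using namespace llvm;

bool isCall(Value *V) {
  // Before: return V && isa<CallInst>(V);
  return isa_and_nonnull<CallInst>(V);  // performs the null check itself
}
```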

Revision tags: llvmorg-18.1.7

3575d23c | 20-May-2024 | Ahmed Bougacha <ahmed@bougacha.org>
[clang][CodeGen] Remove unused LValue::getAddress CGF arg. (#92465)
This is in effect a revert of f139ae3d93797, as we have since gained a
more sophisticated way of doing extra IRGen with the addition of
RawAddress in #86923.

Revision tags: llvmorg-18.1.6

932ca856 | 17-May-2024 | Romaric Jodin <89833130+rjodinchr@users.noreply.github.com>
libclc: remove __attribute__((assume)) for clspv targets (#92126)
Instead, add a proper attribute in clang and convert it to function
metadata to keep the information in the IR. The goal is to remove the
dependency on __attribute__((assume)), which should not have been there
in the first place.
Ref https://github.com/llvm/llvm-project/pull/84934

e08f1fda | 14-May-2024 | Nathan Gauër <brioche@google.com>
[clang][SPIR-V] Always add convergence intrinsics (#88918)
PR #80680 added bits in the codegen to lazily add convergence intrinsics
when required. This logic relied on the LoopStack. The issue is that
when parsing the condition, the LoopStack doesn't yet reflect the
correct values, which is expected since we are not yet in the loop.
However, convergence tokens should sometimes already be available. The
solution which seemed the simplest is to greedily generate the tokens
when we generate SPIR-V.
Fixes #88144
---------
Signed-off-by: Nathan Gauër <brioche@google.com>

deffae5d | 11-May-2024 | Kazu Hirata <kazu@google.com>
[clang] Use StringRef::operator== instead of StringRef::equals (NFC) (#91844)
I'm planning to remove StringRef::equals in favor of
StringRef::operator==.
- StringRef::operator==/!= outnumber StringRef::equals by a factor of
24 under clang/ in terms of their usage.
- The elimination of StringRef::equals brings StringRef closer to
std::string_view, which has operator== but not equals.
- S == "foo" is more readable than S.equals("foo"), especially for
!Long.Expression.equals("str") vs Long.Expression != "str".
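
The replacement pattern (illustrative):
```
#include "llvm/ADT/StringRef.h"

bool isFoo(llvm::StringRef S) {
  // Before: return S.equals("foo");
  return S == "foo";
}
```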

80420229 | 03-May-2024 | Pavel Iliin <Pavel.Iliin@arm.com>
[FMV][AArch64] Don't optimize backward compatible features in resolver. (#90928)
For AArch64 features that are compatible with targets lacking support
for them, such as Branch Target Identification or MTE (Memory Tagging
Extension), we may encounter scenarios where a binary compiled with MTE,
for example, is executed on both MTE and non-MTE hardware, and we still
need to detect at runtime whether the MTE feature is available in order
to choose the appropriate function version.
So we cannot optimize the function multi-versioning resolver by removing
checks for these features when they are enabled for the target during
compilation.

64211710 | 03-May-2024 | cor3ntin <corentinjabot@gmail.com>
[Clang] Implement P2809: Trivial infinite loops are not Undefined Behavior (#90066)
https://wg21.link/P2809R3
This is applied as a DR to C++11 (C++98 did not guarantee forward
progress and is left untouched).
As an extension (and to preserve existing behavior in C), we consider
all controlling expressions that can be constant-folded in the front
end, not just standard constant expressions.
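
For illustration, the canonical trivial infinite loop that this change
guarantees is well-defined:
```
int main() {
  // The controlling expression is a constant expression evaluating to
  // true, so the optimizer may no longer assume the loop terminates or
  // delete it.
  while (true) {
  }
}
```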

Revision tags: llvmorg-18.1.5

d72146f4 | 29-Apr-2024 | Utkarsh Saxena <usx@google.com>
Re-apply "Emit missing cleanups for stmt-expr" and other commits (#89154)
Latest diff:
https://github.com/llvm/llvm-project/pull/89154/files/f1ab4c2677394bbfc985d9680d5eecd7b2e6a882..adf9bc902baddb
Re-apply "Emit missing cleanups for stmt-expr" and other commits (#89154)
Latest diff:
https://github.com/llvm/llvm-project/pull/89154/files/f1ab4c2677394bbfc985d9680d5eecd7b2e6a882..adf9bc902baddb156c83ce0f8ec03c142e806d45
We address two additional bugs here:
### Problem 1: Deactivated normal cleanup still runs, leading to
double-free
Consider the following:
```cpp
struct A { };
struct B { B(const A&); };
struct S {
A a;
B b;
};
int AcceptS(S s);
void Accept2(int x, int y);
void Test() {
Accept2(AcceptS({.a = A{}, .b = A{}}), ({ return; 0; }));
}
```
We add cleanups as follows:
1. push dtor for field `S::a`
2. push dtor for temp `A{}` (used by `B(const A&)` in `.b = A{}`)
3. push dtor for field `S::b`
4. Deactivate 3 `S::b` -> this pops the cleanup.
5. Deactivate 1 `S::a` -> does not pop the cleanup, as *2* is on top;
this must create an _active flag_!
6. push dtor for `~S()`.
7. ...
It is important that step **5** deactivates using an active flag.
Without it, the `return` would fall through and run both `~S()` and the
dtor for `S::a`, leading to a **double free** of the `A` member.
In this patch, we unconditionally emit active flags while deactivating
normal cleanups. These flags are deleted later by the `AllocaTracker` if
the cleanup is not emitted.
### Problem 2: Missing cleanup for conditional lifetime extension
We push 2 cleanups for each lifetime-extended temporary. The first cleanup is
useful if we exit from the middle of the expression (stmt-expr/coro
suspensions). This is deactivated after full-expr, and a new cleanup is
pushed, extending the lifetime of the temporaries (to the scope of the
reference being initialized).
If this lifetime extension happens to be conditional, then we use active
flags to remember whether the branch was taken and if the object was
initialized.
Previously, we used a **single** active flag, which was used by both
cleanups. This is wrong because the first cleanup will be forced to
deactivate after the full-expr and therefore this **active** flag will
always be **inactive**. The dtor for the lifetime extended entity would
not run as it always sees an **inactive** flag.
In this patch, we solve this using two separate active flags for both
cleanups. Both of them are activated if the conditional branch is taken,
but only one of them is deactivated after the full-expr.
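
A hedged sketch of the conditional case (illustrative types):
```
struct A {
  int x;
  ~A() {}
};

void f(bool cond) {
  // Each temporary is materialized on only one branch, so each
  // lifetime-extended cleanup needs its own active flag recording
  // whether its branch actually ran.
  const A &r = cond ? A{1} : A{2};
}  // exactly one ~A() runs here
```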
---
Fixes https://github.com/llvm/llvm-project/issues/63818
Fixes https://github.com/llvm/llvm-project/issues/88478
---
Previous PR logs:
1. https://github.com/llvm/llvm-project/pull/85398
2. https://github.com/llvm/llvm-project/pull/88670
3. https://github.com/llvm/llvm-project/pull/88751
4. https://github.com/llvm/llvm-project/pull/88884

Revision tags: llvmorg-18.1.4

9d8be240 | 16-Apr-2024 | Utkarsh Saxena <usx@google.com>
Revert "[codegen] Emit missing cleanups for stmt-expr and coro suspensions" and related commits (#88884)
The original change caused widespread breakages in msan/ubsan tests and
causes `use-after-fr
Revert "[codegen] Emit missing cleanups for stmt-expr and coro suspensions" and related commits (#88884)
The original change caused widespread breakages in msan/ubsan tests and
causes `use-after-free`. Most likely we are adding more cleanups than
necessary.