Revision tags: llvmorg-21-init, llvmorg-19.1.7 |
|
#
cfe26358 |
| 11-Jan-2025 |
Timm Baeder <tbaeder@redhat.com> |
Reapply "[clang] Avoid re-evaluating field bitwidth" (#122289)
|
#
26aa20a3 |
| 09-Jan-2025 |
Reid Kleckner <rnk@google.com> |
Use range-based for to iterate over fields in record layout, NFCI (#122029)
Move the common case of FieldDecl::getFieldIndex() inline to mitigate
the cost of removing the extra `FieldNo` induction variable.
Also rename isNoUniqueAddress parameter to isNonVirtualBaseType, which
appears to be more accurate. I think the current name is just a
consequence of autocomplete gone wrong.
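For illustration, a minimal sketch of the pattern the message describes (not the actual CGRecordLowering code; the function here is invented):
```
#include <cstdint>
#include "clang/AST/Decl.h"
#include "clang/AST/RecordLayout.h"
#include "llvm/Support/raw_ostream.h"

// Sketch only: iterate the fields of a record with a range-based for and ask
// each field for its own index, instead of maintaining a separate FieldNo
// induction variable alongside the loop.
static void dumpFieldOffsets(const clang::RecordDecl *RD,
                             const clang::ASTRecordLayout &Layout) {
  for (const clang::FieldDecl *FD : RD->fields()) {
    uint64_t OffsetBits = Layout.getFieldOffset(FD->getFieldIndex());
    llvm::errs() << FD->getName() << " @ bit " << OffsetBits << "\n";
  }
}
```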
|
#
59bdea24 |
| 08-Jan-2025 |
Timm Bäder <tbaeder@redhat.com> |
Revert "[clang] Avoid re-evaluating field bitwidth (#117732)"
This reverts commit 81fc3add1e627c23b7270fe2739cdacc09063e54.
This breaks some LLDB tests, e.g. SymbolFile/DWARF/x86/no_unique_address-with-bitfields.cpp:
lldb: ../llvm-project/clang/lib/AST/Decl.cpp:4604: unsigned int clang::FieldDecl::getBitWidthValue() const: Assertion `isa<ConstantExpr>(getBitWidth())' failed.
|
#
81fc3add |
| 08-Jan-2025 |
Timm Baeder <tbaeder@redhat.com> |
[clang] Avoid re-evaluating field bitwidth (#117732)
Save the bitwidth value as a `ConstantExpr` with the value set. Remove the `ASTContext` parameter from `getBitWidthValue()`, so the latter simply returns the value from the `ConstantExpr` instead of constant-evaluating the bitwidth expression every time it is called.
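A rough sketch of the resulting API shape (the real FieldDecl implementation may differ in detail):
```
#include "clang/AST/Decl.h"
#include "clang/AST/Expr.h"

// Rough sketch: the bit-width expression is stored as a ConstantExpr that
// already carries its evaluated result, so reading the value needs no
// ASTContext and no re-evaluation.
static unsigned getBitWidthValueSketch(const clang::FieldDecl *FD) {
  const auto *CE = llvm::cast<clang::ConstantExpr>(FD->getBitWidth());
  return CE->getResultAsAPSInt().getZExtValue();
}
```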
|
Revision tags: llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init |
|
#
4497ec29 |
| 16-Jul-2024 |
Michael Buch <michaelbuch12@gmail.com> |
[clang][CGRecordLayout] Remove dependency on isZeroSize (#96422)
This is a follow-up from the conversation starting at
https://github.com/llvm/llvm-project/pull/93809#issuecomment-2173729801
The root problem that motivated the change are external AST sources that
compute `ASTRecordLayout`s themselves instead of letting Clang compute
them from the AST. One such example is LLDB using DWARF to get the
definitive offsets and sizes of C++ structures. Such layouts should be
considered correct (modulo buggy DWARF), but various assertions and
lowering logic around the `CGRecordLayoutBuilder` relies on the AST
having `[[no_unique_address]]` attached to them. This is a
layout-altering attribute which is not encoded in DWARF. This causes
LLDB to trip over the various LLVM<->Clang layout consistency checks.
There has been precedent for avoiding such layout-altering attributes
from affecting lowering with externally-provided layouts (e.g., packed
structs).
This patch proposes to replace the `isZeroSize` checks in
`CGRecordLayoutBuilder` (which roughly means "empty field with
[[no_unique_address]]") with checks for
`CodeGen::isEmptyField`/`CodeGen::isEmptyRecord`.
**Details**
The main strategy here was to change the `isZeroSize` check in
`CGRecordLowering::accumulateFields` and
`CGRecordLowering::accumulateBases` to use the `isEmptyXXX` APIs
instead, preventing empty fields from being added to the `Members` and
`Bases` structures. The rest of the changes fall out from here, to
prevent lookups into these structures (for field numbers or base
indices) from failing.
Added `isEmptyRecordForLayout` and `isEmptyFieldForLayout` (open to
better naming suggestions). The main difference from the existing
`isEmptyRecord`/`isEmptyField` APIs is that the `isEmptyXXXForLayout`
counterparts don't have special treatment for `unnamed bitfields`/arrays
and also treat fields of empty types as if they had
`[[no_unique_address]]` (i.e., just like the `AsIfNoUniqueAddr` in
`isEmptyField` does).
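An approximate sketch of the strategy described for accumulateFields (isEmptyFieldForLayout is the helper named above; handleNonEmptyField is an invented stand-in for the real Members bookkeeping):
```
#include "clang/AST/ASTContext.h"
#include "clang/AST/Decl.h"

// Prototypes for the helpers referenced below; the first is the API named in
// the message, the second is a hypothetical stand-in.
bool isEmptyFieldForLayout(const clang::ASTContext &Context,
                           const clang::FieldDecl *FD);
void handleNonEmptyField(const clang::FieldDecl *FD);

// Approximate sketch, not the exact CGRecordLowering::accumulateFields logic:
// fields judged empty for layout purposes are skipped up front, so they never
// enter the Members/Bases structures and later lookups cannot fail on them.
static void accumulateFieldsSketch(const clang::ASTContext &Context,
                                   const clang::RecordDecl *RD) {
  for (const clang::FieldDecl *FD : RD->fields()) {
    if (FD->isBitField())
      continue; // bit-fields go through the separate bit-field accumulation
    if (isEmptyFieldForLayout(Context, FD))
      continue; // previously gated on the field's isZeroSize check
    handleNonEmptyField(FD);
  }
}
```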
|
#
9ad72df5 |
| 15-Jul-2024 |
Mariya Podchishchaeva <mariya.podchishchaeva@intel.com> |
[clang] Use different memory layout type for _BitInt(N) in LLVM IR (#91364)
There are two problems with _BitInt prior to this patch:
1. For at least some values of N, we cannot use LLVM's iN for the type
of struct elements, array elements, allocas, global variables, and so
on, because the LLVM layout for that type does not match the high-level
layout of _BitInt(N).
Example: currently, for i128:128 targets, a correct implementation is
possible either for __int128 or for _BitInt(129+) with lowering to iN,
but not both, since we now have a correct implementation of __int128 in
place after a21abc7.
When this happens, opaque [M x i8] types are used, where M =
sizeof(_BitInt(N)).
2. LLVM doesn't guarantee any particular extension behavior for integer
types whose width isn't a multiple of 8. For this reason, all _BitInt types
now have an in-memory representation that is a whole number of bytes.
For example, _BitInt(17) will now have memory layout type i32.
This patch also introduces the concept of a load/store type and adds an
API to CodeGenTypes that returns the IR type that should be used for
load and store operations. This is particularly useful for the case when
a _BitInt ends up having an array of bytes as its memory layout type.
For _BitInt(N), let M = sizeof(_BitInt(N)), and let BITS = M * 8. Loads
and stores of iBITS would both (1) produce far better code from the
backends and (2) be far more optimizable by IR passes than loads and
stores of [M x i8].
Fixes https://github.com/llvm/llvm-project/issues/85139
Fixes https://github.com/llvm/llvm-project/issues/83419
---------
Co-authored-by: John McCall <rjmccall@gmail.com>
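A simplified sketch of the layout idea (not the actual CodeGenTypes implementation, which also decides when the opaque array form is required):
```
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"

// Simplified sketch: with M = sizeof(_BitInt(N)) in bytes, loads and stores
// use iBITS where BITS = M * 8, so the accessed width is always a whole
// number of bytes (e.g. M == 4 gives i32 for _BitInt(17) on typical targets).
static llvm::Type *bitIntLoadStoreType(llvm::LLVMContext &Ctx, unsigned M) {
  return llvm::IntegerType::get(Ctx, M * 8);
}

// When iBITS cannot serve as the in-memory layout type, an opaque [M x i8]
// array of the same size is used instead.
static llvm::Type *bitIntOpaqueMemoryType(llvm::LLVMContext &Ctx, unsigned M) {
  return llvm::ArrayType::get(llvm::Type::getInt8Ty(Ctx), M);
}
```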
|
Revision tags: llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6 |
|
#
1d6bf0ca |
| 12-May-2024 |
Nathan Sidwell <nathan@acm.org> |
[clang][NFC] Remove class layout scissor (#89055)
PR #87090 amended `accumulateBitfields` to do the correct clipping. The scissor is no longer necessary and `checkBitfieldClipping` can compute its location directly when needed.
|
Revision tags: llvmorg-18.1.5, llvmorg-18.1.4 |
|
#
69d861e1 |
| 15-Apr-2024 |
Nathan Sidwell <nathan@acm.org> |
[clang] Move tailclipping to bitfield allocation (#87090)
Move bitfield access clipping to bitfield access computation.
|
Revision tags: llvmorg-18.1.3 |
|
#
ee994750 |
| 01-Apr-2024 |
Nathan Sidwell <nathan@acm.org> |
[clang] Fix bitfield access unit for vbase corner case (#87238)
This fixes #87227: a vbase can be placed below nvsize when empty members and/or bases are in play, and we must account for that.
|
#
49839f97 |
| 22-Mar-2024 |
Nathan Sidwell <nathan@acm.org> |
[clang] Better SysV bitfield access units (#65742)
Reimplement bitfield access unit computation for SysV ABIs. Considers expense of unaligned and non-power-of-2 accesses.
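For illustration only (access-unit grouping is target- and ABI-dependent), the kind of decision the reimplemented computation makes:
```
// Illustrative struct: the algorithm weighs whether adjacent bit-fields are
// cheaper to access as one wider unit or as several naturally aligned
// narrower units, penalizing unaligned and non-power-of-2 sized accesses.
struct Example {
  unsigned a : 3;
  unsigned b : 5;   // a and b fit in a single byte-sized access unit
  unsigned c : 24;  // c spans three bytes; a wider 4-byte unit is chosen only
                    // if the target handles such an access cheaply
};
```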
|
Revision tags: llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4 |
|
#
cb2dd028 |
| 26-Feb-2024 |
Nathan Sidwell <nathan@acm.org> |
[clang][NFC] constify or staticify some CGRecordLowering fns (#82874)
Some CGRecordLowering functions either do not need the object or do not mutate it, so they are marked static or const as appropriate.
|
Revision tags: llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init |
|
#
f3dcc235 |
| 13-Dec-2023 |
Kazu Hirata <kazu@google.com> |
[clang] Use StringRef::{starts,ends}_with (NFC) (#75149)
This patch replaces uses of StringRef::{starts,ends}with with
StringRef::{starts,ends}_with for consistency with
std::{string,string_view}::{starts,ends}_with in C++20.
I'm planning to deprecate and eventually remove
StringRef::{starts,ends}with.
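A minimal before/after example of the renamed API:
```
#include "llvm/ADT/StringRef.h"

// Behavior is unchanged; only the method name follows the C++20 spelling.
static bool isCXXSourceFile(llvm::StringRef Name) {
  return Name.ends_with(".cpp"); // previously: Name.endswith(".cpp")
}
```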
|
Revision tags: llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2 |
|
#
20488362 |
| 30-Sep-2023 |
JOE1994 <joseph942010@gmail.com> |
[NFC] Replace uses of Type::getPointerTo
Replace some uses of `Type::getPointerTo` in two ways:
* Remove entirely if it's only used to support an unnecessary bitcast (remove the bitcast as well).
* Replace with `PointerType::get`/`PointerType::getUnqual`.
NFC opaque pointer clean-up effort.
|
Revision tags: llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2 |
|
#
fd05c34b |
| 31-Jul-2023 |
Bjorn Pettersson <bjorn.a.pettersson@ericsson.com> |
Stop using legacy helpers indicating typed pointer types. NFC
Since we no longer support typed LLVM IR pointer types, the code can be simplified to, for example, use PointerType::get directly instead of Type::getInt8PtrTy, Type::getInt32PtrTy, etc.
Differential Revision: https://reviews.llvm.org/D156733
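A sketch of the replacement pattern (with opaque pointers only the address space is meaningful):
```
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"

// Sketch: the typed helpers such as Type::getInt8PtrTy reduce to a plain
// pointer type in the default address space.
static llvm::PointerType *defaultPtrTy(llvm::LLVMContext &Ctx) {
  return llvm::PointerType::getUnqual(Ctx); // or PointerType::get(Ctx, AS)
}
```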
|
Revision tags: llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init |
|
#
af6c0b6d |
| 13-Jan-2023 |
Vladislav Dzhidzhoev <vdzhidzhoev@accesssoftek.com> |
[clang][CodeGen] Use base subobject type layout for potentially-overlapping fields
RecordLayoutBuilder takes the size of a potentially-overlapping class/struct field with non-zero size to be the size of the base subobject type corresponding to the field type. Make CGRecordLayoutBuilder acknowledge that in order to avoid incorrect padding insertion.
Differential Revision: https://reviews.llvm.org/D139741
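An illustrative case of such a potentially-overlapping field (assuming the Itanium C++ ABI on a typical 64-bit target):
```
// Illustrative only. A has tail padding: sizeof(A) == 8 but its data size is
// 5 on a typical 64-bit target. The [[no_unique_address]] attribute makes `a`
// potentially overlapping, so `tail` may be laid out inside A's tail padding
// at offset 5 and sizeof(B) can stay 8. CGRecordLayoutBuilder must therefore
// size the field like the base subobject type rather than like sizeof(A),
// or it would insert padding that collides with `tail`.
struct A {
  int i;
  char c;
};

struct B {
  [[no_unique_address]] A a;
  char tail;
};
```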
|
#
bf5c17ed |
| 13-Jan-2023 |
Guillaume Chatelet <gchatelet@google.com> |
[clang][NFC] Remove dependency on DataLayout::getPrefTypeAlignment
|
Revision tags: llvmorg-15.0.7 |
|
#
19d300fb |
| 15-Dec-2022 |
Vladislav Dzhidzhoev <vdzhidzhoev@accesssoftek.com> |
Revert "[clang][CodeGen] Use base subobject type layout for potentially-overlapping fields"
This reverts commit 731abdfdcc33d813e6c3b4b89eff307aa5c18083.
This commit breaks some tests in libcxx, e.g. `std/utilities/expected/expected.expected/ctor/ctor.inplace.pass.cpp`
|
#
731abdfd |
| 09-Dec-2022 |
Vladislav Dzhidzhoev <vdzhidzhoev@accesssoftek.com> |
[clang][CodeGen] Use base subobject type layout for potentially-overlapping fields
RecordLayoutBuilder takes the size of a potentially-overlapping field with non-zero size to be the size of the base subobject type corresponding to the field type. Make CGRecordLayoutBuilder acknowledge that in order to avoid incorrect padding insertion.
Differential Revision: https://reviews.llvm.org/D139741
|
Revision tags: llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, working, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init, llvmorg-14.0.6, llvmorg-14.0.5, llvmorg-14.0.4, llvmorg-14.0.3, llvmorg-14.0.2, llvmorg-14.0.1, llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3, llvmorg-14.0.0-rc2, llvmorg-14.0.0-rc1, llvmorg-15-init, llvmorg-13.0.1, llvmorg-13.0.1-rc3, llvmorg-13.0.1-rc2 |
|
#
17d4bd3d |
| 09-Jan-2022 |
Kazu Hirata <kazu@google.com> |
[clang] Fix bugprone argument comments (NFC)
Identified with bugprone-argument-comment.
|
Revision tags: llvmorg-13.0.1-rc1, llvmorg-13.0.0, llvmorg-13.0.0-rc4, llvmorg-13.0.0-rc3, llvmorg-13.0.0-rc2, llvmorg-13.0.0-rc1, llvmorg-14-init, llvmorg-12.0.1, llvmorg-12.0.1-rc4, llvmorg-12.0.1-rc3, llvmorg-12.0.1-rc2, llvmorg-12.0.1-rc1, llvmorg-12.0.0, llvmorg-12.0.0-rc5, llvmorg-12.0.0-rc4, llvmorg-12.0.0-rc3, llvmorg-12.0.0-rc2, llvmorg-11.1.0, llvmorg-11.1.0-rc3, llvmorg-12.0.0-rc1, llvmorg-13-init, llvmorg-11.1.0-rc2 |
|
#
72f863fd |
| 19-Jan-2021 |
Bjorn Pettersson <bjorn.a.pettersson@ericsson.com> |
[CodeGen] Use getCharWidth() more consistently in CGRecordLowering. NFC
When using getByteArrayType the requested size is calculated in char units, but the type used for the array was hardcoded to Int8Ty. This patch uses getCharWidth a bit more consistently by using getIntNTy in combination with getCharWidth, instead of explicitly using getInt8Ty.
Reviewed By: rjmccall
Differential Revision: https://reviews.llvm.org/D94977
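A sketch of the described pattern (not the actual CGRecordLowering code; the free function is a stand-in):
```
#include "clang/AST/ASTContext.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"

// Sketch: derive the byte-array element type from the AST's char width
// instead of hard-coding i8, so targets where a char is not 8 bits are
// handled consistently.
static llvm::Type *getByteArrayTypeSketch(const clang::ASTContext &Context,
                                          llvm::LLVMContext &VMContext,
                                          clang::CharUnits Size) {
  llvm::Type *CharTy =
      llvm::Type::getIntNTy(VMContext, Context.getCharWidth());
  return llvm::ArrayType::get(CharTy, Size.getQuantity());
}
```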
|
Revision tags: llvmorg-11.1.0-rc1, llvmorg-11.0.1, llvmorg-11.0.1-rc2, llvmorg-11.0.1-rc1, llvmorg-11.0.0, llvmorg-11.0.0-rc6, llvmorg-11.0.0-rc5, llvmorg-11.0.0-rc4, llvmorg-11.0.0-rc3 |
|
#
101309fe |
| 24-Aug-2020 |
Bevin Hansson <bevin.hansson@ericsson.com> |
[AST] Change return type of getTypeInfoInChars to a proper struct instead of std::pair.
Followup to D85191.
This changes getTypeInfoInChars to return a TypeInfoChars struct instead of a std::pair of CharUnits. This lets the interface match getTypeInfo more closely.
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D86447
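A sketch of the resulting call-site shape (the wrapper function is invented):
```
#include "clang/AST/ASTContext.h"

// Sketch: the result is now a named struct rather than a std::pair, so width
// and alignment are read by name instead of .first/.second.
static clang::CharUnits typeWidthInChars(const clang::ASTContext &Ctx,
                                         clang::QualType T) {
  clang::TypeInfoChars Info = Ctx.getTypeInfoInChars(T);
  return Info.Width; // previously Info.first with the std::pair return type
}
```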
|
#
20898784 |
| 30-Sep-2020 |
Ties Stuij <ties.stuij@arm.com> |
[ARM] Follow AACPS standard for volatile bit-fields access width
This patch resumes the work of D16586. According to the AAPCS, volatile bit-fields should be accessed using containers of the width of their declarative type. In such a case:
```
struct S1 { short a : 1; };
```
should be accessed using loads and stores of that width (sizeof(short)), whereas the compiler currently only loads the minimum required width (char in this case). However, as discussed in D16586, that could overwrite non-volatile bit-fields, which conflicted with the C and C++ object models by creating data race conditions that are not part of the bit-field, e.g.
```
struct S2 { short a; int b : 16; };
```
Accessing `S2.b` would also access `S2.a`.
The AAPCS Release 2020Q2 (https://documentation-service.arm.com/static/5efb7fbedbdee951c1ccf186?token=), section 8.1 Data Types, page 36, "Volatile bit-fields - preserving number and width of container accesses", has been updated to avoid conflict with the C++ memory model. The note now reads:
```
This ABI does not place any restrictions on the access widths of bit-fields where the container overlaps with a non-bit-field member or where the container overlaps with any zero length bit-field placed between two other bit-fields. This is because the C/C++ memory model defines these as being separate memory locations, which can be accessed by two threads simultaneously. For this reason, compilers must be permitted to use a narrower memory access width (including splitting the access into multiple instructions) to avoid writing to a different memory location. For example, in struct S { int a:24; char b; }; a write to a must not also write to the location occupied by b, this requires at least two memory accesses in all current Arm architectures. In the same way, in struct S { int a:24; int:0; int b:8; };, writes to a or b must not overwrite each other.
```
I've updated the patch D16586 to follow such behavior by verifying that we only change volatile bit-field access when:
- it won't overlap with any other non-bit-field member
- we only access memory inside the bounds of the record
- it avoids overlapping zero-length bit-fields.
Regarding the number of memory accesses (which should also be preserved), that will be implemented by D67399.
Reviewed By: ostannard
Differential Revision: https://reviews.llvm.org/D72932
|
#
d6f3f612 |
| 08-Sep-2020 |
Ties Stuij <ties.stuij@arm.com> |
Revert "[ARM] Follow AACPS standard for volatile bit-fields access width"
This reverts commit 514df1b2bb1ecd1a33327001ea38a347fd2d0380.
Some of the buildbots got llvm-lit errors on CodeGen/volatile.c
|
#
514df1b2 |
| 28-Aug-2020 |
Ties Stuij <ties.stuij@arm.com> |
[ARM] Follow AACPS standard for volatile bit-fields access width
This patch resumes the work of D16586. According to the AAPCS, volatile bit-fields should be accessed using containers of the width of their declarative type. In such a case:
```
struct S1 { short a : 1; };
```
should be accessed using loads and stores of that width (sizeof(short)), whereas the compiler currently only loads the minimum required width (char in this case). However, as discussed in D16586, that could overwrite non-volatile bit-fields, which conflicted with the C and C++ object models by creating data race conditions that are not part of the bit-field, e.g.
```
struct S2 { short a; int b : 16; };
```
Accessing `S2.b` would also access `S2.a`.
The AAPCS Release 2020Q2 (https://documentation-service.arm.com/static/5efb7fbedbdee951c1ccf186?token=), section 8.1 Data Types, page 36, "Volatile bit-fields - preserving number and width of container accesses", has been updated to avoid conflict with the C++ memory model. The note now reads:
```
This ABI does not place any restrictions on the access widths of bit-fields where the container overlaps with a non-bit-field member or where the container overlaps with any zero length bit-field placed between two other bit-fields. This is because the C/C++ memory model defines these as being separate memory locations, which can be accessed by two threads simultaneously. For this reason, compilers must be permitted to use a narrower memory access width (including splitting the access into multiple instructions) to avoid writing to a different memory location. For example, in struct S { int a:24; char b; }; a write to a must not also write to the location occupied by b, this requires at least two memory accesses in all current Arm architectures. In the same way, in struct S { int a:24; int:0; int b:8; };, writes to a or b must not overwrite each other.
```
Patch D16586 was updated to follow such behavior by verifying that we only change volatile bit-field access when:
- it won't overlap with any other non-bit-field member
- we only access memory inside the bounds of the record
- it avoids overlapping zero-length bit-fields.
Regarding the number of memory accesses (which should also be preserved), that will be implemented by D67399.
Differential Revision: https://reviews.llvm.org/D72932
The following people contributed to this patch:
- Diogo Sampaio
- Ties Stuij
|
Revision tags: llvmorg-11.0.0-rc2, llvmorg-11.0.0-rc1, llvmorg-12-init, llvmorg-10.0.1, llvmorg-10.0.1-rc4, llvmorg-10.0.1-rc3, llvmorg-10.0.1-rc2 |
|
#
3dcfd482 |
| 12-Jun-2020 |
Alex Bradbury <asb@lowrisc.org> |
[CodeGen] Increase applicability of ffine-grained-bitfield-accesses for targets with limited native integer widths
As pointed out in PR45708, -ffine-grained-bitfield-accesses doesn't trigger in all the cases you would expect for RISC-V. The logic in CGRecordLowering::accumulateBitFields checks that OffsetInRecord is a legal integer according to the datalayout. RISC targets will typically only have the native width as a legal integer type, so this check fails for an OffsetInRecord of 8 or 16 even though the transformation is still worthwhile.
This patch changes the logic to check for an OffsetInRecord of at least 1 byte that fits in a legal integer and is a power of 2. We would prefer to query whether native load/store operations are available, but I don't believe that is possible.
Differential Revision: https://reviews.llvm.org/D79155
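A simplified sketch of the described condition (the function name is assumed and the real accumulateBitFields logic differs in detail):
```
#include <cstdint>
#include "llvm/IR/DataLayout.h"
#include "llvm/Support/MathExtras.h"

// Simplified sketch: a bit-field run spanning OffsetInRecordBits is treated
// as a candidate when it is at least one byte, a power of 2 in size, and
// fits in a legal integer type for the target.
static bool isFineGrainedRunCandidate(const llvm::DataLayout &DL,
                                      uint64_t OffsetInRecordBits) {
  return OffsetInRecordBits >= 8 &&
         llvm::isPowerOf2_64(OffsetInRecordBits) &&
         DL.fitsInLegalInteger(OffsetInRecordBits);
}
```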
|