History log of /llvm-project/llvm/lib/CodeGen/StackProtector.cpp (Results 26 – 50 of 198)
Revision Date Author Comments
# 7f230fee 07-Mar-2022 serge-sans-paille <sguelton@redhat.com>

Cleanup codegen includes

after: 1061034926
before: 1063332844

Differential Revision: https://reviews.llvm.org/D121169


Revision tags: llvmorg-14.0.0-rc2, llvmorg-14.0.0-rc1, llvmorg-15-init, llvmorg-13.0.1, llvmorg-13.0.1-rc3, llvmorg-13.0.1-rc2
# dc9f65be 14-Dec-2021 John Brawn <john.brawn@arm.com>

[AArch64][SVE] Fix handling of stack protection with SVE

Fix a couple of things that were causing stack protection to not work
correctly in functions that have scalable vectors on the stack:
* Use TypeSize when determining if accesses to a variable are
considered out-of-bounds so that the behaviour is correct for
scalable vectors.
* When stack protection is enabled move the stack protector location
to the top of the SVE locals, so that any overflow in them (or the
other locals which are below that) will be detected.

Fixes: https://github.com/llvm/llvm-project/issues/51137

Differential Revision: https://reviews.llvm.org/D111631
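
A minimal source-level sketch of the scenario this fixes; the function and buffer names are invented, and it assumes a clang with SVE support (e.g. --target=aarch64-linux-gnu -march=armv8-a+sve -fstack-protector-strong):

    #include <arm_sve.h>
    #include <cstring>

    // buf is an ordinary protected local; v may be spilled into the SVE
    // (scalable) area of the frame. Before this fix the canary could sit
    // below the SVE locals, so an overflow into them went undetected.
    svint32_t sum_with_buf(const char *src, svint32_t v) {
      char buf[16];
      std::strcpy(buf, src); // potential overflow the canary should catch
      return svadd_n_s32_x(svptrue_b32(), v, buf[0]);
    }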



Revision tags: llvmorg-13.0.1-rc1
# 7ca14f60 18-Nov-2021 Kazu Hirata <kazu@google.com>

[llvm] Use range-based for loops (NFC)


# cfef1803 26-Sep-2021 Amara Emerson <amara@apple.com>

[GlobalISel] Port over the SelectionDAG stack protector codegen feature.

This is a port of the feature that allows the StackProtector pass to omit
checking code for stack canary checks, and rely on SelectionDAG to do it at a
later stage. The reasoning behind this seems to be to prevent the IR checking
instructions from hindering tail-call optimizations during codegen.

Here we allow GlobalISel to also use that scheme. Doing so requires that we
do some analysis using some factored-out code to determine where to generate
code for the epilogs.

Not every case is handled in this patch since we don't have support for all
targets that exercise different stack protector schemes.

Differential Revision: https://reviews.llvm.org/D98200



Revision tags: llvmorg-13.0.0, llvmorg-13.0.0-rc4
# 48719e3b 18-Sep-2021 Kazu Hirata <kazu@google.com>

[CodeGen] Use make_early_inc_range (NFC)
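
For context, a hedged sketch of the utility this commit adopts (the wrapper function is illustrative, not from the source):

    #include "llvm/ADT/STLExtras.h"
    #include "llvm/IR/BasicBlock.h"
    #include "llvm/Transforms/Utils/Local.h"

    // make_early_inc_range advances its iterator before each body runs, so
    // the body may erase the current instruction without invalidating the
    // traversal.
    void deleteTriviallyDead(llvm::BasicBlock &BB) {
      for (llvm::Instruction &I : llvm::make_early_inc_range(BB))
        if (llvm::isInstructionTriviallyDead(&I))
          I.eraseFromParent();
    }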


Revision tags: llvmorg-13.0.0-rc3, llvmorg-13.0.0-rc2, llvmorg-13.0.0-rc1, llvmorg-14-init, llvmorg-12.0.1, llvmorg-12.0.1-rc4, llvmorg-12.0.1-rc3, llvmorg-12.0.1-rc2, llvmorg-12.0.1-rc1
# 033138ea 21-May-2021 Nick Desaulniers <ndesaulniers@google.com>

[IR] make stack-protector-guard-* flags into module attrs

D88631 added initial support for:

- -mstack-protector-guard=
- -mstack-protector-guard-reg=
- -mstack-protector-guard-offset=

flags, and D100919 extended these to AArch64. Unfortunately, these flags
aren't retained for LTO. Make them module attributes rather than
TargetOptions.

Link: https://github.com/ClangBuiltLinux/linux/issues/1378

Reviewed By: tejohnson

Differential Revision: https://reviews.llvm.org/D102742



# 4824d876 20-Apr-2021 Philip Reames <listmail@philipreames.com>

Revert "Allow invokable sub-classes of IntrinsicInst"

This reverts commit d87b9b81ccb95217181ce75515c6c68bbb408ca4.

Post commit review raised concerns, reverting while discussion happens.


# d87b9b81 20-Apr-2021 Philip Reames <listmail@philipreames.com>

Allow invokable sub-classes of IntrinsicInst

It used to be that all of our intrinsics were call instructions, but over time, we've added more and more invokable intrinsics. According to the verifier, we're up to 8 right now. As IntrinsicInst is a sub-class of CallInst, this puts us in an awkward spot where the idiomatic means to check for intrinsic has a false negative if the intrinsic is invoked.

This change switches IntrinsicInst from being a sub-class of CallInst to being a subclass of CallBase. This allows invoked intrinsics to be instances of IntrinsicInst, at the cost of requiring a few more casts to CallInst in places where the intrinsic really is known to be a call, not an invoke.

After this lands and has baked for a couple of days, planned cleanups:
* Make GCStatepointInst an IntrinsicInst subclass.
* Merge intrinsic handling in InstCombine and use the idiomatic visitIntrinsicInst entry point for InstVisitor.
* Do the same in SelectionDAG.
* Do the same in FastISel.

Differential Revision: https://reviews.llvm.org/D99976
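
A short sketch of what the change enables (the predicate is illustrative): the idiomatic dyn_cast below now also matches intrinsics reached via invoke, not just call.

    #include "llvm/IR/IntrinsicInst.h"

    // Works for both call and invoke sites once IntrinsicInst derives from
    // CallBase rather than CallInst.
    static bool isMemset(const llvm::Instruction &I) {
      if (const auto *II = llvm::dyn_cast<llvm::IntrinsicInst>(&I))
        return II->getIntrinsicID() == llvm::Intrinsic::memset;
      return false;
    }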



Revision tags: llvmorg-12.0.0, llvmorg-12.0.0-rc5, llvmorg-12.0.0-rc4
# 5e3d9fcc 17-Mar-2021 Tim Northover <t.p.northover@gmail.com>

StackProtector: ensure protection does not interfere with tail call frame.

The IR stack protector pass must insert stack checks before the call instead of
between it and the return.

Similarly, the SDAG pass should recognize that ADJCALLFRAME instructions can
be part of the terminal sequence of a tail call. In that case, because such
call frames cannot be nested in LLVM, the stack protection code must skip
over the whole sequence (or risk clobbering argument registers).
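
A hedged source-level illustration (function names invented): with stack protection enabled, the canary check for caller below must be emitted before the call, since placing any code between the call and the return would force the call out of tail position.

    int callee(int);

    int caller(int x) {
      return callee(x + 1); // tail-callable only if nothing runs after the call
    }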



Revision tags: llvmorg-12.0.0-rc3, llvmorg-12.0.0-rc2
# 1cb47a06 08-Feb-2021 Hongtao Yu <hoy@fb.com>

[CSSPGO] Unblock optimizations with pseudo probe instrumentation.

The IR/MIR pseudo probe intrinsics don't get materialized into real machine instructions, so they incur no direct runtime cost. However, they carry an indirect cost by blocking certain optimizations. Some of that blocking is intentional (such as blocking code merge) for better counts quality, while the rest is accidental. This change unblocks perf-critical optimizations that do not affect counts quality. They include:

1. IR InstCombine, sinking load operations to shorten lifetimes.
2. MIR LiveRangeShrink, similar to #1.
3. MIR TwoAddressInstructionPass, i.e., the opeq transform.
4. MIR function argument copy elision.
5. IR stack protection (not perf-critical, but nice to have).

Reviewed By: wmi

Differential Revision: https://reviews.llvm.org/D95982



Revision tags: llvmorg-11.1.0, llvmorg-11.1.0-rc3, llvmorg-12.0.0-rc1
# 4de3bdd6 27-Jan-2021 Roman Lebedev <lebedev.ri@gmail.com>

[NFC] StackProtector: be consistent and initialize DominatorTreeWrapperPass

We already ask for it, so it might be good to ensure that it is
actually initialized before us. Doesn't seem to matter in practice, though.
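
For reference, a sketch of the legacy pass-manager idiom involved; the exact macro arguments here are illustrative rather than copied from the source:

    // INITIALIZE_PASS_DEPENDENCY between the BEGIN/END macros records that
    // initializing this pass should also initialize DominatorTreeWrapperPass.
    INITIALIZE_PASS_BEGIN(StackProtector, "stack-protector",
                          "Insert stack protectors", false, true)
    INITIALIZE_PASS_DEPENDENCY(DominatorTreeWrapperPass)
    INITIALIZE_PASS_END(StackProtector, "stack-protector",
                        "Insert stack protectors", false, true)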



Revision tags: llvmorg-13-init, llvmorg-11.1.0-rc2, llvmorg-11.1.0-rc1, llvmorg-11.0.1, llvmorg-11.0.1-rc2
# 8c4e5576 15-Dec-2020 Fangrui Song <i@maskray.me>

[docs][unittest][Go][StackProtector] Migrate deprecated DebugInfo::get to DILocation::get


# bc044a88 02-Dec-2020 Nick Desaulniers <ndesaulniers@google.com>

[Inline] prevent inlining on stack protector mismatch

Code that manipulates the stack via inline assembly, or that has to set up
its own stack canary (such as the Linux kernel), commonly wants to avoid
stack protectors in certain functions. We've been bitten by numerous bugs
where a callee with a stack protector is inlined into an
__attribute__((no_stack_protector)) caller, which generally breaks the
caller's assumptions about not having a stack protector. LTO exacerbates
the issue.

While developers can avoid this by putting all no_stack_protector functions
together in one translation unit and compiling it with -fno-stack-protector,
that is not as ergonomic as a function attribute and still doesn't work for
LTO. See also:
https://lore.kernel.org/linux-pm/20200915172658.1432732-1-rkir@google.com/
https://lore.kernel.org/lkml/20200918201436.2932360-30-samitolvanen@google.com/T/#u

SSP attributes can be ordered by strength. Weakest to strongest, they
are: ssp, sspstrong, sspreq. Callees with differing SSP attributes may be
inlined into each other, and the strongest attribute will be applied to the
caller. (No change)

After this change:
* A callee with no SSP attributes will no longer be inlined into a
caller with SSP attributes.
* The reverse is also true: a callee with an SSP attribute will not be
inlined into a caller with no SSP attributes.
* The alwaysinline attribute overrides these rules.

Functions that get synthesized by the compiler may not get inlined as a
result if they are not created with the same stack protector function
attribute as their callers.

Alternative approach to https://reviews.llvm.org/D87956.

Fixes pr/47479.

Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>

Reviewed By: rnk, MaskRay

Differential Revision: https://reviews.llvm.org/D91816
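
A hedged C-level illustration of the rule (names invented). Built with -fstack-protector-strong, leaf carries sspstrong while entry carries no SSP attribute, so the call is no longer inlined:

    static int leaf(int x) {          // gets sspstrong from the command line
      char buf[64];
      __builtin_snprintf(buf, sizeof buf, "%d", x);
      return buf[0];
    }

    __attribute__((no_stack_protector))
    int entry(int x) {
      return leaf(x);                 // SSP mismatch: kept as a call
    }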



Revision tags: llvmorg-11.0.1-rc1
# f4c6080a 18-Nov-2020 Nick Desaulniers <ndesaulniers@google.com>

Revert "[IR] add fn attr for no_stack_protector; prevent inlining on mismatch"

This reverts commit b7926ce6d7a83cdf70c68d82bc3389c04009b841.

Going with a simpler approach.


# b7926ce6 23-Oct-2020 Nick Desaulniers <ndesaulniers@google.com>

[IR] add fn attr for no_stack_protector; prevent inlining on mismatch

It's currently ambiguous in IR whether, for any given function, the source
language explicitly did not want a stack protector (in C, via the function
attribute no_stack_protector) or simply doesn't care.

Code that manipulates the stack via inline assembly, or that has to set up
its own stack canary (such as the Linux kernel), commonly wants to avoid
stack protectors in certain functions. We've been bitten by numerous bugs
where a callee with a stack protector is inlined into an
__attribute__((__no_stack_protector__)) caller, which generally breaks the
caller's assumptions about not having a stack protector. LTO exacerbates
the issue.

While developers can avoid this by putting all no_stack_protector functions
together in one translation unit and compiling it with -fno-stack-protector,
that is not as ergonomic as a function attribute and still doesn't work for
LTO. See also:
https://lore.kernel.org/linux-pm/20200915172658.1432732-1-rkir@google.com/
https://lore.kernel.org/lkml/20200918201436.2932360-30-samitolvanen@google.com/T/#u

Typically, when inlining a callee into a caller, the caller will be
upgraded in its level of stack protection (see adjustCallerSSPLevel()).
By adding an explicit attribute in the IR when the function attribute is
used in the source language, we can now identify such cases and prevent
inlining. Inlining is blocked when the callee and caller differ such that
one carries `nossp` while the other has `ssp`, `sspstrong`, or `sspreq`.

Fixes pr/47479.

Reviewed By: void

Differential Revision: https://reviews.llvm.org/D87956



# 7c3fea77 22-Oct-2020 Xiang1 Zhang <xiang1.zhang@intel.com>

[X86] Support customizing stack protector guard

Reviewed By: nickdesaulniers, MaskRay

Differential Revision: https://reviews.llvm.org/D88631


Revision tags: llvmorg-11.0.0, llvmorg-11.0.0-rc6, llvmorg-11.0.0-rc5, llvmorg-11.0.0-rc4, llvmorg-11.0.0-rc3
# 2878ecc9 03-Sep-2020 Amara Emerson <amara@apple.com>

[StackProtector] Fix crash with vararg due to not checking LocationSize validity.

Differential Revision: https://reviews.llvm.org/D87074


Revision tags: llvmorg-11.0.0-rc2, llvmorg-11.0.0-rc1
# df880b77 27-Jul-2020 Nadav Rotem <nadav256@gmail.com>

[StackProtector] Speed up RequiresStackProtector

Speed up the method RequiresStackProtector by checking the intrinsic ID of
the call. The original code called getName(), which returns an allocating
std::string, on each check. This change removes about 96072 std::string
instances when compiling sqlite3.c; the hot function was discovered with a
Facebook-internal performance tool.

Differential Revision: https://reviews.llvm.org/D84620
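
The shape of the fix, sketched (the specific intrinsics shown are illustrative): an integer ID comparison replaces a per-check name lookup.

    #include "llvm/IR/Function.h"
    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/Intrinsics.h"

    // A cheap integer compare; no string is materialized per call site.
    static bool isLifetimeMarker(const llvm::CallInst *CI) {
      if (const llvm::Function *F = CI->getCalledFunction())
        return F->getIntrinsicID() == llvm::Intrinsic::lifetime_start ||
               F->getIntrinsicID() == llvm::Intrinsic::lifetime_end;
      return false;
    }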



Revision tags: llvmorg-12-init, llvmorg-10.0.1, llvmorg-10.0.1-rc4, llvmorg-10.0.1-rc3, llvmorg-10.0.1-rc2, llvmorg-10.0.1-rc1, llvmorg-10.0.0, llvmorg-10.0.0-rc6, llvmorg-10.0.0-rc5, llvmorg-10.0.0-rc4
# c0936831 05-Mar-2020 John Brawn <john.brawn@arm.com>

[StackProtector] Catch direct out-of-bounds when checking address-takenness

With -fstack-protector-strong we check if a non-array variable has its address
taken in a way that could cause a potential out-of-bounds access. However what
we don't catch is when the address is directly used to create an out-of-bounds
memory access.

Fix this by examining the offsets of GEPs that are ultimately derived from
allocas and checking if the resulting address is out-of-bounds, and by checking
that any memory operations using such addresses are not over-large.

Fixes PR43478.

Differential revision: https://reviews.llvm.org/D75695
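
A hedged example of the newly caught pattern (deliberately invalid code): the address never escapes, yet the access derived from the alloca is out of bounds.

    int f(void) {
      int x = 0;
      int *p = &x;
      return p[10]; // direct out-of-bounds load; now triggers ssp-strong
    }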



Revision tags: llvmorg-10.0.0-rc3, llvmorg-10.0.0-rc2, llvmorg-10.0.0-rc1, llvmorg-11-init, llvmorg-9.0.1, llvmorg-9.0.1-rc3, llvmorg-9.0.1-rc2, llvmorg-9.0.1-rc1
# 05da2fe5 13-Nov-2019 Reid Kleckner <rnk@google.com>

Sink all InitializePasses.h includes

This file lists every pass in LLVM, and is included by Pass.h, which is
very popular. Every time we add, remove, or rename a pass in LLVM, it
causes lots of recompilation.

I found this fact by looking at this table, which is sorted by the
number of times a file was changed over the last 100,000 git commits
multiplied by the number of object files that depend on it in the
current checkout:
recompiles  touches  affected_files  header
    342380       95            3604  llvm/include/llvm/ADT/STLExtras.h
    314730      234            1345  llvm/include/llvm/InitializePasses.h
    307036      118            2602  llvm/include/llvm/ADT/APInt.h
    213049       59            3611  llvm/include/llvm/Support/MathExtras.h
    170422       47            3626  llvm/include/llvm/Support/Compiler.h
    162225       45            3605  llvm/include/llvm/ADT/Optional.h
    158319       63            2513  llvm/include/llvm/ADT/Triple.h
    140322       39            3598  llvm/include/llvm/ADT/StringRef.h
    137647       59            2333  llvm/include/llvm/Support/Error.h
    131619       73            1803  llvm/include/llvm/Support/FileSystem.h

Before this change, touching InitializePasses.h would cause 1345 files
to recompile. After this change, touching it only causes 550 compiles in
an incremental rebuild.

Reviewers: bkramer, asbirlea, bollu, jdoerfert

Differential Revision: https://reviews.llvm.org/D70211



# ed1f3f36 30-Sep-2019 Paul Robinson <paul.robinson@sony.com>

[SSP] [3/3] cmpxchg and addrspacecast instructions can now
trigger stack protectors. Fixes PR42238.

Add test coverage for llvm.memset, as proxy for all llvm.mem*
intrinsics. There are two issues here: (1) they could be lowered to a
libc call, which could be intercepted, and do Bad Stuff; (2) with a
non-constant size, they could overwrite the current stack frame.

The test was mostly written by Matt Arsenault in r363169, which was
later reverted; I tweaked what he had and added the llvm.memset part.

Differential Revision: https://reviews.llvm.org/D67845

llvm-svn: 373220
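
A hedged C-level sketch of one newly covered case: the compare-exchange below lowers to an IR cmpxchg on a stack slot, which now marks the frame as needing a protector.

    int f(void) {
      int guard = 0, expected = 0;
      __atomic_compare_exchange_n(&guard, &expected, 1, /*weak=*/0,
                                  __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
      return guard;
    }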



# 527815f5 30-Sep-2019 Paul Robinson <paul.robinson@sony.com>

[SSP] [2/3] Refactor an if/dyn_cast chain to switch on opcode. NFC

Differential Revision: https://reviews.llvm.org/D67844

llvm-svn: 373219


# 14945186 30-Sep-2019 Paul Robinson <paul.robinson@sony.com>

[SSP] [1/3] Revert "StackProtector: Use PointerMayBeCaptured"
"Captured" and "relevant to Stack Protector" are not the same thing.

This reverts commit f29366b1f594f48465c5a2754bcffac6d70fd0b1.
aka r363169.

Differential Revision: https://reviews.llvm.org/D67842

llvm-svn: 373216



Revision tags: llvmorg-9.0.0, llvmorg-9.0.0-rc6, llvmorg-9.0.0-rc5, llvmorg-9.0.0-rc4, llvmorg-9.0.0-rc3, llvmorg-9.0.0-rc2, llvmorg-9.0.0-rc1, llvmorg-10-init, llvmorg-8.0.1, llvmorg-8.0.1-rc4, llvmorg-8.0.1-rc3
# f29366b1 12-Jun-2019 Matt Arsenault <Matthew.Arsenault@amd.com>

StackProtector: Use PointerMayBeCaptured

This was using its own, outdated list of possible captures. This was
at minimum not catching cmpxchg and addrspacecast captures.

One change is that any volatile access is now treated as capturing. The
test coverage for this pass is quite inadequate, but this change required
removing volatile in the lifetime capture test.

Also fixes some infrastructure issues to allow running just the IR
pass.

Fixes bug 42238.

llvm-svn: 363169
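
Roughly the substitution this commit makes, sketched against the LLVM API of that era (the wrapper is illustrative):

    #include "llvm/Analysis/CaptureTracking.h"
    #include "llvm/IR/Instructions.h"

    // One generic capture query instead of a hand-maintained list of
    // capturing instruction kinds.
    static bool mayEscape(const llvm::AllocaInst *AI) {
      return llvm::PointerMayBeCaptured(AI, /*ReturnCaptures=*/true,
                                        /*StoreCaptures=*/true);
    }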



Revision tags: llvmorg-8.0.1-rc2, llvmorg-8.0.1-rc1
# 2c5c12c0 05-Apr-2019 Fangrui Song <maskray@google.com>

Change some dyn_cast to more appropriate isa. NFC

llvm-svn: 357773

