History log of /llvm-project/llvm/lib/CodeGen/StackProtector.cpp (Results 76 – 100 of 198)
Revision Date Author Comments
# b050c7fb 11-Apr-2017 Diana Picus <diana.picus@linaro.org>

Revert "Turn some C-style vararg into variadic templates"

This reverts commit r299925 because it broke the buildbots. See e.g.
http://lab.llvm.org:8011/builders/clang-cmake-armv7-a15/builds/6008

ll

Revert "Turn some C-style vararg into variadic templates"

This reverts commit r299925 because it broke the buildbots. See e.g.
http://lab.llvm.org:8011/builders/clang-cmake-armv7-a15/builds/6008

llvm-svn: 299928


# 5fd75fb7 11-Apr-2017 Serge Guelton <sguelton@quarkslab.com>

Turn some C-style vararg into variadic templates

Module::getOrInsertFunction is using C-style vararg instead of
variadic templates.

From a user perspective, it forces the use of an annoying nullptr
to mark the end of the vararg, and there's no type checking on the
arguments. The variadic template is an obvious solution to both
issues.

llvm-svn: 299925
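
To make the difference concrete, here is a minimal sketch of a caller before and after the change, assuming the LLVM C++ API of that era; the callee name and helper function are illustrative, not taken from the patch.

```
// Hedged sketch: declaring a runtime function via Module::getOrInsertFunction.
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"

using namespace llvm;

static void declareSecurityCheckCookie(Module &M) {
  LLVMContext &Ctx = M.getContext();

  // Before: the C-style vararg form required a trailing nullptr sentinel and
  // did no compile-time checking of the argument types:
  //   M.getOrInsertFunction("__security_check_cookie", Type::getVoidTy(Ctx),
  //                         Type::getInt8PtrTy(Ctx), nullptr);

  // After: the variadic-template form drops the sentinel, and every extra
  // argument must be a Type *, which the compiler enforces.
  M.getOrInsertFunction("__security_check_cookie", Type::getVoidTy(Ctx),
                        Type::getInt8PtrTy(Ctx));
}
```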


# db11fdfd 06-Apr-2017 Mehdi Amini <mehdi.amini@apple.com>

Revert "Turn some C-style vararg into variadic templates"

This reverts commit r299699; the examples need to be updated.

llvm-svn: 299702


# 579540a8 06-Apr-2017 Mehdi Amini <mehdi.amini@apple.com>

Turn some C-style vararg into variadic templates

Module::getOrInsertFunction is using C-style vararg instead of
variadic templates.

From a user perspective, it forces the use of an annoying nullptr
to mark the end of the vararg, and there's no type checking on the
arguments. The variadic template is an obvious solution to both
issues.

Patch by: Serge Guelton <serge.guelton@telecom-bretagne.eu>

Differential Revision: https://reviews.llvm.org/D31070

llvm-svn: 299699


# 5361b82d 09-Mar-2017 Adam Nemet <anemet@apple.com>

[SSP] In opt remarks, stream Function directly

With this, it shows up as an attribute in YAML and non-printable characters
are properly removed by GlobalValue::getRealLinkageName.

llvm-svn: 297362


Revision tags: llvmorg-4.0.0, llvmorg-4.0.0-rc4, llvmorg-4.0.0-rc3
# 51599687 28-Feb-2017 David Bozier <seifsta@gmail.com>

[Stack Protection] Add diagnostic information for why stack protection was applied to a function

Stack Smash Protection is not completely free, so in hot code the overhead it introduces can cause performance issues. By adding diagnostic information for which functions have SSP and why, a user can quickly determine what they can do to stop SSP being applied to a specific hot function.

This change adds a remark that is reported by the stack protection code when an instruction or attribute is encountered that causes SSP to be applied.

Patch by: James Henderson

Differential Revision: https://reviews.llvm.org/D29023

llvm-svn: 296483
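
As a hedged illustration (not taken from the patch), this is the kind of source the remark points at: a local character buffer is one of the triggers that causes SSP to be applied under -fstack-protector-strong.

```
// Example only: the stack character buffer makes this function get a canary,
// and the new remark would report the triggering instruction/attribute.
#include <cstring>

int copy_first(const char *Src) {
  char Buf[16];                        // stack buffer -> SSP instruments this
  std::strncpy(Buf, Src, sizeof(Buf) - 1);
  Buf[sizeof(Buf) - 1] = '\0';
  return Buf[0];
}
```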


# db56e5a8 22-Feb-2017 Eugene Zelenko <eugene.zelenko@gmail.com>

[CodeGen] Fix some Clang-tidy modernize and Include What You Use warnings; other minor fixes (NFC).

llvm-svn: 295893


# 93e773e9 09-Feb-2017 David Bozier <seifsta@gmail.com>

Revert: "[Stack Protection] Add diagnostic information for why stack protection was applied to a function"

This reverts revision r294590 as it broke some buildbots.

llvm-svn: 294593


# 6a44b7c2 09-Feb-2017 David Bozier <seifsta@gmail.com>

[Stack Protection] Add diagnostic information for why stack protection was applied to a function

Stack Smash Protection is not completely free, so in hot code the overhead it introduces can cause performance issues. By adding diagnostic information for which functions have SSP and why, a user can quickly determine what they can do to stop SSP being applied to a specific hot function.

This change adds an SSP-specific DiagnosticInfo class and uses of it to the Stack Protection code. A subsequent change to clang will cause the remarks to be emitted when enabled.

Patch by: James Henderson

Differential Revision: https://reviews.llvm.org/D29023

llvm-svn: 294590


Revision tags: llvmorg-4.0.0-rc2, llvmorg-4.0.0-rc1, llvmorg-3.9.1, llvmorg-3.9.1-rc3, llvmorg-3.9.1-rc2, llvmorg-3.9.1-rc1
# b44b6d98 20-Sep-2016 George Burgess IV <george.burgess.iv@gmail.com>

[CodeGen] stop short-circuiting the SSP code for sspstrong.

This check caused us to skip adding layout information for calls to
alloca in sspreq/sspstrong mode. We check properly for sspstrong later
on (and add the correct layout info when doing so), so removing this
shouldn't hurt.

No test is included, since testing this using lit seems to require
checking for exact offsets in asm, which is something that the lit tests
for this avoid. If someone cares deeply, I'm happy to write a unittest
or something to cover this, but that feels like overkill.

Patch by Daniel Micay.

Differential Revision: https://reviews.llvm.org/D22714

llvm-svn: 282022


# 0a020f0f 14-Sep-2016 Silviu Baranga <silviu.baranga@arm.com>

[StackProtector] Use INITIALIZE_TM_PASS instead of INITIALIZE_PASS
in order to make sure that its TargetMachine constructor is
registered.

This allows us to run the PEI machine pass with MIR input
(see PR30324).

llvm-svn: 281474


Revision tags: llvmorg-3.9.0, llvmorg-3.9.0-rc3, llvmorg-3.9.0-rc2, llvmorg-3.9.0-rc1
# b386955a 30-Jun-2016 Yunzhong Gao <Yunzhong.Gao@sony.com>

Add an artificial line-0 debug location when the compiler emits a call to
__stack_chk_fail(). This avoids a compiler crash.

Differential Revision: http://reviews.llvm.org/D21818

llvm-svn: 274263
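
A hedged sketch of the general technique, assuming the debug-info API of that era; the helper and parameter names are illustrative, not the patch's actual code.

```
// Attach an artificial line-0 location scoped to the enclosing subprogram so
// the __stack_chk_fail() call neither inherits a stale DebugLoc nor lacks one.
#include "llvm/IR/DebugInfoMetadata.h"
#include "llvm/IR/DebugLoc.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

static void emitStackChkFail(Function &F, IRBuilder<> &B,
                             Function *StackChkFail) {
  if (DISubprogram *SP = F.getSubprogram())
    B.SetCurrentDebugLocation(DebugLoc::get(/*Line=*/0, /*Col=*/0, SP));
  B.CreateCall(StackChkFail);
}
```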


# db6bd021 30-Jun-2016 Rafael Espindola <rafael.espindola@gmail.com>

Delete unused includes. NFC.

llvm-svn: 274225


# 22bfa832 07-Jun-2016 Etienne Bergeron <etienneb@google.com>

[stack-protection] Add support for MSVC buffer security check

Summary:
This patch is adding support for the MSVC buffer security check implementation

The buffer security check is turned on with the '/GS' compiler switch.
* https://msdn.microsoft.com/en-us/library/8dbf701c.aspx
* To be added to clang here: http://reviews.llvm.org/D20347

Some overview of buffer security check feature and implementation:
* https://msdn.microsoft.com/en-us/library/aa290051(VS.71).aspx
* http://www.ksyash.com/2011/01/buffer-overflow-protection-3/
* http://blog.osom.info/2012/02/understanding-vs-c-compilers-buffer.html


For the following example:
```
int example(int offset, int index) {
char buffer[10];
memset(buffer, 0xCC, index);
return buffer[index];
}
```

The MSVC compiler adds these instructions to perform the stack integrity check:
```
push ebp
mov ebp,esp
sub esp,50h
[1] mov eax,dword ptr [__security_cookie (01068024h)]
[2] xor eax,ebp
[3] mov dword ptr [ebp-4],eax
push ebx
push esi
push edi
mov eax,dword ptr [index]
push eax
push 0CCh
lea ecx,[buffer]
push ecx
call _memset (010610B9h)
add esp,0Ch
mov eax,dword ptr [index]
movsx eax,byte ptr buffer[eax]
pop edi
pop esi
pop ebx
[4] mov ecx,dword ptr [ebp-4]
[5] xor ecx,ebp
[6] call @__security_check_cookie@4 (01061276h)
mov esp,ebp
pop ebp
ret
```

The instrumentation above is:
* [1] loads the global security canary,
* [3] stores the locally computed ([2]) canary to the guard slot,
* [4] loads the guard slot and ([5]) re-computes the global canary,
* [6] validates the resulting canary with '__security_check_cookie' and performs error handling.

Overview of the current stack-protection implementation:
* lib/CodeGen/StackProtector.cpp
* There is a default stack-protection implementation applied on intermediate representation.
* The target can overload 'getIRStackGuard' method if it has a standard location for the stack protector cookie.
* An intrinsic 'Intrinsic::stackprotector' is added to the prologue. It will be expanded by the instruction selection pass (DAG or Fast).
* Basic blocks are added to every instrumented function to receive the code that handles stack guard validation and error handling.
* Guard manipulation and comparison are added directly to the intermediate representation.

* lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
* lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
* There is an implementation that adds instrumentation during instruction selection (for better handling of sibling calls).
* see long comment above 'class StackProtectorDescriptor' declaration.
* The target needs to override 'getSDagStackGuard' to activate SDAG stack protection generation. (note: getIRStackGuard MUST be nullptr).
* 'getSDagStackGuard' returns the appropriate stack guard (security cookie)
* The code is generated by 'SelectionDAGBuilder.cpp' and 'SelectionDAGISel.cpp'.

* include/llvm/Target/TargetLowering.h
* Contains the function to retrieve the default Guard 'Value'; should be overridden by each target to select which implementation is used and to provide the Guard 'Value'.

* lib/Target/X86/X86ISelLowering.cpp
* Contains the x86 specialisation; Guard 'Value' used by the SelectionDAG algorithm.

Function-based Instrumentation:
* MSVC doesn't inline the stack guard comparison in every function. Instead, a call to '__security_check_cookie' is added to the epilogue before every return instruction.
* To support function-based instrumentation, this patch is
* adding a function to get the function-based check (llvm 'Value', see include/llvm/Target/TargetLowering.h),
* If provided, the stack protection instrumentation won't be inlined and a call to that function will be added to the prologue.
* modifying SelectionDAGISel.cpp to avoid producing the basic blocks used for inline instrumentation,
* generating the function-based instrumentation during the ISEL pass (SelectionDAGBuilder.cpp),
* if FastISel is used (not SelectionDAG), falling back to the same function-based check implemented over the intermediate representation (StackProtector.cpp).

Modifications
* adding support for MSVC (lib/Target/X86/X86ISelLowering.cpp)
* adding support for function-based instrumentation (lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp, .h)

Results

* IR generated instrumentation:
```
clang-cl /GS test.cc /Od /c -mllvm -print-isel-input
```

```
*** Final LLVM Code input to ISel ***

; Function Attrs: nounwind sspstrong
define i32 @"\01?example@@YAHHH@Z"(i32 %offset, i32 %index) #0 {
entry:
%StackGuardSlot = alloca i8* <<<-- Allocated guard slot
%0 = call i8* @llvm.stackguard() <<<-- Loading Stack Guard value
call void @llvm.stackprotector(i8* %0, i8** %StackGuardSlot) <<<-- Prologue intrinsic call (store to Guard slot)
%index.addr = alloca i32, align 4
%offset.addr = alloca i32, align 4
%buffer = alloca [10 x i8], align 1
store i32 %index, i32* %index.addr, align 4
store i32 %offset, i32* %offset.addr, align 4
%arraydecay = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 0
%1 = load i32, i32* %index.addr, align 4
call void @llvm.memset.p0i8.i32(i8* %arraydecay, i8 -52, i32 %1, i32 1, i1 false)
%2 = load i32, i32* %index.addr, align 4
%arrayidx = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 %2
%3 = load i8, i8* %arrayidx, align 1
%conv = sext i8 %3 to i32
%4 = load volatile i8*, i8** %StackGuardSlot <<<-- Loading Guard slot
call void @__security_check_cookie(i8* %4) <<<-- Epilogue function-based check
ret i32 %conv
}
```

* SelectionDAG generated instrumentation:

```
clang-cl /GS test.cc /O1 /c /FA
```

```
"?example@@YAHHH@Z": # @"\01?example@@YAHHH@Z"
# BB#0: # %entry
pushl %esi
subl $16, %esp
movl ___security_cookie, %eax <<<-- Loading Stack Guard value
movl 28(%esp), %esi
movl %eax, 12(%esp) <<<-- Store to Guard slot
leal 2(%esp), %eax
pushl %esi
pushl $204
pushl %eax
calll _memset
addl $12, %esp
movsbl 2(%esp,%esi), %esi
movl 12(%esp), %ecx <<<-- Loading Guard slot
calll @__security_check_cookie@4 <<<-- Epilogue function-based check
movl %esi, %eax
addl $16, %esp
popl %esi
retl
```

Reviewers: kcc, pcc, eugenis, rnk

Subscribers: majnemer, llvm-commits, hans, thakis, rnk

Differential Revision: http://reviews.llvm.org/D20346

llvm-svn: 272053


Revision tags: llvmorg-3.8.1, llvmorg-3.8.1-rc1
# e885d5e4 19-Apr-2016 Tim Shen <timshen91@gmail.com>

[SSP, 2/2] Create llvm.stackguard() intrinsic and lower it to LOAD_STACK_GUARD

With this change, ideally an IR pass can always generate an llvm.stackguard
call to get the stack guard; but for now there are still IR-form stack
guard customizations around (see getIRStackGuard()). Future SSP
customization should go through LOAD_STACK_GUARD.

There is a behavior change: stack guard values are no longer CSE'd,
since we should never reuse the value in case it has been spilled (and
corrupted). See ssp-guard-spill.ll. This also causes changes in stack
size and codegen in the X86 and AArch64 test cases.

Ideally we'd like to know if the guard created in llvm.stackprotector() gets
spilled or not. If the value is spilled, discard the value and reload the
stack guard; otherwise reuse the value. This can be done by teaching the
register allocator how to rematerialize LOAD_STACK_GUARD and force a
rematerialization (which seems hard), or by checking for spilling in
expandPostRAPseudo. It only makes sense when the stack guard is a global
variable, which requires more instructions to load. Anyway, this seems to be
out of scope for the current patch.

llvm-svn: 266806
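
A minimal sketch, assuming the LLVM C++ API of that era (the helper name is hypothetical), of how an IR pass obtains the guard through the new intrinsic instead of loading __stack_chk_guard directly.

```
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/Module.h"

using namespace llvm;

static Value *emitStackGuardLoad(Module &M, IRBuilder<> &B) {
  // llvm.stackguard takes no operands and returns the current guard value;
  // targets lower it to LOAD_STACK_GUARD (or an equivalent hook), and the
  // result is deliberately not CSE'd, so a spilled copy is never reused.
  Function *GuardFn = Intrinsic::getDeclaration(&M, Intrinsic::stackguard);
  return B.CreateCall(GuardFn);
}
```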


# f17120a8 11-Apr-2016 Evgeniy Stepanov <eugeni.stepanov@gmail.com>

[safestack] Add canary to unsafe stack frames

Add StackProtector to SafeStack. This adds limited protection against
data corruption in the caller frame. The current implementation treats
all stack protector levels as -fstack-protector-all.

llvm-svn: 266004


# 00127564 08-Apr-2016 Tim Shen <timshen91@gmail.com>

[SSP] Remove llvm.stackprotectorcheck.

This is a cleanup patch for SSP support in LLVM. There is no functional change.
llvm.stackprotectorcheck is not needed, because SelectionDAG isn't
actually lowering it in SelectBasicBlock; rather, it adds check code in
FinishBasicBlock, ignoring the position where the intrinsic is inserted
(See FindSplitPointForStackProtector()).

llvm-svn: 265851


# dde29e27 05-Apr-2016 Evgeniy Stepanov <eugeni.stepanov@gmail.com>

Faster stack-protector for Android/AArch64.

Bionic has a defined thread-local location for the stack protector
cookie. Emit a direct load instead of going through __stack_chk_guard.

llvm-svn: 265481


Revision tags: llvmorg-3.8.0, llvmorg-3.8.0-rc3, llvmorg-3.8.0-rc2, llvmorg-3.8.0-rc1
# e93b8e15 22-Dec-2015 Cong Hou <congh@google.com>

[BPI] Replace weights by probabilities in BPI.

This patch removes all weight-related interfaces from BPI and replaces
them with probability versions. With this patch, we won't use edge weights
anymore in either IR or MC passes. Edge probability is a better
representation in terms of CFG update and validation.


Differential revision: http://reviews.llvm.org/D15519

llvm-svn: 256263
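
A hedged sketch of the probability-based interface this moves clients onto; the helper and the 1-in-1048576 figure are illustrative, not from the patch.

```
#include "llvm/Support/BranchProbability.h"

using namespace llvm;

// A stack-protector check passes almost always, so the failing edge can be
// modeled with a tiny probability instead of a raw edge weight.
static BranchProbability guardFailureProbability() {
  return BranchProbability::getBranchProbability(1, 1048576);
}
```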


Revision tags: llvmorg-3.7.1, llvmorg-3.7.1-rc2, llvmorg-3.7.1-rc1
# 84921b98 24-Oct-2015 Rafael Espindola <rafael.espindola@gmail.com>

Refactor: Simplify boolean conditional return statements in lib/CodeGen.

Patch by Richard.

llvm-svn: 251213


# f1ff53ec 09-Oct-2015 Duncan P. N. Exon Smith <dexonsmith@apple.com>

CodeGen: Remove implicit ilist iterator conversions, NFC

Finish removing implicit ilist iterator conversions from LLVMCodeGen.
I'm sure there are lots more of these in lib/CodeGen/*/.

llvm-svn: 249915


Revision tags: llvmorg-3.7.0, llvmorg-3.7.0-rc4, llvmorg-3.7.0-rc3, studio-1.4, llvmorg-3.7.0-rc2, llvmorg-3.7.0-rc1
# ed6edbf1 07-Jul-2015 Mehdi Amini <mehdi.amini@apple.com>

Redirect DataLayout from TargetMachine to Module in StackProtector

Summary:
This change is part of a series of commits dedicated to having a single
DataLayout during compilation by always using the one owned by the
module.

Reviewers: echristo

Subscribers: llvm-commits, rafael, yaron.keren

Differential Revision: http://reviews.llvm.org/D11010

From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 241646
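
A minimal sketch of the direction of the change (illustrative, not the actual diff): query the DataLayout owned by the Module rather than asking the TargetMachine for it.

```
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"

using namespace llvm;

static unsigned pointerSizeInBits(const Function &F) {
  // The Module owns the single DataLayout used during compilation.
  const DataLayout &DL = F.getParent()->getDataLayout();
  return DL.getPointerSizeInBits();
}
```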


Revision tags: llvmorg-3.6.2, llvmorg-3.6.2-rc1
# ff6409d0 18-May-2015 David Blaikie <dblaikie@gmail.com>

Simplify IRBuilder::CreateCall* by using ArrayRef+initializer_list/braced init only

llvm-svn: 237624
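
A hedged sketch of the braced-init CreateCall style, using the stackprotector intrinsic as an assumed example rather than the commit's actual call sites.

```
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/Module.h"

using namespace llvm;

static CallInst *emitStackProtectorCall(Module &M, IRBuilder<> &B,
                                        Value *Guard, Value *Slot) {
  Function *SP = Intrinsic::getDeclaration(&M, Intrinsic::stackprotector);
  // Arguments go in a braced initializer list (an ArrayRef<Value *>) rather
  // than through a family of fixed-arity CreateCall overloads.
  return B.CreateCall(SP, {Guard, Slot});
}
```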


Revision tags: llvmorg-3.6.1, llvmorg-3.6.1-rc1, llvmorg-3.5.2, llvmorg-3.5.2-rc1, llvmorg-3.6.0, llvmorg-3.6.0-rc4
# 70eb9c5a 14-Feb-2015 Duncan P. N. Exon Smith <dexonsmith@apple.com>

CodeGen: Canonicalize access to function attributes, NFC

Canonicalize access to function attributes to use the simpler API.

getAttributes().getAttribute(AttributeSet::FunctionIndex, Kind)
=> getFnAttribute(Kind)

getAttributes().hasAttribute(AttributeSet::FunctionIndex, Kind)
=> hasFnAttribute(Kind)

Also, add `Function::getFnStackAlignment()`, and canonicalize:

getAttributes().getStackAlignment(AttributeSet::FunctionIndex)
=> getFnStackAlignment()

llvm-svn: 229208
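
A minimal sketch of the canonicalized query, assuming a Function reference is at hand; the attribute chosen is just an example.

```
#include "llvm/IR/Attributes.h"
#include "llvm/IR/Function.h"

using namespace llvm;

static bool needsStrongProtector(const Function &F) {
  // Before: F.getAttributes().hasAttribute(AttributeSet::FunctionIndex,
  //                                        Attribute::StackProtectStrong)
  return F.hasFnAttribute(Attribute::StackProtectStrong);
}
```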


Revision tags: llvmorg-3.6.0-rc3, llvmorg-3.6.0-rc2
# 33726206 27-Jan-2015 Eric Christopher <echristo@gmail.com>

Replace some uses of getSubtargetImpl with the cached version
off of the MachineFunction or with the version that takes a
Function reference as an argument.

llvm-svn: 227185

