//===- MemorySanitizer.cpp - detector of uninitialized reads --------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
/// \file
/// This file is a part of MemorySanitizer, a detector of uninitialized
/// reads.
///
/// The algorithm of the tool is similar to Memcheck
/// (http://goo.gl/QKbem). We associate a few shadow bits with every
/// byte of the application memory, poison the shadow of the malloc-ed
/// or alloca-ed memory, load the shadow bits on every memory read,
/// propagate the shadow bits through some of the arithmetic
/// instructions (including MOV), store the shadow bits on every memory
/// write, and report a bug on some other instructions (e.g. JMP) if the
/// associated shadow is poisoned.
///
/// But there are differences too. The first and the major one:
/// compiler instrumentation instead of binary instrumentation. This
/// gives us much better register allocation, possible compiler
/// optimizations and a fast start-up. But this brings the major issue
/// as well: msan needs to see all program events, including system
/// calls and reads/writes in system libraries, so we either need to
/// compile *everything* with msan or use a binary translation
/// component (e.g. DynamoRIO) to instrument pre-built libraries.
/// Another difference from Memcheck is that we use 8 shadow bits per
/// byte of application memory and use a direct shadow mapping. This
/// greatly simplifies the instrumentation code and avoids races on
/// shadow updates (Memcheck is single-threaded so races are not a
/// concern there. Memcheck uses 2 shadow bits per byte with a slow
/// path storage that uses 8 bits per byte).
///
/// The default value of shadow is 0, which means "clean" (not poisoned).
///
/// Every module initializer should call __msan_init to ensure that the
/// shadow memory is ready. On error, __msan_warning is called. Since
/// parameters and return values may be passed via registers, we have a
/// specialized thread-local shadow for return values
/// (__msan_retval_tls) and parameters (__msan_param_tls).
///
/// Origin tracking.
///
/// MemorySanitizer can track origins (allocation points) of all uninitialized
/// values. This behavior is controlled with a flag (msan-track-origins) and is
/// disabled by default.
///
/// Origins are 4-byte values created and interpreted by the runtime library.
/// They are stored in a second shadow mapping, one 4-byte value for 4 bytes
/// of application memory. Propagation of origins is basically a bunch of
/// "select" instructions that pick the origin of a dirty argument, if an
/// instruction has one.
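///
/// For example (an illustrative sketch, not the exact emitted IR), for
/// "%c = add %a, %b" origin propagation amounts to
///   origin(c) = (shadow(b) != 0) ? origin(b) : origin(a)
/// so the origin of a dirty operand is the one that gets reported.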
///
/// Every 4 aligned, consecutive bytes of application memory have one origin
/// value associated with them. If these bytes contain uninitialized data
/// coming from 2 different allocations, the last store wins. Because of this,
/// MemorySanitizer reports can show unrelated origins, but this is unlikely
/// in practice.
///
/// Origins are meaningless for fully initialized values, so MemorySanitizer
/// avoids storing origin to memory when a fully initialized value is stored.
/// This way it avoids needless overwriting origin of the 4-byte region on
/// a short (i.e. 1 byte) clean store, and it is also good for performance.
///
/// Atomic handling.
///
/// Ideally, every atomic store of an application value should update the
/// corresponding shadow location in an atomic way. Unfortunately, atomic store
/// of two disjoint locations can not be done without severe slowdown.
///
/// Therefore, we implement an approximation that may err on the safe side.
/// In this implementation, every atomically accessed location in the program
/// may only change from (partially) uninitialized to fully initialized, but
/// not the other way around. We load the shadow _after_ the application load,
/// and we store the shadow _before_ the app store. Also, we always store clean
/// shadow (if the application store is atomic). This way, if the store-load
/// pair constitutes a happens-before arc, shadow store and load are correctly
/// ordered such that the load will get either the value that was stored, or
/// some later value (which is always clean).
///
/// This does not work very well with Compare-And-Swap (CAS) and
/// Read-Modify-Write (RMW) operations. To follow the above logic, CAS and RMW
/// must store the new shadow before the app operation, and load the shadow
/// after the app operation. Computers don't work this way. The current
/// implementation ignores the load aspect of CAS/RMW, always returning a clean
/// value. It implements the store part as a simple atomic store by storing a
/// clean shadow.
///
/// Instrumenting inline assembly.
///
/// For inline assembly code LLVM has little idea about which memory locations
/// become initialized depending on the arguments. It can be possible to figure
/// out which arguments are meant to point to inputs and outputs, but the
/// actual semantics can be only visible at runtime. In the Linux kernel it's
/// also possible that the arguments only indicate the offset for a base taken
/// from a segment register, so it's dangerous to treat any asm() arguments as
/// pointers. We take a conservative approach generating calls to
///   __msan_instrument_asm_store(ptr, size)
/// which defer the memory unpoisoning to the runtime library.
/// The latter can perform more complex address checks to figure out whether
/// it's safe to touch the shadow memory.
/// Like with atomic operations, we call __msan_instrument_asm_store() before
/// the assembly call, so that changes to the shadow memory will be seen by
/// other threads together with main memory initialization.
///
/// KernelMemorySanitizer (KMSAN) implementation.
///
/// The major differences between KMSAN and MSan instrumentation are:
///  - KMSAN always tracks the origins and implies msan-keep-going=true;
///  - KMSAN allocates shadow and origin memory for each page separately, so
///    there are no explicit accesses to shadow and origin in the
///    instrumentation.
///    Shadow and origin values for a particular X-byte memory location
///    (X=1,2,4,8) are accessed through pointers obtained via the
///      __msan_metadata_ptr_for_load_X(ptr)
///      __msan_metadata_ptr_for_store_X(ptr)
///    functions. The corresponding functions check that the X-byte accesses
///    are possible and return the pointers to shadow and origin memory.
///    Arbitrary sized accesses are handled with:
///      __msan_metadata_ptr_for_load_n(ptr, size)
///      __msan_metadata_ptr_for_store_n(ptr, size)
///  - TLS variables are stored in a single per-task struct. A call to a
///    function __msan_get_context_state() returning a pointer to that struct
///    is inserted into every instrumented function before the entry block;
///  - __msan_warning() takes a 32-bit origin parameter;
///  - local variables are poisoned with __msan_poison_alloca() upon function
///    entry and unpoisoned with __msan_unpoison_alloca() before leaving the
///    function;
///  - the pass doesn't declare any global variables or add global constructors
///    to the translation unit.
///
/// Also, KMSAN currently ignores uninitialized memory passed into inline asm
/// calls, making sure we're on the safe side wrt. possible false positives.
///
/// KernelMemorySanitizer only supports X86_64 at the moment.
///
//
// FIXME: This sanitizer does not yet handle scalable vectors
//
//===----------------------------------------------------------------------===//

#include "llvm/Transforms/Instrumentation/MemorySanitizer.h"
#include "llvm/ADT/APInt.h"
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/DepthFirstIterator.h"
#include "llvm/ADT/SmallSet.h"
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringExtras.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Triple.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/Argument.h"
#include "llvm/IR/Attributes.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/CallingConv.h"
#include "llvm/IR/Constant.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/GlobalValue.h"
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/InlineAsm.h"
#include "llvm/IR/InstVisitor.h"
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/IntrinsicsX86.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/MDBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"
#include "llvm/IR/Value.h"
#include "llvm/IR/ValueMap.h"
#include "llvm/InitializePasses.h"
#include "llvm/Pass.h"
#include "llvm/Support/AtomicOrdering.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/Compiler.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/MathExtras.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Transforms/Instrumentation.h"
#include "llvm/Transforms/Utils/BasicBlockUtils.h"
#include "llvm/Transforms/Utils/Local.h"
#include "llvm/Transforms/Utils/ModuleUtils.h"
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <string>
#include <tuple>

using namespace llvm;

#define DEBUG_TYPE "msan"

static const unsigned kOriginSize = 4;
static const Align kMinOriginAlignment = Align(4);
static const Align kShadowTLSAlignment = Align(8);
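
// An illustrative sketch of what this pass emits for a plain application
// store under the userspace mapping (names invented for exposition; this is
// not verbatim output):
//   ; store i32 %v, ptr %p  becomes roughly
//   %offset     = (ptrtoint %p & ~AndMask) ^ XorMask  ; see MemoryMapParams
//   %shadow_ptr = inttoptr (ShadowBase + %offset)
//   store i32 %v_shadow, ptr %shadow_ptr              ; then the app store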
// These constants must be kept in sync with the ones in msan.h.
static const unsigned kParamTLSSize = 800;
static const unsigned kRetvalTLSSize = 800;

// Access sizes are powers of two: 1, 2, 4, 8.
static const size_t kNumberOfAccessSizes = 4;

/// Track origins of uninitialized values.
///
/// Adds a section to MemorySanitizer report that points to the allocation
/// (stack or heap) the uninitialized bits came from originally.
static cl::opt<int> ClTrackOrigins(
    "msan-track-origins",
    cl::desc("Track origins (allocation sites) of poisoned memory"),
    cl::Hidden, cl::init(0));

static cl::opt<bool> ClKeepGoing("msan-keep-going",
                                 cl::desc("keep going after reporting a UMR"),
                                 cl::Hidden, cl::init(false));

static cl::opt<bool>
    ClPoisonStack("msan-poison-stack",
                  cl::desc("poison uninitialized stack variables"), cl::Hidden,
                  cl::init(true));

static cl::opt<bool> ClPoisonStackWithCall(
    "msan-poison-stack-with-call",
    cl::desc("poison uninitialized stack variables with a call"), cl::Hidden,
    cl::init(false));

static cl::opt<int> ClPoisonStackPattern(
    "msan-poison-stack-pattern",
    cl::desc("poison uninitialized stack variables with the given pattern"),
    cl::Hidden, cl::init(0xff));

static cl::opt<bool> ClPoisonUndef("msan-poison-undef",
                                   cl::desc("poison undef temps"), cl::Hidden,
                                   cl::init(true));

static cl::opt<bool>
    ClHandleICmp("msan-handle-icmp",
                 cl::desc("propagate shadow through ICmpEQ and ICmpNE"),
                 cl::Hidden, cl::init(true));

static cl::opt<bool>
    ClHandleICmpExact("msan-handle-icmp-exact",
                      cl::desc("exact handling of relational integer ICmp"),
                      cl::Hidden, cl::init(false));

static cl::opt<bool> ClHandleLifetimeIntrinsics(
    "msan-handle-lifetime-intrinsics",
    cl::desc(
        "when possible, poison scoped variables at the beginning of the scope "
        "(slower, but more precise)"),
    cl::Hidden, cl::init(true));

// When compiling the Linux kernel, we sometimes see false positives related to
// MSan being unable to understand that inline assembly calls may initialize
// local variables.
// This flag makes the compiler conservatively unpoison every memory location
// passed into an assembly call. Note that this may cause false positives.
// Because it's impossible to figure out the array sizes, we can only unpoison
// the first sizeof(type) bytes for each type* pointer.
// The instrumentation is only enabled in KMSAN builds, and only if
// -msan-handle-asm-conservative is on. This is done because we may want to
// quickly disable assembly instrumentation when it breaks.
static cl::opt<bool> ClHandleAsmConservative(
    "msan-handle-asm-conservative",
    cl::desc("conservative handling of inline assembly"), cl::Hidden,
    cl::init(true));
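
// For example (an illustrative sketch), given an asm statement that writes an
// output through memory:
//   int v;
//   asm("movl $42, %0" : "=m"(v));
// conservative handling emits the equivalent of
//   __msan_instrument_asm_store(&v, sizeof(v));
// before the assembly runs, so the runtime can unpoison the first
// sizeof(int) bytes of the output.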
// This flag controls whether we check the shadow of the address operand of
// load or store. Such bugs are very rare, since a load from a garbage address
// typically results in SEGV, but they still happen (e.g. only the lower bits
// of the address are garbage, or the access happens early at program startup
// where malloc-ed memory is more likely to be zeroed). As of 2012-08-28 this
// flag adds 20% slowdown.
static cl::opt<bool> ClCheckAccessAddress(
    "msan-check-access-address",
    cl::desc("report accesses through a pointer which has poisoned shadow"),
    cl::Hidden, cl::init(true));

static cl::opt<bool> ClEagerChecks(
    "msan-eager-checks",
    cl::desc("check arguments and return values at function call boundaries"),
    cl::Hidden, cl::init(false));

static cl::opt<bool> ClDumpStrictInstructions(
    "msan-dump-strict-instructions",
    cl::desc("print out instructions with default strict semantics"),
    cl::Hidden, cl::init(false));

static cl::opt<int> ClInstrumentationWithCallThreshold(
    "msan-instrumentation-with-call-threshold",
    cl::desc(
        "If the function being instrumented requires more than "
        "this number of checks and origin stores, use callbacks instead of "
        "inline checks (-1 means never use callbacks)."),
    cl::Hidden, cl::init(3500));

static cl::opt<bool>
    ClEnableKmsan("msan-kernel",
                  cl::desc("Enable KernelMemorySanitizer instrumentation"),
                  cl::Hidden, cl::init(false));

static cl::opt<bool>
    ClDisableChecks("msan-disable-checks",
                    cl::desc("Apply no_sanitize to the whole file"), cl::Hidden,
                    cl::init(false));

// This is an experiment to enable handling of cases where the shadow is a
// non-zero compile-time constant. For some inexplicable reason such cases were
// silently ignored in the instrumentation.
static cl::opt<bool>
    ClCheckConstantShadow("msan-check-constant-shadow",
                          cl::desc("Insert checks for constant shadow values"),
                          cl::Hidden, cl::init(false));

// This is off by default because of a bug in gold:
// https://sourceware.org/bugzilla/show_bug.cgi?id=19002
static cl::opt<bool>
    ClWithComdat("msan-with-comdat",
                 cl::desc("Place MSan constructors in comdat sections"),
                 cl::Hidden, cl::init(false));

// These options allow specifying custom memory map parameters.
// See MemoryMapParams for details.
static cl::opt<uint64_t> ClAndMask("msan-and-mask",
                                   cl::desc("Define custom MSan AndMask"),
                                   cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClXorMask("msan-xor-mask",
                                   cl::desc("Define custom MSan XorMask"),
                                   cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClShadowBase("msan-shadow-base",
                                      cl::desc("Define custom MSan ShadowBase"),
                                      cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClOriginBase("msan-origin-base",
                                      cl::desc("Define custom MSan OriginBase"),
                                      cl::Hidden, cl::init(0));

const char kMsanModuleCtorName[] = "msan.module_ctor";
const char kMsanInitName[] = "__msan_init";

namespace {

// Memory map parameters used in application-to-shadow address calculation.
// Offset = (Addr & ~AndMask) ^ XorMask
// Shadow = ShadowBase + Offset
// Origin = OriginBase + Offset
struct MemoryMapParams {
  uint64_t AndMask;
  uint64_t XorMask;
  uint64_t ShadowBase;
  uint64_t OriginBase;
};

struct PlatformMemoryMapParams {
  const MemoryMapParams *bits32;
  const MemoryMapParams *bits64;
};

} // end anonymous namespace

// i386 Linux
static const MemoryMapParams Linux_I386_MemoryMapParams = {
    0x000080000000, // AndMask
    0,              // XorMask (not used)
    0,              // ShadowBase (not used)
    0x000040000000, // OriginBase
};

// x86_64 Linux
static const MemoryMapParams Linux_X86_64_MemoryMapParams = {
#ifdef MSAN_LINUX_X86_64_OLD_MAPPING
    0x400000000000, // AndMask
    0,              // XorMask (not used)
    0,              // ShadowBase (not used)
    0x200000000000, // OriginBase
#else
    0,              // AndMask (not used)
    0x500000000000, // XorMask
    0,              // ShadowBase (not used)
    0x100000000000, // OriginBase
#endif
};

// mips64 Linux
static const MemoryMapParams Linux_MIPS64_MemoryMapParams = {
    0,              // AndMask (not used)
    0x008000000000, // XorMask
    0,              // ShadowBase (not used)
    0x002000000000, // OriginBase
};

// ppc64 Linux
static const MemoryMapParams Linux_PowerPC64_MemoryMapParams = {
    0xE00000000000, // AndMask
    0x100000000000, // XorMask
    0x080000000000, // ShadowBase
    0x1C0000000000, // OriginBase
};

// s390x Linux
static const MemoryMapParams Linux_S390X_MemoryMapParams = {
    0xC00000000000, // AndMask
    0,              // XorMask (not used)
    0x080000000000, // ShadowBase
    0x1C0000000000, // OriginBase
};

// aarch64 Linux
static const MemoryMapParams Linux_AArch64_MemoryMapParams = {
    0,             // AndMask (not used)
    0x06000000000, // XorMask
    0,             // ShadowBase (not used)
    0x01000000000, // OriginBase
};

// i386 FreeBSD
static const MemoryMapParams FreeBSD_I386_MemoryMapParams = {
    0x000180000000, // AndMask
    0x000040000000, // XorMask
    0x000020000000, // ShadowBase
    0x000700000000, // OriginBase
};

// x86_64 FreeBSD
static const MemoryMapParams FreeBSD_X86_64_MemoryMapParams = {
    0xc00000000000, // AndMask
    0x200000000000, // XorMask
    0x100000000000, // ShadowBase
    0x380000000000, // OriginBase
};

// x86_64 NetBSD
static const MemoryMapParams NetBSD_X86_64_MemoryMapParams = {
    0,              // AndMask
    0x500000000000, // XorMask
    0,              // ShadowBase
    0x100000000000, // OriginBase
};

static const PlatformMemoryMapParams Linux_X86_MemoryMapParams = {
    &Linux_I386_MemoryMapParams,
    &Linux_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_MIPS_MemoryMapParams = {
    nullptr,
    &Linux_MIPS64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_PowerPC_MemoryMapParams = {
    nullptr,
    &Linux_PowerPC64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_S390_MemoryMapParams = {
    nullptr,
    &Linux_S390X_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_ARM_MemoryMapParams = {
    nullptr,
    &Linux_AArch64_MemoryMapParams,
};

static const PlatformMemoryMapParams FreeBSD_X86_MemoryMapParams = {
    &FreeBSD_I386_MemoryMapParams,
    &FreeBSD_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams NetBSD_X86_MemoryMapParams = {
    nullptr,
    &NetBSD_X86_64_MemoryMapParams,
};
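
// Worked example of the mapping above (x86_64 Linux, default parameters):
// for an application address Addr = 0x700000001000,
//   Offset = (Addr & ~0) ^ 0x500000000000 = 0x200000001000
//   Shadow = 0x000000000000 + Offset      = 0x200000001000
//   Origin = 0x100000000000 + Offset      = 0x300000001000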

namespace {

/// Instrument functions of a module to detect uninitialized reads.
///
/// Instantiating MemorySanitizer inserts the msan runtime library API function
/// declarations into the module if they don't exist already. Instantiating
/// ensures the __msan_init function is in the list of global constructors for
/// the module.
class MemorySanitizer {
public:
  MemorySanitizer(Module &M, MemorySanitizerOptions Options)
      : CompileKernel(Options.Kernel), TrackOrigins(Options.TrackOrigins),
        Recover(Options.Recover) {
    initializeModule(M);
  }

  // MSan cannot be moved or copied because of MapParams.
  MemorySanitizer(MemorySanitizer &&) = delete;
  MemorySanitizer &operator=(MemorySanitizer &&) = delete;
  MemorySanitizer(const MemorySanitizer &) = delete;
  MemorySanitizer &operator=(const MemorySanitizer &) = delete;

  bool sanitizeFunction(Function &F, TargetLibraryInfo &TLI);

private:
  friend struct MemorySanitizerVisitor;
  friend struct VarArgAMD64Helper;
  friend struct VarArgMIPS64Helper;
  friend struct VarArgAArch64Helper;
  friend struct VarArgPowerPC64Helper;
  friend struct VarArgSystemZHelper;

  void initializeModule(Module &M);
  void initializeCallbacks(Module &M);
  void createKernelApi(Module &M);
  void createUserspaceApi(Module &M);

  /// True if we're compiling the Linux kernel.
  bool CompileKernel;
  /// Track origins (allocation points) of uninitialized values.
  int TrackOrigins;
  bool Recover;

  LLVMContext *C;
  Type *IntptrTy;
  Type *OriginTy;

  // XxxTLS variables represent the per-thread state in MSan and per-task state
  // in KMSAN.
  // For the userspace these point to thread-local globals. In the kernel land
  // they point to the members of a per-task struct obtained via a call to
  // __msan_get_context_state().

  /// Thread-local shadow storage for function parameters.
  Value *ParamTLS;

  /// Thread-local origin storage for function parameters.
  Value *ParamOriginTLS;

  /// Thread-local shadow storage for function return value.
  Value *RetvalTLS;

  /// Thread-local origin storage for function return value.
  Value *RetvalOriginTLS;

  /// Thread-local shadow storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgTLS;

  /// Thread-local origin storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgOriginTLS;

  /// Thread-local shadow storage for the va_arg overflow area
  /// (x86_64-specific).
  Value *VAArgOverflowSizeTLS;

  /// Are the instrumentation callbacks set up?
  bool CallbacksInitialized = false;

  /// The run-time callback to print a warning.
  FunctionCallee WarningFn;

  // These arrays are indexed by log2(AccessSize).
  FunctionCallee MaybeWarningFn[kNumberOfAccessSizes];
  FunctionCallee MaybeStoreOriginFn[kNumberOfAccessSizes];

  /// Run-time helper that generates a new origin value for a stack
  /// allocation.
  FunctionCallee MsanSetAllocaOrigin4Fn;

  /// Run-time helper that poisons stack on function entry.
  FunctionCallee MsanPoisonStackFn;

  /// Run-time helper that records a store (or any event) of an
  /// uninitialized value and returns an updated origin id encoding this info.
  FunctionCallee MsanChainOriginFn;

  /// Run-time helper that paints an origin over a region.
  FunctionCallee MsanSetOriginFn;

  /// MSan runtime replacements for memmove, memcpy and memset.
  FunctionCallee MemmoveFn, MemcpyFn, MemsetFn;

  /// KMSAN callback for task-local function argument shadow.
  StructType *MsanContextStateTy;
  FunctionCallee MsanGetContextStateFn;

  /// Functions for poisoning/unpoisoning local variables.
  FunctionCallee MsanPoisonAllocaFn, MsanUnpoisonAllocaFn;

  /// Each of the MsanMetadataPtrXxx functions returns a pair of shadow/origin
  /// pointers.
  FunctionCallee MsanMetadataPtrForLoadN, MsanMetadataPtrForStoreN;
  FunctionCallee MsanMetadataPtrForLoad_1_8[4];
  FunctionCallee MsanMetadataPtrForStore_1_8[4];
  FunctionCallee MsanInstrumentAsmStoreFn;

  /// Helper to choose between different MsanMetadataPtrXxx().
  FunctionCallee getKmsanShadowOriginAccessFn(bool isStore, int size);

  /// Memory map parameters used in application-to-shadow calculation.
  const MemoryMapParams *MapParams;

  /// Custom memory map parameters used when -msan-shadow-base or
  /// -msan-origin-base is provided.
  MemoryMapParams CustomMapParams;

  MDNode *ColdCallWeights;

  /// Branch weights for origin store.
  MDNode *OriginStoreWeights;
};

void insertModuleCtor(Module &M) {
  getOrCreateSanitizerCtorAndInitFunctions(
      M, kMsanModuleCtorName, kMsanInitName,
      /*InitArgTypes=*/{},
      /*InitArgs=*/{},
      // This callback is invoked when the functions are created the first
      // time. Hook them into the global ctors list in that case:
      [&](Function *Ctor, FunctionCallee) {
        if (!ClWithComdat) {
          appendToGlobalCtors(M, Ctor, 0);
          return;
        }
        Comdat *MsanCtorComdat = M.getOrInsertComdat(kMsanModuleCtorName);
        Ctor->setComdat(MsanCtorComdat);
        appendToGlobalCtors(M, Ctor, 0, Ctor);
      });
}

/// A legacy function pass for msan instrumentation.
///
/// Instruments functions to detect uninitialized reads.
struct MemorySanitizerLegacyPass : public FunctionPass {
  // Pass identification, replacement for typeid.
  static char ID;

  MemorySanitizerLegacyPass(MemorySanitizerOptions Options = {})
      : FunctionPass(ID), Options(Options) {
    initializeMemorySanitizerLegacyPassPass(*PassRegistry::getPassRegistry());
  }
  StringRef getPassName() const override {
    return "MemorySanitizerLegacyPass";
  }

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<TargetLibraryInfoWrapperPass>();
  }

  bool runOnFunction(Function &F) override {
    return MSan->sanitizeFunction(
        F, getAnalysis<TargetLibraryInfoWrapperPass>().getTLI(F));
  }
  bool doInitialization(Module &M) override;

  Optional<MemorySanitizer> MSan;
  MemorySanitizerOptions Options;
};

template <class T> T getOptOrDefault(const cl::opt<T> &Opt, T Default) {
  return (Opt.getNumOccurrences() > 0) ? Opt : Default;
}

} // end anonymous namespace
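
// Example of the option resolution implemented below: passing just
// "-msan-kernel" yields Kernel=true, TrackOrigins=2 and Recover=true,
// matching KMSAN's always-on origin tracking and implied
// msan-keep-going=true described in the file header.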
MemorySanitizerOptions::MemorySanitizerOptions(int TO, bool R, bool K)
    : Kernel(getOptOrDefault(ClEnableKmsan, K)),
      TrackOrigins(getOptOrDefault(ClTrackOrigins, Kernel ? 2 : TO)),
      Recover(getOptOrDefault(ClKeepGoing, Kernel || R)) {}

PreservedAnalyses MemorySanitizerPass::run(Function &F,
                                           FunctionAnalysisManager &FAM) {
  MemorySanitizer Msan(*F.getParent(), Options);
  if (Msan.sanitizeFunction(F, FAM.getResult<TargetLibraryAnalysis>(F)))
    return PreservedAnalyses::none();
  return PreservedAnalyses::all();
}

PreservedAnalyses
ModuleMemorySanitizerPass::run(Module &M, ModuleAnalysisManager &AM) {
  if (Options.Kernel)
    return PreservedAnalyses::all();
  insertModuleCtor(M);
  return PreservedAnalyses::none();
}

void MemorySanitizerPass::printPipeline(
    raw_ostream &OS, function_ref<StringRef(StringRef)> MapClassName2PassName) {
  static_cast<PassInfoMixin<MemorySanitizerPass> *>(this)->printPipeline(
      OS, MapClassName2PassName);
  OS << "<";
  if (Options.Recover)
    OS << "recover;";
  if (Options.Kernel)
    OS << "kernel;";
  OS << "track-origins=" << Options.TrackOrigins;
  OS << ">";
}

char MemorySanitizerLegacyPass::ID = 0;

INITIALIZE_PASS_BEGIN(MemorySanitizerLegacyPass, "msan",
                      "MemorySanitizer: detects uninitialized reads.", false,
                      false)
INITIALIZE_PASS_DEPENDENCY(TargetLibraryInfoWrapperPass)
INITIALIZE_PASS_END(MemorySanitizerLegacyPass, "msan",
                    "MemorySanitizer: detects uninitialized reads.", false,
                    false)

FunctionPass *
llvm::createMemorySanitizerLegacyPassPass(MemorySanitizerOptions Options) {
  return new MemorySanitizerLegacyPass(Options);
}

/// Create a non-const global initialized with the given string.
///
/// Creates a writable global for Str so that we can pass it to the
/// run-time lib. The runtime uses the first 4 bytes of the string to store
/// the frame ID, so the string needs to be mutable.
static GlobalVariable *createPrivateNonConstGlobalForString(Module &M,
                                                            StringRef Str) {
  Constant *StrConst = ConstantDataArray::getString(M.getContext(), Str);
  return new GlobalVariable(M, StrConst->getType(), /*isConstant=*/false,
                            GlobalValue::PrivateLinkage, StrConst, "");
}

/// Create KMSAN API callbacks.
void MemorySanitizer::createKernelApi(Module &M) {
  IRBuilder<> IRB(*C);

  // These will be initialized in insertKmsanPrologue().
  RetvalTLS = nullptr;
  RetvalOriginTLS = nullptr;
  ParamTLS = nullptr;
  ParamOriginTLS = nullptr;
  VAArgTLS = nullptr;
  VAArgOriginTLS = nullptr;
  VAArgOverflowSizeTLS = nullptr;

  WarningFn = M.getOrInsertFunction("__msan_warning", IRB.getVoidTy(),
                                    IRB.getInt32Ty());
  // Requests the per-task context state (kmsan_context_state*) from the
  // runtime library.
  MsanContextStateTy = StructType::get(
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8), /* va_arg_origin */
      IRB.getInt64Ty(), ArrayType::get(OriginTy, kParamTLSSize / 4), OriginTy,
      OriginTy);
  MsanGetContextStateFn = M.getOrInsertFunction(
      "__msan_get_context_state", PointerType::get(MsanContextStateTy, 0));

  Type *RetTy = StructType::get(PointerType::get(IRB.getInt8Ty(), 0),
                                PointerType::get(IRB.getInt32Ty(), 0));

  for (int ind = 0, size = 1; ind < 4; ind++, size <<= 1) {
    std::string name_load =
        "__msan_metadata_ptr_for_load_" + std::to_string(size);
    std::string name_store =
        "__msan_metadata_ptr_for_store_" + std::to_string(size);
    MsanMetadataPtrForLoad_1_8[ind] = M.getOrInsertFunction(
        name_load, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
    MsanMetadataPtrForStore_1_8[ind] = M.getOrInsertFunction(
        name_store, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
  }

  MsanMetadataPtrForLoadN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_load_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());
  MsanMetadataPtrForStoreN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_store_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());

  // Functions for poisoning and unpoisoning memory.
  MsanPoisonAllocaFn =
      M.getOrInsertFunction("__msan_poison_alloca", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy, IRB.getInt8PtrTy());
  MsanUnpoisonAllocaFn = M.getOrInsertFunction(
      "__msan_unpoison_alloca", IRB.getVoidTy(), IRB.getInt8PtrTy(), IntptrTy);
}
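
// For reference, MsanContextStateTy above corresponds to a per-task runtime
// struct along these lines (a sketch with invented field names; the
// authoritative layout lives in the KMSAN runtime):
//   struct kmsan_context_state {
//     uint64_t param_tls[kParamTLSSize / 8];
//     uint64_t retval_tls[kRetvalTLSSize / 8];
//     uint64_t va_arg_tls[kParamTLSSize / 8];
//     uint64_t va_arg_origin_tls[kParamTLSSize / 8];
//     uint64_t va_arg_overflow_size;
//     uint32_t param_origin_tls[kParamTLSSize / 4];
//     uint32_t retval_origin_tls;
//     uint32_t origin_tls;
//   };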
static Constant *getOrInsertGlobal(Module &M, StringRef Name, Type *Ty) {
  return M.getOrInsertGlobal(Name, Ty, [&] {
    return new GlobalVariable(M, Ty, false, GlobalVariable::ExternalLinkage,
                              nullptr, Name, nullptr,
                              GlobalVariable::InitialExecTLSModel);
  });
}

/// Insert declarations for userspace-specific functions and globals.
void MemorySanitizer::createUserspaceApi(Module &M) {
  IRBuilder<> IRB(*C);

  // Create the callback.
  // FIXME: this function should have "Cold" calling conv,
  // which is not yet implemented.
  StringRef WarningFnName = Recover ? "__msan_warning_with_origin"
                                    : "__msan_warning_with_origin_noreturn";
  WarningFn =
      M.getOrInsertFunction(WarningFnName, IRB.getVoidTy(), IRB.getInt32Ty());

  // Create the global TLS variables.
  RetvalTLS =
      getOrInsertGlobal(M, "__msan_retval_tls",
                        ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8));

  RetvalOriginTLS = getOrInsertGlobal(M, "__msan_retval_origin_tls", OriginTy);

  ParamTLS =
      getOrInsertGlobal(M, "__msan_param_tls",
                        ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8));

  ParamOriginTLS =
      getOrInsertGlobal(M, "__msan_param_origin_tls",
                        ArrayType::get(OriginTy, kParamTLSSize / 4));

  VAArgTLS =
      getOrInsertGlobal(M, "__msan_va_arg_tls",
                        ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8));

  VAArgOriginTLS =
      getOrInsertGlobal(M, "__msan_va_arg_origin_tls",
                        ArrayType::get(OriginTy, kParamTLSSize / 4));

  VAArgOverflowSizeTLS =
      getOrInsertGlobal(M, "__msan_va_arg_overflow_size_tls", IRB.getInt64Ty());

  for (size_t AccessSizeIndex = 0; AccessSizeIndex < kNumberOfAccessSizes;
       AccessSizeIndex++) {
    unsigned AccessSize = 1 << AccessSizeIndex;
    std::string FunctionName = "__msan_maybe_warning_" + itostr(AccessSize);
    SmallVector<std::pair<unsigned, Attribute>, 2> MaybeWarningFnAttrs;
    MaybeWarningFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex, Attribute::get(*C, Attribute::ZExt)));
    MaybeWarningFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex + 1, Attribute::get(*C, Attribute::ZExt)));
    MaybeWarningFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, AttributeList::get(*C, MaybeWarningFnAttrs),
        IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8), IRB.getInt32Ty());

    FunctionName = "__msan_maybe_store_origin_" + itostr(AccessSize);
    SmallVector<std::pair<unsigned, Attribute>, 2> MaybeStoreOriginFnAttrs;
    MaybeStoreOriginFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex, Attribute::get(*C, Attribute::ZExt)));
    MaybeStoreOriginFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex + 2, Attribute::get(*C, Attribute::ZExt)));
    MaybeStoreOriginFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, AttributeList::get(*C, MaybeStoreOriginFnAttrs),
        IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8), IRB.getInt8PtrTy(),
        IRB.getInt32Ty());
  }

  MsanSetAllocaOrigin4Fn = M.getOrInsertFunction(
      "__msan_set_alloca_origin4", IRB.getVoidTy(), IRB.getInt8PtrTy(),
      IntptrTy, IRB.getInt8PtrTy(), IntptrTy);
  MsanPoisonStackFn = M.getOrInsertFunction(
      "__msan_poison_stack", IRB.getVoidTy(), IRB.getInt8PtrTy(), IntptrTy);
}
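
// Illustrative use of the size-indexed callbacks created above: for a 4-byte
// value the instrumentation may emit the equivalent of
//   __msan_maybe_warning_4(<shadow of value>, <origin id>);
// where the runtime reports a use of uninitialized memory iff the shadow
// argument is non-zero (see materializeOneCheck below).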
/// Insert extern declaration of runtime-provided functions and globals.
void MemorySanitizer::initializeCallbacks(Module &M) {
  // Only do this once.
  if (CallbacksInitialized)
    return;

  IRBuilder<> IRB(*C);
  // Initialize callbacks that are common for kernel and userspace
  // instrumentation.
  MsanChainOriginFn = M.getOrInsertFunction(
      "__msan_chain_origin", IRB.getInt32Ty(), IRB.getInt32Ty());
  MsanSetOriginFn =
      M.getOrInsertFunction("__msan_set_origin", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy, IRB.getInt32Ty());
  MemmoveFn =
      M.getOrInsertFunction("__msan_memmove", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IntptrTy);
  MemcpyFn =
      M.getOrInsertFunction("__msan_memcpy", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IntptrTy);
  MemsetFn =
      M.getOrInsertFunction("__msan_memset", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt32Ty(), IntptrTy);

  MsanInstrumentAsmStoreFn =
      M.getOrInsertFunction("__msan_instrument_asm_store", IRB.getVoidTy(),
                            PointerType::get(IRB.getInt8Ty(), 0), IntptrTy);

  if (CompileKernel) {
    createKernelApi(M);
  } else {
    createUserspaceApi(M);
  }
  CallbacksInitialized = true;
}

FunctionCallee MemorySanitizer::getKmsanShadowOriginAccessFn(bool isStore,
                                                             int size) {
  FunctionCallee *Fns =
      isStore ? MsanMetadataPtrForStore_1_8 : MsanMetadataPtrForLoad_1_8;
  switch (size) {
  case 1:
    return Fns[0];
  case 2:
    return Fns[1];
  case 4:
    return Fns[2];
  case 8:
    return Fns[3];
  default:
    return nullptr;
  }
}

/// Module-level initialization.
///
/// Inserts a call to __msan_init into the module's constructor list.
void MemorySanitizer::initializeModule(Module &M) {
  auto &DL = M.getDataLayout();

  bool ShadowPassed = ClShadowBase.getNumOccurrences() > 0;
  bool OriginPassed = ClOriginBase.getNumOccurrences() > 0;
  // Check the overrides first.
  if (ShadowPassed || OriginPassed) {
    CustomMapParams.AndMask = ClAndMask;
    CustomMapParams.XorMask = ClXorMask;
    CustomMapParams.ShadowBase = ClShadowBase;
    CustomMapParams.OriginBase = ClOriginBase;
    MapParams = &CustomMapParams;
  } else {
    Triple TargetTriple(M.getTargetTriple());
    switch (TargetTriple.getOS()) {
    case Triple::FreeBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = FreeBSD_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = FreeBSD_X86_MemoryMapParams.bits32;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::NetBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = NetBSD_X86_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::Linux:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = Linux_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = Linux_X86_MemoryMapParams.bits32;
        break;
      case Triple::mips64:
      case Triple::mips64el:
        MapParams = Linux_MIPS_MemoryMapParams.bits64;
        break;
      case Triple::ppc64:
      case Triple::ppc64le:
        MapParams = Linux_PowerPC_MemoryMapParams.bits64;
        break;
      case Triple::systemz:
        MapParams = Linux_S390_MemoryMapParams.bits64;
        break;
      case Triple::aarch64:
      case Triple::aarch64_be:
        MapParams = Linux_ARM_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    default:
      report_fatal_error("unsupported operating system");
    }
  }

  C = &(M.getContext());
  IRBuilder<> IRB(*C);
  IntptrTy = IRB.getIntPtrTy(DL);
  OriginTy = IRB.getInt32Ty();

  ColdCallWeights = MDBuilder(*C).createBranchWeights(1, 1000);
  OriginStoreWeights = MDBuilder(*C).createBranchWeights(1, 1000);

  if (!CompileKernel) {
    if (TrackOrigins)
      M.getOrInsertGlobal("__msan_track_origins", IRB.getInt32Ty(), [&] {
        return new GlobalVariable(
            M, IRB.getInt32Ty(), true, GlobalValue::WeakODRLinkage,
            IRB.getInt32(TrackOrigins), "__msan_track_origins");
      });

    if (Recover)
      M.getOrInsertGlobal("__msan_keep_going", IRB.getInt32Ty(), [&] {
        return new GlobalVariable(M, IRB.getInt32Ty(), true,
                                  GlobalValue::WeakODRLinkage,
                                  IRB.getInt32(Recover), "__msan_keep_going");
      });
  }
}

bool MemorySanitizerLegacyPass::doInitialization(Module &M) {
  if (!Options.Kernel)
    insertModuleCtor(M);
  MSan.emplace(M, Options);
  return true;
}

namespace {

/// A helper class that handles instrumentation of VarArg
/// functions on a particular platform.
///
/// Implementations are expected to insert the instrumentation
/// necessary to propagate argument shadow through VarArg function
/// calls. Visit* methods are called during an InstVisitor pass over
/// the function, and should avoid creating new basic blocks. A new
/// instance of this class is created for each instrumented function.
struct VarArgHelper {
  virtual ~VarArgHelper() = default;

  /// Visit a CallBase.
  virtual void visitCallBase(CallBase &CB, IRBuilder<> &IRB) = 0;

  /// Visit a va_start call.
  virtual void visitVAStartInst(VAStartInst &I) = 0;

  /// Visit a va_copy call.
  virtual void visitVACopyInst(VACopyInst &I) = 0;

  /// Finalize function instrumentation.
  ///
  /// This method is called after visiting all interesting (see above)
  /// instructions in a function.
  virtual void finalizeInstrumentation() = 0;
};

struct MemorySanitizerVisitor;

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor);

static unsigned TypeSizeToSizeIndex(unsigned TypeSize) {
  if (TypeSize <= 8)
    return 0;
  return Log2_32_Ceil((TypeSize + 7) / 8);
}
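
// For example, TypeSizeToSizeIndex(32) == 2: a 32-bit value occupies 4 bytes,
// and log2(4) == 2, which selects the "_4" variants of the size-indexed
// callbacks (e.g. __msan_maybe_warning_4).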
namespace {

/// This class does all the work for a given function. Store and Load
/// instructions store and load corresponding shadow and origin
/// values. Most instructions propagate shadow from arguments to their
/// return values. Certain instructions (most importantly, BranchInst)
/// test their argument shadow and print reports (with a runtime call) if it's
/// non-zero.
struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
  Function &F;
  MemorySanitizer &MS;
  SmallVector<PHINode *, 16> ShadowPHINodes, OriginPHINodes;
  ValueMap<Value *, Value *> ShadowMap, OriginMap;
  std::unique_ptr<VarArgHelper> VAHelper;
  const TargetLibraryInfo *TLI;
  Instruction *FnPrologueEnd;

  // The following flags disable parts of MSan instrumentation based on
  // exclusion list contents and command-line options.
  bool InsertChecks;
  bool PropagateShadow;
  bool PoisonStack;
  bool PoisonUndef;

  struct ShadowOriginAndInsertPoint {
    Value *Shadow;
    Value *Origin;
    Instruction *OrigIns;

    ShadowOriginAndInsertPoint(Value *S, Value *O, Instruction *I)
        : Shadow(S), Origin(O), OrigIns(I) {}
  };
  SmallVector<ShadowOriginAndInsertPoint, 16> InstrumentationList;
  bool InstrumentLifetimeStart = ClHandleLifetimeIntrinsics;
  SmallSet<AllocaInst *, 16> AllocaSet;
  SmallVector<std::pair<IntrinsicInst *, AllocaInst *>, 16> LifetimeStartList;
  SmallVector<StoreInst *, 16> StoreList;

  MemorySanitizerVisitor(Function &F, MemorySanitizer &MS,
                         const TargetLibraryInfo &TLI)
      : F(F), MS(MS), VAHelper(CreateVarArgHelper(F, MS, *this)), TLI(&TLI) {
    bool SanitizeFunction =
        F.hasFnAttribute(Attribute::SanitizeMemory) && !ClDisableChecks;
    InsertChecks = SanitizeFunction;
    PropagateShadow = SanitizeFunction;
    PoisonStack = SanitizeFunction && ClPoisonStack;
    PoisonUndef = SanitizeFunction && ClPoisonUndef;

    // In the presence of unreachable blocks, we may see Phi nodes with
    // incoming nodes from such blocks. Since InstVisitor skips unreachable
    // blocks, such nodes will not have any shadow value associated with them.
    // It's easier to remove unreachable blocks than deal with missing shadow.
    removeUnreachableBlocks(F);

    MS.initializeCallbacks(*F.getParent());
    FnPrologueEnd = IRBuilder<>(F.getEntryBlock().getFirstNonPHI())
                        .CreateIntrinsic(Intrinsic::donothing, {}, {});

    if (MS.CompileKernel) {
      IRBuilder<> IRB(FnPrologueEnd);
      insertKmsanPrologue(IRB);
    }

    LLVM_DEBUG(if (!InsertChecks) dbgs()
               << "MemorySanitizer is not inserting checks into '"
               << F.getName() << "'\n");
  }

  bool isInPrologue(Instruction &I) {
    return I.getParent() == FnPrologueEnd->getParent() &&
           (&I == FnPrologueEnd || I.comesBefore(FnPrologueEnd));
  }

  Value *updateOrigin(Value *V, IRBuilder<> &IRB) {
    if (MS.TrackOrigins <= 1)
      return V;
    return IRB.CreateCall(MS.MsanChainOriginFn, V);
  }

  Value *originToIntptr(IRBuilder<> &IRB, Value *Origin) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    if (IntptrSize == kOriginSize)
      return Origin;
    assert(IntptrSize == kOriginSize * 2);
    Origin = IRB.CreateIntCast(Origin, MS.IntptrTy, /* isSigned */ false);
    return IRB.CreateOr(Origin, IRB.CreateShl(Origin, kOriginSize * 8));
  }

  /// Fill memory range with the given origin value.
  void paintOrigin(IRBuilder<> &IRB, Value *Origin, Value *OriginPtr,
                   unsigned Size, Align Alignment) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    const Align IntptrAlignment = DL.getABITypeAlign(MS.IntptrTy);
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    assert(IntptrAlignment >= kMinOriginAlignment);
    assert(IntptrSize >= kOriginSize);

    unsigned Ofs = 0;
    Align CurrentAlignment = Alignment;
    if (Alignment >= IntptrAlignment && IntptrSize > kOriginSize) {
      Value *IntptrOrigin = originToIntptr(IRB, Origin);
      Value *IntptrOriginPtr =
          IRB.CreatePointerCast(OriginPtr, PointerType::get(MS.IntptrTy, 0));
      for (unsigned i = 0; i < Size / IntptrSize; ++i) {
        Value *Ptr = i ? IRB.CreateConstGEP1_32(MS.IntptrTy, IntptrOriginPtr, i)
                       : IntptrOriginPtr;
        IRB.CreateAlignedStore(IntptrOrigin, Ptr, CurrentAlignment);
        Ofs += IntptrSize / kOriginSize;
        CurrentAlignment = IntptrAlignment;
      }
    }

    for (unsigned i = Ofs; i < (Size + kOriginSize - 1) / kOriginSize; ++i) {
      Value *GEP =
          i ? IRB.CreateConstGEP1_32(MS.OriginTy, OriginPtr, i) : OriginPtr;
      IRB.CreateAlignedStore(Origin, GEP, CurrentAlignment);
      CurrentAlignment = kMinOriginAlignment;
    }
  }

  void storeOrigin(IRBuilder<> &IRB, Value *Addr, Value *Shadow, Value *Origin,
                   Value *OriginPtr, Align Alignment, bool AsCall) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    const Align OriginAlignment = std::max(kMinOriginAlignment, Alignment);
    unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
    Value *ConvertedShadow = convertShadowToScalar(Shadow, IRB);
    if (auto *ConstantShadow = dyn_cast<Constant>(ConvertedShadow)) {
      if (ClCheckConstantShadow && !ConstantShadow->isZeroValue())
        paintOrigin(IRB, updateOrigin(Origin, IRB), OriginPtr, StoreSize,
                    OriginAlignment);
      return;
    }

    unsigned TypeSizeInBits = DL.getTypeSizeInBits(ConvertedShadow->getType());
    unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
    if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
      FunctionCallee Fn = MS.MaybeStoreOriginFn[SizeIndex];
      Value *ConvertedShadow2 =
          IRB.CreateZExt(ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
      CallBase *CB = IRB.CreateCall(
          Fn, {ConvertedShadow2,
               IRB.CreatePointerCast(Addr, IRB.getInt8PtrTy()), Origin});
      CB->addParamAttr(0, Attribute::ZExt);
      CB->addParamAttr(2, Attribute::ZExt);
    } else {
      Value *Cmp = convertToBool(ConvertedShadow, IRB, "_mscmp");
      Instruction *CheckTerm = SplitBlockAndInsertIfThen(
          Cmp, &*IRB.GetInsertPoint(), false, MS.OriginStoreWeights);
      IRBuilder<> IRBNew(CheckTerm);
      paintOrigin(IRBNew, updateOrigin(Origin, IRBNew), OriginPtr, StoreSize,
                  OriginAlignment);
    }
  }

  void materializeStores(bool InstrumentWithCalls) {
    for (StoreInst *SI : StoreList) {
      IRBuilder<> IRB(SI);
      Value *Val = SI->getValueOperand();
      Value *Addr = SI->getPointerOperand();
      Value *Shadow = SI->isAtomic() ? getCleanShadow(Val) : getShadow(Val);
      Value *ShadowPtr, *OriginPtr;
      Type *ShadowTy = Shadow->getType();
      const Align Alignment = SI->getAlign();
      const Align OriginAlignment = std::max(kMinOriginAlignment, Alignment);
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ true);

      StoreInst *NewSI = IRB.CreateAlignedStore(Shadow, ShadowPtr, Alignment);
      LLVM_DEBUG(dbgs() << "  STORE: " << *NewSI << "\n");
      (void)NewSI;

      if (SI->isAtomic())
        SI->setOrdering(addReleaseOrdering(SI->getOrdering()));

      if (MS.TrackOrigins && !SI->isAtomic())
        storeOrigin(IRB, Addr, Shadow, getOrigin(Val), OriginPtr,
                    OriginAlignment, InstrumentWithCalls);
    }
  }

  /// Helper function to insert a warning at IRB's current insert point.
  void insertWarningFn(IRBuilder<> &IRB, Value *Origin) {
    if (!Origin)
      Origin = (Value *)IRB.getInt32(0);
    assert(Origin->getType()->isIntegerTy());
    IRB.CreateCall(MS.WarningFn, Origin)->setCannotMerge();
    // FIXME: Insert UnreachableInst if !MS.Recover?
    // This may invalidate some of the following checks and needs to be done
    // at the very end.
  }

  void materializeOneCheck(Instruction *OrigIns, Value *Shadow, Value *Origin,
                           bool AsCall) {
    IRBuilder<> IRB(OrigIns);
    LLVM_DEBUG(dbgs() << "  SHAD0 : " << *Shadow << "\n");
    Value *ConvertedShadow = convertShadowToScalar(Shadow, IRB);
    LLVM_DEBUG(dbgs() << "  SHAD1 : " << *ConvertedShadow << "\n");

    if (auto *ConstantShadow = dyn_cast<Constant>(ConvertedShadow)) {
      if (ClCheckConstantShadow && !ConstantShadow->isZeroValue()) {
        insertWarningFn(IRB, Origin);
      }
      return;
    }

    const DataLayout &DL = OrigIns->getModule()->getDataLayout();

    unsigned TypeSizeInBits = DL.getTypeSizeInBits(ConvertedShadow->getType());
    unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
    if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
      FunctionCallee Fn = MS.MaybeWarningFn[SizeIndex];
      Value *ConvertedShadow2 =
          IRB.CreateZExt(ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
      CallBase *CB = IRB.CreateCall(
          Fn, {ConvertedShadow2,
               MS.TrackOrigins && Origin ? Origin : (Value *)IRB.getInt32(0)});
      CB->addParamAttr(0, Attribute::ZExt);
      CB->addParamAttr(1, Attribute::ZExt);
    } else {
      Value *Cmp = convertToBool(ConvertedShadow, IRB, "_mscmp");
      Instruction *CheckTerm = SplitBlockAndInsertIfThen(
          Cmp, OrigIns,
          /* Unreachable */ !MS.Recover, MS.ColdCallWeights);

      IRB.SetInsertPoint(CheckTerm);
      insertWarningFn(IRB, Origin);
      LLVM_DEBUG(dbgs() << "  CHECK: " << *Cmp << "\n");
    }
  }

  void materializeChecks(bool InstrumentWithCalls) {
    for (const auto &ShadowData : InstrumentationList) {
      Instruction *OrigIns = ShadowData.OrigIns;
      Value *Shadow = ShadowData.Shadow;
      Value *Origin = ShadowData.Origin;
      materializeOneCheck(OrigIns, Shadow, Origin, InstrumentWithCalls);
    }
    LLVM_DEBUG(dbgs() << "DONE:\n" << F);
  }

  // Sets up the per-task context state pointers (the KMSAN replacements for
  // the userspace TLS globals) at the start of each instrumented function.
  void insertKmsanPrologue(IRBuilder<> &IRB) {
    Value *ContextState = IRB.CreateCall(MS.MsanGetContextStateFn, {});
    Constant *Zero = IRB.getInt32(0);
    MS.ParamTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                {Zero, IRB.getInt32(0)}, "param_shadow");
    MS.RetvalTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                 {Zero, IRB.getInt32(1)}, "retval_shadow");
    MS.VAArgTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                {Zero, IRB.getInt32(2)}, "va_arg_shadow");
    MS.VAArgOriginTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                      {Zero, IRB.getInt32(3)}, "va_arg_origin");
    MS.VAArgOverflowSizeTLS =
        IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                      {Zero, IRB.getInt32(4)}, "va_arg_overflow_size");
    MS.ParamOriginTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                      {Zero, IRB.getInt32(5)}, "param_origin");
    MS.RetvalOriginTLS =
        IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                      {Zero, IRB.getInt32(6)}, "retval_origin");
  }
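
  // The prologue above conceptually expands to IR along these lines (a
  // sketch, not verbatim output):
  //   %state        = call %kmsan_context_state* @__msan_get_context_state()
  //   %param_shadow = getelementptr ..., %state, i32 0, i32 0
  //   %retval_shadow = getelementptr ..., %state, i32 0, i32 1
  //   ... and so on for the remaining five fields.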

  /// Add MemorySanitizer instrumentation to a function.
  bool runOnFunction() {
    // Iterate all BBs in depth-first order and create shadow instructions
    // for all instructions (where applicable).
    // For PHI nodes we create dummy shadow PHIs which will be finalized later.
    for (BasicBlock *BB : depth_first(FnPrologueEnd->getParent()))
      visit(*BB);

    // Finalize PHI nodes.
    for (PHINode *PN : ShadowPHINodes) {
      PHINode *PNS = cast<PHINode>(getShadow(PN));
      PHINode *PNO = MS.TrackOrigins ? cast<PHINode>(getOrigin(PN)) : nullptr;
      size_t NumValues = PN->getNumIncomingValues();
      for (size_t v = 0; v < NumValues; v++) {
        PNS->addIncoming(getShadow(PN, v), PN->getIncomingBlock(v));
        if (PNO)
          PNO->addIncoming(getOrigin(PN, v), PN->getIncomingBlock(v));
      }
    }

    VAHelper->finalizeInstrumentation();

    // Poison llvm.lifetime.start intrinsics, if we haven't fallen back to
    // instrumenting only allocas.
    if (InstrumentLifetimeStart) {
      for (auto Item : LifetimeStartList) {
        instrumentAlloca(*Item.second, Item.first);
        AllocaSet.erase(Item.second);
      }
    }
    // Poison the allocas for which we didn't instrument the corresponding
    // lifetime intrinsics.
    for (AllocaInst *AI : AllocaSet)
      instrumentAlloca(*AI);

    bool InstrumentWithCalls = ClInstrumentationWithCallThreshold >= 0 &&
                               InstrumentationList.size() + StoreList.size() >
                                   (unsigned)ClInstrumentationWithCallThreshold;

    // Insert shadow value checks.
    materializeChecks(InstrumentWithCalls);

    // Delayed instrumentation of StoreInst.
    // This may not add new address checks.
    materializeStores(InstrumentWithCalls);

    return true;
  }

  /// Compute the shadow type that corresponds to a given Value.
  Type *getShadowTy(Value *V) { return getShadowTy(V->getType()); }

  /// Compute the shadow type that corresponds to a given Type.
  Type *getShadowTy(Type *OrigTy) {
    if (!OrigTy->isSized()) {
      return nullptr;
    }
    // For integer type, shadow is the same as the original type.
    // This may return weird-sized types like i1.
    if (IntegerType *IT = dyn_cast<IntegerType>(OrigTy))
      return IT;
    const DataLayout &DL = F.getParent()->getDataLayout();
    if (VectorType *VT = dyn_cast<VectorType>(OrigTy)) {
      uint32_t EltSize = DL.getTypeSizeInBits(VT->getElementType());
      return FixedVectorType::get(IntegerType::get(*MS.C, EltSize),
                                  cast<FixedVectorType>(VT)->getNumElements());
    }
    if (ArrayType *AT = dyn_cast<ArrayType>(OrigTy)) {
      return ArrayType::get(getShadowTy(AT->getElementType()),
                            AT->getNumElements());
    }
    if (StructType *ST = dyn_cast<StructType>(OrigTy)) {
      SmallVector<Type *, 4> Elements;
      for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
        Elements.push_back(getShadowTy(ST->getElementType(i)));
      StructType *Res = StructType::get(*MS.C, Elements, ST->isPacked());
      LLVM_DEBUG(dbgs() << "getShadowTy: " << *ST << " ===> " << *Res << "\n");
      return Res;
    }
    uint32_t TypeSize = DL.getTypeSizeInBits(OrigTy);
    return IntegerType::get(*MS.C, TypeSize);
  }
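
  // Examples of the type mapping implemented above:
  //   i32           -> i32
  //   <4 x float>   -> <4 x i32>
  //   [2 x double]  -> [2 x i64]
  //   {i32, double} -> {i32, i64}
  //   float         -> i32 (via the final sized-type fallback)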

  /// Flatten a vector type.
  Type *getShadowTyNoVec(Type *ty) {
    if (VectorType *vt = dyn_cast<VectorType>(ty))
      return IntegerType::get(*MS.C,
                              vt->getPrimitiveSizeInBits().getFixedSize());
    return ty;
  }

  /// Extract combined shadow of struct elements as a bool.
  Value *collapseStructShadow(StructType *Struct, Value *Shadow,
                              IRBuilder<> &IRB) {
    Value *FalseVal = IRB.getIntN(/* width */ 1, /* value */ 0);
    Value *Aggregator = FalseVal;

    for (unsigned Idx = 0; Idx < Struct->getNumElements(); Idx++) {
      // Combine by ORing together each element's bool shadow.
      Value *ShadowItem = IRB.CreateExtractValue(Shadow, Idx);
      Value *ShadowInner = convertShadowToScalar(ShadowItem, IRB);
      Value *ShadowBool = convertToBool(ShadowInner, IRB);

      if (Aggregator != FalseVal)
        Aggregator = IRB.CreateOr(Aggregator, ShadowBool);
      else
        Aggregator = ShadowBool;
    }

    return Aggregator;
  }

  // Extract combined shadow of array elements.
  Value *collapseArrayShadow(ArrayType *Array, Value *Shadow,
                             IRBuilder<> &IRB) {
    if (!Array->getNumElements())
      return IRB.getIntN(/* width */ 1, /* value */ 0);

    Value *FirstItem = IRB.CreateExtractValue(Shadow, 0);
    Value *Aggregator = convertShadowToScalar(FirstItem, IRB);

    for (unsigned Idx = 1; Idx < Array->getNumElements(); Idx++) {
      Value *ShadowItem = IRB.CreateExtractValue(Shadow, Idx);
      Value *ShadowInner = convertShadowToScalar(ShadowItem, IRB);
      Aggregator = IRB.CreateOr(Aggregator, ShadowInner);
    }
    return Aggregator;
  }

  /// Convert a shadow value to its flattened variant. The resulting
  /// shadow may not necessarily have the same bit width as the input
  /// value, but it will always be comparable to zero.
  Value *convertShadowToScalar(Value *V, IRBuilder<> &IRB) {
    if (StructType *Struct = dyn_cast<StructType>(V->getType()))
      return collapseStructShadow(Struct, V, IRB);
    if (ArrayType *Array = dyn_cast<ArrayType>(V->getType()))
      return collapseArrayShadow(Array, V, IRB);
    Type *Ty = V->getType();
    Type *NoVecTy = getShadowTyNoVec(Ty);
    if (Ty == NoVecTy)
      return V;
    return IRB.CreateBitCast(V, NoVecTy);
  }

  // Convert a scalar value to an i1 by comparing with 0.
  Value *convertToBool(Value *V, IRBuilder<> &IRB, const Twine &name = "") {
    Type *VTy = V->getType();
    assert(VTy->isIntegerTy());
    if (VTy->getIntegerBitWidth() == 1)
      // Just converting a bool to a bool, so do nothing.
      return V;
    return IRB.CreateICmpNE(V, ConstantInt::get(VTy, 0), name);
  }

  /// Compute the integer shadow offset that corresponds to a given
  /// application address.
  ///
  /// Offset = (Addr & ~AndMask) ^ XorMask
  Value *getShadowPtrOffset(Value *Addr, IRBuilder<> &IRB) {
    Value *OffsetLong = IRB.CreatePointerCast(Addr, MS.IntptrTy);

    uint64_t AndMask = MS.MapParams->AndMask;
    if (AndMask)
      OffsetLong =
          IRB.CreateAnd(OffsetLong, ConstantInt::get(MS.IntptrTy, ~AndMask));

    uint64_t XorMask = MS.MapParams->XorMask;
    if (XorMask)
      OffsetLong =
          IRB.CreateXor(OffsetLong, ConstantInt::get(MS.IntptrTy, XorMask));
    return OffsetLong;
  }

  /// Compute the shadow and origin addresses corresponding to a given
  /// application address.
  ///
  /// Shadow = ShadowBase + Offset
  /// Origin = (OriginBase + Offset) & ~3ULL
  std::pair<Value *, Value *>
  getShadowOriginPtrUserspace(Value *Addr, IRBuilder<> &IRB, Type *ShadowTy,
                              MaybeAlign Alignment) {
    Value *ShadowOffset = getShadowPtrOffset(Addr, IRB);
    Value *ShadowLong = ShadowOffset;
    uint64_t ShadowBase = MS.MapParams->ShadowBase;
    if (ShadowBase != 0) {
      ShadowLong =
          IRB.CreateAdd(ShadowLong, ConstantInt::get(MS.IntptrTy, ShadowBase));
    }
    Value *ShadowPtr =
        IRB.CreateIntToPtr(ShadowLong, PointerType::get(ShadowTy, 0));
    Value *OriginPtr = nullptr;
    if (MS.TrackOrigins) {
      Value *OriginLong = ShadowOffset;
      uint64_t OriginBase = MS.MapParams->OriginBase;
      if (OriginBase != 0)
        OriginLong = IRB.CreateAdd(OriginLong,
                                   ConstantInt::get(MS.IntptrTy, OriginBase));
      if (!Alignment || *Alignment < kMinOriginAlignment) {
        uint64_t Mask = kMinOriginAlignment.value() - 1;
        OriginLong =
            IRB.CreateAnd(OriginLong, ConstantInt::get(MS.IntptrTy, ~Mask));
      }
      OriginPtr =
          IRB.CreateIntToPtr(OriginLong, PointerType::get(MS.OriginTy, 0));
    }
    return std::make_pair(ShadowPtr, OriginPtr);
  }

  std::pair<Value *, Value *> getShadowOriginPtrKernel(Value *Addr,
                                                       IRBuilder<> &IRB,
                                                       Type *ShadowTy,
                                                       bool isStore) {
    Value *ShadowOriginPtrs;
    const DataLayout &DL = F.getParent()->getDataLayout();
    int Size = DL.getTypeStoreSize(ShadowTy);

    FunctionCallee Getter = MS.getKmsanShadowOriginAccessFn(isStore, Size);
    Value *AddrCast =
        IRB.CreatePointerCast(Addr, PointerType::get(IRB.getInt8Ty(), 0));
    if (Getter) {
      ShadowOriginPtrs = IRB.CreateCall(Getter, AddrCast);
    } else {
      Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size);
      ShadowOriginPtrs = IRB.CreateCall(isStore ? MS.MsanMetadataPtrForStoreN
                                                : MS.MsanMetadataPtrForLoadN,
                                        {AddrCast, SizeVal});
    }
    Value *ShadowPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 0);
    ShadowPtr = IRB.CreatePointerCast(ShadowPtr, PointerType::get(ShadowTy, 0));
    Value *OriginPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 1);

    return std::make_pair(ShadowPtr, OriginPtr);
  }

  std::pair<Value *, Value *> getShadowOriginPtr(Value *Addr, IRBuilder<> &IRB,
                                                 Type *ShadowTy,
                                                 MaybeAlign Alignment,
                                                 bool isStore) {
    if (MS.CompileKernel)
      return getShadowOriginPtrKernel(Addr, IRB, ShadowTy, isStore);
    return getShadowOriginPtrUserspace(Addr, IRB, ShadowTy, Alignment);
  }

  /// Compute the shadow address for a given function argument.
  ///
  /// Shadow = ParamTLS+ArgOffset.
  Value *getShadowPtrForArgument(Value *A, IRBuilder<> &IRB,
                                 int ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.ParamTLS, MS.IntptrTy);
    if (ArgOffset)
      Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(getShadowTy(A), 0),
                              "_msarg");
  }

  /// Compute the origin address for a given function argument.
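  ///
  /// Origin = ParamOriginTLS + ArgOffset; returns null when origin tracking
  /// is disabled.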
  Value *getOriginPtrForArgument(Value *A, IRBuilder<> &IRB,
                                 int ArgOffset) {
    if (!MS.TrackOrigins)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.ParamOriginTLS, MS.IntptrTy);
    if (ArgOffset)
      Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                              "_msarg_o");
  }

  /// Compute the shadow address for a retval.
  Value *getShadowPtrForRetval(Value *A, IRBuilder<> &IRB) {
    return IRB.CreatePointerCast(MS.RetvalTLS,
                                 PointerType::get(getShadowTy(A), 0),
                                 "_msret");
  }

  /// Compute the origin address for a retval.
  Value *getOriginPtrForRetval(IRBuilder<> &IRB) {
    // We keep a single origin for the entire retval. Might be too optimistic.
    return MS.RetvalOriginTLS;
  }

  /// Set SV to be the shadow value for V.
  void setShadow(Value *V, Value *SV) {
    assert(!ShadowMap.count(V) && "Values may only have one shadow");
    ShadowMap[V] = PropagateShadow ? SV : getCleanShadow(V);
  }

  /// Set Origin to be the origin value for V.
  void setOrigin(Value *V, Value *Origin) {
    if (!MS.TrackOrigins) return;
    assert(!OriginMap.count(V) && "Values may only have one origin");
    LLVM_DEBUG(dbgs() << "ORIGIN: " << *V << " ==> " << *Origin << "\n");
    OriginMap[V] = Origin;
  }

  Constant *getCleanShadow(Type *OrigTy) {
    Type *ShadowTy = getShadowTy(OrigTy);
    if (!ShadowTy)
      return nullptr;
    return Constant::getNullValue(ShadowTy);
  }

  /// Create a clean shadow value for a given value.
  ///
  /// Clean shadow (all zeroes) means all bits of the value are defined
  /// (initialized).
  Constant *getCleanShadow(Value *V) {
    return getCleanShadow(V->getType());
  }

  /// Create a dirty shadow of a given shadow type.
  Constant *getPoisonedShadow(Type *ShadowTy) {
    assert(ShadowTy);
    if (isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy))
      return Constant::getAllOnesValue(ShadowTy);
    if (ArrayType *AT = dyn_cast<ArrayType>(ShadowTy)) {
      SmallVector<Constant *, 4> Vals(AT->getNumElements(),
                                      getPoisonedShadow(AT->getElementType()));
      return ConstantArray::get(AT, Vals);
    }
    if (StructType *ST = dyn_cast<StructType>(ShadowTy)) {
      SmallVector<Constant *, 4> Vals;
      for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
        Vals.push_back(getPoisonedShadow(ST->getElementType(i)));
      return ConstantStruct::get(ST, Vals);
    }
    llvm_unreachable("Unexpected shadow type");
  }

  /// Create a dirty shadow for a given value.
  Constant *getPoisonedShadow(Value *V) {
    Type *ShadowTy = getShadowTy(V);
    if (!ShadowTy)
      return nullptr;
    return getPoisonedShadow(ShadowTy);
  }

  /// Create a clean (zero) origin.
  Value *getCleanOrigin() {
    return Constant::getNullValue(MS.OriginTy);
  }

  /// Get the shadow value for a given Value.
  ///
  /// This function either returns the value set earlier with setShadow,
  /// or extracts it from ParamTLS (for function arguments).
  Value *getShadow(Value *V) {
    if (!PropagateShadow) return getCleanShadow(V);
    if (Instruction *I = dyn_cast<Instruction>(V)) {
      if (I->getMetadata("nosanitize"))
        return getCleanShadow(V);
      // For instructions the shadow is already stored in the map.
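      // (It was recorded by a visitXxx handler via setShadow(); a missing
      // entry indicates an instrumentation bug, hence the assert below.)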
      Value *Shadow = ShadowMap[V];
      if (!Shadow) {
        LLVM_DEBUG(dbgs() << "No shadow: " << *V << "\n" << *(I->getParent()));
        (void)I;
        assert(Shadow && "No shadow for a value");
      }
      return Shadow;
    }
    if (UndefValue *U = dyn_cast<UndefValue>(V)) {
      Value *AllOnes = PoisonUndef ? getPoisonedShadow(V) : getCleanShadow(V);
      LLVM_DEBUG(dbgs() << "Undef: " << *U << " ==> " << *AllOnes << "\n");
      (void)U;
      return AllOnes;
    }
    if (Argument *A = dyn_cast<Argument>(V)) {
      // For arguments we compute the shadow on demand and store it in the map.
      Value **ShadowPtr = &ShadowMap[V];
      if (*ShadowPtr)
        return *ShadowPtr;
      Function *F = A->getParent();
      IRBuilder<> EntryIRB(FnPrologueEnd);
      unsigned ArgOffset = 0;
      const DataLayout &DL = F->getParent()->getDataLayout();
      for (auto &FArg : F->args()) {
        if (!FArg.getType()->isSized()) {
          LLVM_DEBUG(dbgs() << "Arg is not sized\n");
          continue;
        }

        bool FArgByVal = FArg.hasByValAttr();
        bool FArgNoUndef = FArg.hasAttribute(Attribute::NoUndef);
        bool FArgEagerCheck = ClEagerChecks && !FArgByVal && FArgNoUndef;
        unsigned Size =
            FArg.hasByValAttr()
                ? DL.getTypeAllocSize(FArg.getParamByValType())
                : DL.getTypeAllocSize(FArg.getType());

        if (A == &FArg) {
          bool Overflow = ArgOffset + Size > kParamTLSSize;
          if (FArgEagerCheck) {
            *ShadowPtr = getCleanShadow(V);
            setOrigin(A, getCleanOrigin());
            break;
          } else if (FArgByVal) {
            Value *Base = getShadowPtrForArgument(&FArg, EntryIRB, ArgOffset);
            // ByVal pointer itself has clean shadow. We copy the actual
            // argument shadow to the underlying memory.
            // Figure out maximal valid memcpy alignment.
            const Align ArgAlign = DL.getValueOrABITypeAlignment(
                MaybeAlign(FArg.getParamAlignment()), FArg.getParamByValType());
            Value *CpShadowPtr =
                getShadowOriginPtr(V, EntryIRB, EntryIRB.getInt8Ty(), ArgAlign,
                                   /*isStore*/ true)
                    .first;
            // TODO(glider): need to copy origins.
            if (Overflow) {
              // ParamTLS overflow.
              EntryIRB.CreateMemSet(
                  CpShadowPtr, Constant::getNullValue(EntryIRB.getInt8Ty()),
                  Size, ArgAlign);
            } else {
              const Align CopyAlign = std::min(ArgAlign, kShadowTLSAlignment);
              Value *Cpy = EntryIRB.CreateMemCpy(CpShadowPtr, CopyAlign, Base,
                                                 CopyAlign, Size);
              LLVM_DEBUG(dbgs() << "  ByValCpy: " << *Cpy << "\n");
              (void)Cpy;
            }
            *ShadowPtr = getCleanShadow(V);
          } else {
            // Shadow over TLS
            Value *Base = getShadowPtrForArgument(&FArg, EntryIRB, ArgOffset);
            if (Overflow) {
              // ParamTLS overflow.
              *ShadowPtr = getCleanShadow(V);
            } else {
              *ShadowPtr = EntryIRB.CreateAlignedLoad(getShadowTy(&FArg), Base,
                                                      kShadowTLSAlignment);
            }
          }
          LLVM_DEBUG(dbgs()
                     << "  ARG: " << FArg << " ==> " << **ShadowPtr << "\n");
          if (MS.TrackOrigins && !Overflow) {
            Value *OriginPtr =
                getOriginPtrForArgument(&FArg, EntryIRB, ArgOffset);
            setOrigin(A, EntryIRB.CreateLoad(MS.OriginTy, OriginPtr));
          } else {
            setOrigin(A, getCleanOrigin());
          }

          break;
        }

        ArgOffset += alignTo(Size, kShadowTLSAlignment);
      }
      assert(*ShadowPtr && "Could not find shadow for an argument");
      return *ShadowPtr;
    }
    // For everything else the shadow is zero.
    return getCleanShadow(V);
  }

  /// Get the shadow for i-th argument of the instruction I.
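  /// Shorthand for getShadow(I->getOperand(i)).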
  Value *getShadow(Instruction *I, int i) {
    return getShadow(I->getOperand(i));
  }

  /// Get the origin for a value.
  Value *getOrigin(Value *V) {
    if (!MS.TrackOrigins) return nullptr;
    if (!PropagateShadow) return getCleanOrigin();
    if (isa<Constant>(V)) return getCleanOrigin();
    assert((isa<Instruction>(V) || isa<Argument>(V)) &&
           "Unexpected value type in getOrigin()");
    if (Instruction *I = dyn_cast<Instruction>(V)) {
      if (I->getMetadata("nosanitize"))
        return getCleanOrigin();
    }
    Value *Origin = OriginMap[V];
    assert(Origin && "Missing origin");
    return Origin;
  }

  /// Get the origin for i-th argument of the instruction I.
  Value *getOrigin(Instruction *I, int i) {
    return getOrigin(I->getOperand(i));
  }

  /// Remember the place where a shadow check should be inserted.
  ///
  /// This location will be later instrumented with a check that will print a
  /// UMR warning at runtime if the shadow value is not 0.
  void insertShadowCheck(Value *Shadow, Value *Origin, Instruction *OrigIns) {
    assert(Shadow);
    if (!InsertChecks) return;
#ifndef NDEBUG
    Type *ShadowTy = Shadow->getType();
    assert((isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy) ||
            isa<StructType>(ShadowTy) || isa<ArrayType>(ShadowTy)) &&
           "Can only insert checks for integer, vector, and aggregate shadow "
           "types");
#endif
    InstrumentationList.push_back(
        ShadowOriginAndInsertPoint(Shadow, Origin, OrigIns));
  }

  /// Remember the place where a shadow check should be inserted.
  ///
  /// This location will be later instrumented with a check that will print a
  /// UMR warning at runtime if the value is not fully defined.
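  /// Unless ClCheckConstantShadow is set, the check is dropped when the
  /// shadow is a compile-time constant; see the casts below.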
  void insertShadowCheck(Value *Val, Instruction *OrigIns) {
    assert(Val);
    Value *Shadow, *Origin;
    if (ClCheckConstantShadow) {
      Shadow = getShadow(Val);
      if (!Shadow) return;
      Origin = getOrigin(Val);
    } else {
      Shadow = dyn_cast_or_null<Instruction>(getShadow(Val));
      if (!Shadow) return;
      Origin = dyn_cast_or_null<Instruction>(getOrigin(Val));
    }
    insertShadowCheck(Shadow, Origin, OrigIns);
  }

  AtomicOrdering addReleaseOrdering(AtomicOrdering a) {
    switch (a) {
      case AtomicOrdering::NotAtomic:
        return AtomicOrdering::NotAtomic;
      case AtomicOrdering::Unordered:
      case AtomicOrdering::Monotonic:
      case AtomicOrdering::Release:
        return AtomicOrdering::Release;
      case AtomicOrdering::Acquire:
      case AtomicOrdering::AcquireRelease:
        return AtomicOrdering::AcquireRelease;
      case AtomicOrdering::SequentiallyConsistent:
        return AtomicOrdering::SequentiallyConsistent;
    }
    llvm_unreachable("Unknown ordering");
  }

  Value *makeAddReleaseOrderingTable(IRBuilder<> &IRB) {
    constexpr int NumOrderings = (int)AtomicOrderingCABI::seq_cst + 1;
    uint32_t OrderingTable[NumOrderings] = {};

    OrderingTable[(int)AtomicOrderingCABI::relaxed] =
        OrderingTable[(int)AtomicOrderingCABI::release] =
            (int)AtomicOrderingCABI::release;
    OrderingTable[(int)AtomicOrderingCABI::consume] =
        OrderingTable[(int)AtomicOrderingCABI::acquire] =
            OrderingTable[(int)AtomicOrderingCABI::acq_rel] =
                (int)AtomicOrderingCABI::acq_rel;
    OrderingTable[(int)AtomicOrderingCABI::seq_cst] =
        (int)AtomicOrderingCABI::seq_cst;

    return ConstantDataVector::get(IRB.getContext(),
                                   makeArrayRef(OrderingTable, NumOrderings));
  }

  AtomicOrdering addAcquireOrdering(AtomicOrdering a) {
    switch (a) {
      case AtomicOrdering::NotAtomic:
        return AtomicOrdering::NotAtomic;
      case AtomicOrdering::Unordered:
      case AtomicOrdering::Monotonic:
      case AtomicOrdering::Acquire:
        return AtomicOrdering::Acquire;
      case AtomicOrdering::Release:
      case AtomicOrdering::AcquireRelease:
        return AtomicOrdering::AcquireRelease;
      case AtomicOrdering::SequentiallyConsistent:
        return AtomicOrdering::SequentiallyConsistent;
    }
    llvm_unreachable("Unknown ordering");
  }

  Value *makeAddAcquireOrderingTable(IRBuilder<> &IRB) {
    constexpr int NumOrderings = (int)AtomicOrderingCABI::seq_cst + 1;
    uint32_t OrderingTable[NumOrderings] = {};

    OrderingTable[(int)AtomicOrderingCABI::relaxed] =
        OrderingTable[(int)AtomicOrderingCABI::acquire] =
            OrderingTable[(int)AtomicOrderingCABI::consume] =
                (int)AtomicOrderingCABI::acquire;
    OrderingTable[(int)AtomicOrderingCABI::release] =
        OrderingTable[(int)AtomicOrderingCABI::acq_rel] =
            (int)AtomicOrderingCABI::acq_rel;
    OrderingTable[(int)AtomicOrderingCABI::seq_cst] =
        (int)AtomicOrderingCABI::seq_cst;

    return ConstantDataVector::get(IRB.getContext(),
                                   makeArrayRef(OrderingTable, NumOrderings));
  }

  // ------------------- Visitors.
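  // Each visitXxx handler below computes the shadow of its instruction from
  // the operand shadows and records the result via setShadow()/setOrigin().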
  using InstVisitor<MemorySanitizerVisitor>::visit;
  void visit(Instruction &I) {
    if (I.getMetadata("nosanitize"))
      return;
    // Don't want to visit if we're in the prologue
    if (isInPrologue(I))
      return;
    InstVisitor<MemorySanitizerVisitor>::visit(I);
  }

  /// Instrument LoadInst
  ///
  /// Loads the corresponding shadow and (optionally) origin.
  /// Optionally, checks that the load address is fully defined.
  void visitLoadInst(LoadInst &I) {
    assert(I.getType()->isSized() && "Load type must have size");
    assert(!I.getMetadata("nosanitize"));
    IRBuilder<> IRB(I.getNextNode());
    Type *ShadowTy = getShadowTy(&I);
    Value *Addr = I.getPointerOperand();
    Value *ShadowPtr = nullptr, *OriginPtr = nullptr;
    const Align Alignment = assumeAligned(I.getAlignment());
    if (PropagateShadow) {
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
      setShadow(&I,
                IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress)
      insertShadowCheck(I.getPointerOperand(), &I);

    if (I.isAtomic())
      I.setOrdering(addAcquireOrdering(I.getOrdering()));

    if (MS.TrackOrigins) {
      if (PropagateShadow) {
        const Align OriginAlignment = std::max(kMinOriginAlignment, Alignment);
        setOrigin(
            &I, IRB.CreateAlignedLoad(MS.OriginTy, OriginPtr, OriginAlignment));
      } else {
        setOrigin(&I, getCleanOrigin());
      }
    }
  }

  /// Instrument StoreInst
  ///
  /// Stores the corresponding shadow and (optionally) origin.
  /// Optionally, checks that the store address is fully defined.
  void visitStoreInst(StoreInst &I) {
    StoreList.push_back(&I);
    if (ClCheckAccessAddress)
      insertShadowCheck(I.getPointerOperand(), &I);
  }

  void handleCASOrRMW(Instruction &I) {
    assert(isa<AtomicRMWInst>(I) || isa<AtomicCmpXchgInst>(I));

    IRBuilder<> IRB(&I);
    Value *Addr = I.getOperand(0);
    Value *Val = I.getOperand(1);
    Value *ShadowPtr = getShadowOriginPtr(Addr, IRB, Val->getType(), Align(1),
                                          /*isStore*/ true)
                           .first;

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    // Only test the conditional argument of cmpxchg instruction.
    // The other argument can potentially be uninitialized, but we can not
    // detect this situation reliably without possible false positives.
    if (isa<AtomicCmpXchgInst>(I))
      insertShadowCheck(Val, &I);

    IRB.CreateStore(getCleanShadow(Val), ShadowPtr);

    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitAtomicRMWInst(AtomicRMWInst &I) {
    handleCASOrRMW(I);
    I.setOrdering(addReleaseOrdering(I.getOrdering()));
  }

  void visitAtomicCmpXchgInst(AtomicCmpXchgInst &I) {
    handleCASOrRMW(I);
    I.setSuccessOrdering(addReleaseOrdering(I.getSuccessOrdering()));
  }

  // Vector manipulation.
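  // Element extraction and insertion propagate shadow element-wise; the index
  // operand is checked separately, since an uninitialized index is a bug in
  // itself.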
  void visitExtractElementInst(ExtractElementInst &I) {
    insertShadowCheck(I.getOperand(1), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateExtractElement(getShadow(&I, 0), I.getOperand(1),
                                           "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitInsertElementInst(InsertElementInst &I) {
    insertShadowCheck(I.getOperand(2), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateInsertElement(getShadow(&I, 0), getShadow(&I, 1),
                                          I.getOperand(2), "_msprop"));
    setOriginForNaryOp(I);
  }

  void visitShuffleVectorInst(ShuffleVectorInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateShuffleVector(getShadow(&I, 0), getShadow(&I, 1),
                                          I.getShuffleMask(), "_msprop"));
    setOriginForNaryOp(I);
  }

  // Casts.
  void visitSExtInst(SExtInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateSExt(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitZExtInst(ZExtInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateZExt(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitTruncInst(TruncInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateTrunc(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitBitCastInst(BitCastInst &I) {
    // Special case: if this is the bitcast (there is exactly 1 allowed)
    // between a musttail call and a ret, don't instrument. New instructions
    // are not allowed after a musttail call.
    if (auto *CI = dyn_cast<CallInst>(I.getOperand(0)))
      if (CI->isMustTailCall())
        return;
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateBitCast(getShadow(&I, 0), getShadowTy(&I)));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitPtrToIntInst(PtrToIntInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                    "_msprop_ptrtoint"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitIntToPtrInst(IntToPtrInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                    "_msprop_inttoptr"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitFPToSIInst(CastInst &I) { handleShadowOr(I); }
  void visitFPToUIInst(CastInst &I) { handleShadowOr(I); }
  void visitSIToFPInst(CastInst &I) { handleShadowOr(I); }
  void visitUIToFPInst(CastInst &I) { handleShadowOr(I); }
  void visitFPExtInst(CastInst &I) { handleShadowOr(I); }
  void visitFPTruncInst(CastInst &I) { handleShadowOr(I); }

  /// Propagate shadow for bitwise AND.
  ///
  /// This code is exact, i.e. if, for example, a bit in the left argument
  /// is defined and 0, then neither the value nor the definedness of the
  /// corresponding bit in B affects the resulting shadow.
  void visitAnd(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    // "And" of 0 and a poisoned value results in unpoisoned value.
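    // (In the tables below, 1/0 denote defined bit values and p denotes a
    // poisoned bit.)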
    // 1&1 => 1; 0&1 => 0; p&1 => p;
    // 1&0 => 0; 0&0 => 0; p&0 => 0;
    // 1&p => p; 0&p => 0; p&p => p;
    // S = (S1 & S2) | (V1 & S2) | (S1 & V2)
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *V1 = I.getOperand(0);
    Value *V2 = I.getOperand(1);
    if (V1->getType() != S1->getType()) {
      V1 = IRB.CreateIntCast(V1, S1->getType(), false);
      V2 = IRB.CreateIntCast(V2, S2->getType(), false);
    }
    Value *S1S2 = IRB.CreateAnd(S1, S2);
    Value *V1S2 = IRB.CreateAnd(V1, S2);
    Value *S1V2 = IRB.CreateAnd(S1, V2);
    setShadow(&I, IRB.CreateOr({S1S2, V1S2, S1V2}));
    setOriginForNaryOp(I);
  }

  void visitOr(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    // "Or" of 1 and a poisoned value results in unpoisoned value.
    // 1|1 => 1; 0|1 => 1; p|1 => 1;
    // 1|0 => 1; 0|0 => 0; p|0 => p;
    // 1|p => 1; 0|p => p; p|p => p;
    // S = (S1 & S2) | (~V1 & S2) | (S1 & ~V2)
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *V1 = IRB.CreateNot(I.getOperand(0));
    Value *V2 = IRB.CreateNot(I.getOperand(1));
    if (V1->getType() != S1->getType()) {
      V1 = IRB.CreateIntCast(V1, S1->getType(), false);
      V2 = IRB.CreateIntCast(V2, S2->getType(), false);
    }
    Value *S1S2 = IRB.CreateAnd(S1, S2);
    Value *V1S2 = IRB.CreateAnd(V1, S2);
    Value *S1V2 = IRB.CreateAnd(S1, V2);
    setShadow(&I, IRB.CreateOr({S1S2, V1S2, S1V2}));
    setOriginForNaryOp(I);
  }

  /// Default propagation of shadow and/or origin.
  ///
  /// This class implements the general case of shadow propagation, used in all
  /// cases where we don't know and/or don't care about what the operation
  /// actually does. It converts all input shadow values to a common type
  /// (extending or truncating as necessary), and bitwise OR's them.
  ///
  /// This is much cheaper than inserting checks (i.e. requiring inputs to be
  /// fully initialized), and less prone to false positives.
  ///
  /// This class also implements the general case of origin propagation. For a
  /// Nary operation, result origin is set to the origin of an argument that is
  /// not entirely initialized. If there is more than one such argument, the
  /// rightmost of them is picked. It does not matter which one is picked if
  /// all arguments are initialized.
  template <bool CombineShadow>
  class Combiner {
    Value *Shadow = nullptr;
    Value *Origin = nullptr;
    IRBuilder<> &IRB;
    MemorySanitizerVisitor *MSV;

  public:
    Combiner(MemorySanitizerVisitor *MSV, IRBuilder<> &IRB)
        : IRB(IRB), MSV(MSV) {}

    /// Add a pair of shadow and origin values to the mix.
    Combiner &Add(Value *OpShadow, Value *OpOrigin) {
      if (CombineShadow) {
        assert(OpShadow);
        if (!Shadow)
          Shadow = OpShadow;
        else {
          OpShadow = MSV->CreateShadowCast(IRB, OpShadow, Shadow->getType());
          Shadow = IRB.CreateOr(Shadow, OpShadow, "_msprop");
        }
      }

      if (MSV->MS.TrackOrigins) {
        assert(OpOrigin);
        if (!Origin) {
          Origin = OpOrigin;
        } else {
          Constant *ConstOrigin = dyn_cast<Constant>(OpOrigin);
          // No point in adding something that might result in 0 origin value.
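          // (The select below keeps the previously combined origin whenever
          // OpShadow turns out to be all-clean at runtime.)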
          if (!ConstOrigin || !ConstOrigin->isNullValue()) {
            Value *FlatShadow = MSV->convertShadowToScalar(OpShadow, IRB);
            Value *Cond =
                IRB.CreateICmpNE(FlatShadow, MSV->getCleanShadow(FlatShadow));
            Origin = IRB.CreateSelect(Cond, OpOrigin, Origin);
          }
        }
      }
      return *this;
    }

    /// Add an application value to the mix.
    Combiner &Add(Value *V) {
      Value *OpShadow = MSV->getShadow(V);
      Value *OpOrigin = MSV->MS.TrackOrigins ? MSV->getOrigin(V) : nullptr;
      return Add(OpShadow, OpOrigin);
    }

    /// Set the current combined values as the given instruction's shadow
    /// and origin.
    void Done(Instruction *I) {
      if (CombineShadow) {
        assert(Shadow);
        Shadow = MSV->CreateShadowCast(IRB, Shadow, MSV->getShadowTy(I));
        MSV->setShadow(I, Shadow);
      }
      if (MSV->MS.TrackOrigins) {
        assert(Origin);
        MSV->setOrigin(I, Origin);
      }
    }
  };

  using ShadowAndOriginCombiner = Combiner<true>;
  using OriginCombiner = Combiner<false>;

  /// Propagate origin for arbitrary operation.
  void setOriginForNaryOp(Instruction &I) {
    if (!MS.TrackOrigins) return;
    IRBuilder<> IRB(&I);
    OriginCombiner OC(this, IRB);
    for (Use &Op : I.operands())
      OC.Add(Op.get());
    OC.Done(&I);
  }

  size_t VectorOrPrimitiveTypeSizeInBits(Type *Ty) {
    assert(!(Ty->isVectorTy() && Ty->getScalarType()->isPointerTy()) &&
           "Vector of pointers is not a valid shadow type");
    return Ty->isVectorTy() ? cast<FixedVectorType>(Ty)->getNumElements() *
                                  Ty->getScalarSizeInBits()
                            : Ty->getPrimitiveSizeInBits();
  }

  /// Cast between two shadow types, extending or truncating as
  /// necessary.
  Value *CreateShadowCast(IRBuilder<> &IRB, Value *V, Type *dstTy,
                          bool Signed = false) {
    Type *srcTy = V->getType();
    size_t srcSizeInBits = VectorOrPrimitiveTypeSizeInBits(srcTy);
    size_t dstSizeInBits = VectorOrPrimitiveTypeSizeInBits(dstTy);
    if (srcSizeInBits > 1 && dstSizeInBits == 1)
      return IRB.CreateICmpNE(V, getCleanShadow(V));

    if (dstTy->isIntegerTy() && srcTy->isIntegerTy())
      return IRB.CreateIntCast(V, dstTy, Signed);
    if (dstTy->isVectorTy() && srcTy->isVectorTy() &&
        cast<FixedVectorType>(dstTy)->getNumElements() ==
            cast<FixedVectorType>(srcTy)->getNumElements())
      return IRB.CreateIntCast(V, dstTy, Signed);
    Value *V1 = IRB.CreateBitCast(V, Type::getIntNTy(*MS.C, srcSizeInBits));
    Value *V2 =
        IRB.CreateIntCast(V1, Type::getIntNTy(*MS.C, dstSizeInBits), Signed);
    return IRB.CreateBitCast(V2, dstTy);
    // TODO: handle struct types.
  }

  /// Cast an application value to the type of its own shadow.
  Value *CreateAppToShadowCast(IRBuilder<> &IRB, Value *V) {
    Type *ShadowTy = getShadowTy(V);
    if (V->getType() == ShadowTy)
      return V;
    if (V->getType()->isPtrOrPtrVectorTy())
      return IRB.CreatePtrToInt(V, ShadowTy);
    else
      return IRB.CreateBitCast(V, ShadowTy);
  }

  /// Propagate shadow for arbitrary operation.
  void handleShadowOr(Instruction &I) {
    IRBuilder<> IRB(&I);
    ShadowAndOriginCombiner SC(this, IRB);
    for (Use &Op : I.operands())
      SC.Add(Op.get());
    SC.Done(&I);
  }

  void visitFNeg(UnaryOperator &I) { handleShadowOr(I); }

  // Handle multiplication by constant.
  //
  // Handle a special case of multiplication by constant that may have one or
  // more zeros in the lower bits. This makes the corresponding number of lower
  // bits of the result zero as well. We model it by shifting the other operand
  // shadow left by the required number of bits. Effectively, we transform
  // (X * (A * 2**B)) to ((X << B) * A) and instrument (X << B) as (Sx << B).
  // We use multiplication by 2**N instead of shift to cover the case of
  // multiplication by 0, which may occur in some elements of a vector operand.
  void handleMulByConstant(BinaryOperator &I, Constant *ConstArg,
                           Value *OtherArg) {
    Constant *ShadowMul;
    Type *Ty = ConstArg->getType();
    if (auto *VTy = dyn_cast<VectorType>(Ty)) {
      unsigned NumElements = cast<FixedVectorType>(VTy)->getNumElements();
      Type *EltTy = VTy->getElementType();
      SmallVector<Constant *, 16> Elements;
      for (unsigned Idx = 0; Idx < NumElements; ++Idx) {
        if (ConstantInt *Elt =
                dyn_cast<ConstantInt>(ConstArg->getAggregateElement(Idx))) {
          const APInt &V = Elt->getValue();
          APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
          Elements.push_back(ConstantInt::get(EltTy, V2));
        } else {
          Elements.push_back(ConstantInt::get(EltTy, 1));
        }
      }
      ShadowMul = ConstantVector::get(Elements);
    } else {
      if (ConstantInt *Elt = dyn_cast<ConstantInt>(ConstArg)) {
        const APInt &V = Elt->getValue();
        APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
        ShadowMul = ConstantInt::get(Ty, V2);
      } else {
        ShadowMul = ConstantInt::get(Ty, 1);
      }
    }

    IRBuilder<> IRB(&I);
    setShadow(&I,
              IRB.CreateMul(getShadow(OtherArg), ShadowMul, "msprop_mul_cst"));
    setOrigin(&I, getOrigin(OtherArg));
  }

  void visitMul(BinaryOperator &I) {
    Constant *constOp0 = dyn_cast<Constant>(I.getOperand(0));
    Constant *constOp1 = dyn_cast<Constant>(I.getOperand(1));
    if (constOp0 && !constOp1)
      handleMulByConstant(I, constOp0, I.getOperand(1));
    else if (constOp1 && !constOp0)
      handleMulByConstant(I, constOp1, I.getOperand(0));
    else
      handleShadowOr(I);
  }

  void visitFAdd(BinaryOperator &I) { handleShadowOr(I); }
  void visitFSub(BinaryOperator &I) { handleShadowOr(I); }
  void visitFMul(BinaryOperator &I) { handleShadowOr(I); }
  void visitAdd(BinaryOperator &I) { handleShadowOr(I); }
  void visitSub(BinaryOperator &I) { handleShadowOr(I); }
  void visitXor(BinaryOperator &I) { handleShadowOr(I); }

  void handleIntegerDiv(Instruction &I) {
    IRBuilder<> IRB(&I);
    // Strict on the second argument.
    insertShadowCheck(I.getOperand(1), &I);
    setShadow(&I, getShadow(&I, 0));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitUDiv(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitSDiv(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitURem(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitSRem(BinaryOperator &I) { handleIntegerDiv(I); }

  // Floating point division is side-effect free. We can not require that the
  // divisor is fully initialized and must propagate shadow. See PR37523.
  void visitFDiv(BinaryOperator &I) { handleShadowOr(I); }
  void visitFRem(BinaryOperator &I) { handleShadowOr(I); }

  /// Instrument == and != comparisons.
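  /// For example, (x == 0) is known to be false as soon as any defined bit of
  /// x is 1, regardless of the undefined bits.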
  ///
  /// Sometimes the comparison result is known even if some of the bits of the
  /// arguments are not.
  void handleEqualityComparison(ICmpInst &I) {
    IRBuilder<> IRB(&I);
    Value *A = I.getOperand(0);
    Value *B = I.getOperand(1);
    Value *Sa = getShadow(A);
    Value *Sb = getShadow(B);

    // Get rid of pointers and vectors of pointers.
    // For ints (and vectors of ints), types of A and Sa match,
    // and this is a no-op.
    A = IRB.CreatePointerCast(A, Sa->getType());
    B = IRB.CreatePointerCast(B, Sb->getType());

    // A == B  <==>  (C = A^B) == 0
    // A != B  <==>  (C = A^B) != 0
    // Sc = Sa | Sb
    Value *C = IRB.CreateXor(A, B);
    Value *Sc = IRB.CreateOr(Sa, Sb);
    // Now dealing with i = (C == 0) comparison (or C != 0, which does not
    // matter now). The result is defined if one of the following is true:
    // * there is a defined 1 bit in C
    // * C is fully defined
    // Si = !(C & ~Sc) && Sc
    Value *Zero = Constant::getNullValue(Sc->getType());
    Value *MinusOne = Constant::getAllOnesValue(Sc->getType());
    Value *Si =
        IRB.CreateAnd(IRB.CreateICmpNE(Sc, Zero),
                      IRB.CreateICmpEQ(
                          IRB.CreateAnd(IRB.CreateXor(Sc, MinusOne), C), Zero));
    Si->setName("_msprop_icmp");
    setShadow(&I, Si);
    setOriginForNaryOp(I);
  }

  /// Build the lowest possible value of V, taking into account V's
  /// uninitialized bits.
  Value *getLowestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa,
                                bool isSigned) {
    if (isSigned) {
      // Split shadow into sign bit and other bits.
      Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1);
      Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits);
      // Maximize the undefined shadow bit, minimize other undefined bits.
      return
          IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaOtherBits)), SaSignBit);
    } else {
      // Minimize undefined bits.
      return IRB.CreateAnd(A, IRB.CreateNot(Sa));
    }
  }

  /// Build the highest possible value of V, taking into account V's
  /// uninitialized bits.
  Value *getHighestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa,
                                 bool isSigned) {
    if (isSigned) {
      // Split shadow into sign bit and other bits.
      Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1);
      Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits);
      // Minimize the undefined shadow bit, maximize other undefined bits.
      return
          IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaSignBit)), SaOtherBits);
    } else {
      // Maximize undefined bits.
      return IRB.CreateOr(A, Sa);
    }
  }

  /// Instrument relational comparisons.
  ///
  /// This function does exact shadow propagation for all relational
  /// comparisons of integers, pointers and vectors of those.
  /// FIXME: output seems suboptimal when one of the operands is a constant
  void handleRelationalComparisonExact(ICmpInst &I) {
    IRBuilder<> IRB(&I);
    Value *A = I.getOperand(0);
    Value *B = I.getOperand(1);
    Value *Sa = getShadow(A);
    Value *Sb = getShadow(B);

    // Get rid of pointers and vectors of pointers.
    // For ints (and vectors of ints), types of A and Sa match,
    // and this is a no-op.
    A = IRB.CreatePointerCast(A, Sa->getType());
    B = IRB.CreatePointerCast(B, Sb->getType());

    // Let [a0, a1] be the interval of possible values of A, taking into
    // account its undefined bits.
    // Let [b0, b1] be the interval of possible values of B.
    // Then (A cmp B) is defined iff (a0 cmp b1) == (a1 cmp b0).
    bool IsSigned = I.isSigned();
    Value *S1 = IRB.CreateICmp(I.getPredicate(),
                               getLowestPossibleValue(IRB, A, Sa, IsSigned),
                               getHighestPossibleValue(IRB, B, Sb, IsSigned));
    Value *S2 = IRB.CreateICmp(I.getPredicate(),
                               getHighestPossibleValue(IRB, A, Sa, IsSigned),
                               getLowestPossibleValue(IRB, B, Sb, IsSigned));
    Value *Si = IRB.CreateXor(S1, S2);
    setShadow(&I, Si);
    setOriginForNaryOp(I);
  }

  /// Instrument signed relational comparisons.
  ///
  /// Handle sign bit tests: x<0, x>=0, x<=-1, x>-1 by propagating the highest
  /// bit of the shadow. Everything else is delegated to handleShadowOr().
  void handleSignedRelationalComparison(ICmpInst &I) {
    Constant *constOp;
    Value *op = nullptr;
    CmpInst::Predicate pre;
    if ((constOp = dyn_cast<Constant>(I.getOperand(1)))) {
      op = I.getOperand(0);
      pre = I.getPredicate();
    } else if ((constOp = dyn_cast<Constant>(I.getOperand(0)))) {
      op = I.getOperand(1);
      pre = I.getSwappedPredicate();
    } else {
      handleShadowOr(I);
      return;
    }

    if ((constOp->isNullValue() &&
         (pre == CmpInst::ICMP_SLT || pre == CmpInst::ICMP_SGE)) ||
        (constOp->isAllOnesValue() &&
         (pre == CmpInst::ICMP_SGT || pre == CmpInst::ICMP_SLE))) {
      IRBuilder<> IRB(&I);
      Value *Shadow = IRB.CreateICmpSLT(getShadow(op), getCleanShadow(op),
                                        "_msprop_icmp_s");
      setShadow(&I, Shadow);
      setOrigin(&I, getOrigin(op));
    } else {
      handleShadowOr(I);
    }
  }

  void visitICmpInst(ICmpInst &I) {
    if (!ClHandleICmp) {
      handleShadowOr(I);
      return;
    }
    if (I.isEquality()) {
      handleEqualityComparison(I);
      return;
    }

    assert(I.isRelational());
    if (ClHandleICmpExact) {
      handleRelationalComparisonExact(I);
      return;
    }
    if (I.isSigned()) {
      handleSignedRelationalComparison(I);
      return;
    }

    assert(I.isUnsigned());
    if ((isa<Constant>(I.getOperand(0)) || isa<Constant>(I.getOperand(1)))) {
      handleRelationalComparisonExact(I);
      return;
    }

    handleShadowOr(I);
  }

  void visitFCmpInst(FCmpInst &I) {
    handleShadowOr(I);
  }

  void handleShift(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    // If any of the S2 bits are poisoned, the whole thing is poisoned.
    // Otherwise perform the same shift on S1.
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *S2Conv = IRB.CreateSExt(IRB.CreateICmpNE(S2, getCleanShadow(S2)),
                                   S2->getType());
    Value *V2 = I.getOperand(1);
    Value *Shift = IRB.CreateBinOp(I.getOpcode(), S1, V2);
    setShadow(&I, IRB.CreateOr(Shift, S2Conv));
    setOriginForNaryOp(I);
  }

  void visitShl(BinaryOperator &I) { handleShift(I); }
  void visitAShr(BinaryOperator &I) { handleShift(I); }
  void visitLShr(BinaryOperator &I) { handleShift(I); }

  void handleFunnelShift(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    // If any of the S2 bits are poisoned, the whole thing is poisoned.
    // Otherwise perform the same shift on S0 and S1.
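    // (Funnel shifts fshl/fshr take two data operands plus a shift amount,
    // hence the three shadows here.)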
    Value *S0 = getShadow(&I, 0);
    Value *S1 = getShadow(&I, 1);
    Value *S2 = getShadow(&I, 2);
    Value *S2Conv =
        IRB.CreateSExt(IRB.CreateICmpNE(S2, getCleanShadow(S2)), S2->getType());
    Value *V2 = I.getOperand(2);
    Function *Intrin = Intrinsic::getDeclaration(
        I.getModule(), I.getIntrinsicID(), S2Conv->getType());
    Value *Shift = IRB.CreateCall(Intrin, {S0, S1, V2});
    setShadow(&I, IRB.CreateOr(Shift, S2Conv));
    setOriginForNaryOp(I);
  }

  /// Instrument llvm.memmove
  ///
  /// At this point we don't know if llvm.memmove will be inlined or not.
  /// If we don't instrument it and it gets inlined,
  /// our interceptor will not kick in and we will lose the memmove.
  /// If we instrument the call here, but it does not get inlined,
  /// we will memmove the shadow twice, which is bad in case
  /// of overlapping regions. So, we simply lower the intrinsic to a call.
  ///
  /// Similar situation exists for memcpy and memset.
  void visitMemMoveInst(MemMoveInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemmoveFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }

  // Similar to memmove: avoid copying shadow twice.
  // This is somewhat unfortunate as it may slow down small constant memcpys.
  // FIXME: consider doing manual inline for small constant sizes and proper
  // alignment.
  void visitMemCpyInst(MemCpyInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemcpyFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }

  // Same as memcpy.
  void visitMemSetInst(MemSetInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemsetFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(1), IRB.getInt32Ty(), false),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }

  void visitVAStartInst(VAStartInst &I) {
    VAHelper->visitVAStartInst(I);
  }

  void visitVACopyInst(VACopyInst &I) {
    VAHelper->visitVACopyInst(I);
  }

  /// Handle vector store-like intrinsics.
  ///
  /// Instrument intrinsics that look like a simple SIMD store: writes memory,
  /// has 1 pointer argument and 1 vector argument, returns void.
  bool handleVectorStoreIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    Value *Shadow = getShadow(&I, 1);
    Value *ShadowPtr, *OriginPtr;

    // We don't know the pointer alignment (could be unaligned SSE store!).
    // Have to assume the worst case.
    std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
        Addr, IRB, Shadow->getType(), Align(1), /*isStore*/ true);
    IRB.CreateAlignedStore(Shadow, ShadowPtr, Align(1));

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    // FIXME: factor out common code from materializeStores
    if (MS.TrackOrigins) IRB.CreateStore(getOrigin(&I, 1), OriginPtr);
    return true;
  }

  /// Handle vector load-like intrinsics.
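  /// (Recognized heuristically in handleUnknownIntrinsic below: one pointer
  /// argument, a vector return type, and the intrinsic only reads memory.)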
  ///
  /// Instrument intrinsics that look like a simple SIMD load: reads memory,
  /// has 1 pointer argument, returns a vector.
  bool handleVectorLoadIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);

    Type *ShadowTy = getShadowTy(&I);
    Value *ShadowPtr = nullptr, *OriginPtr = nullptr;
    if (PropagateShadow) {
      // We don't know the pointer alignment (could be unaligned SSE load!).
      // Have to assume the worst case.
      const Align Alignment = Align(1);
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
      setShadow(&I,
                IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    if (MS.TrackOrigins) {
      if (PropagateShadow)
        setOrigin(&I, IRB.CreateLoad(MS.OriginTy, OriginPtr));
      else
        setOrigin(&I, getCleanOrigin());
    }
    return true;
  }

  /// Handle (SIMD arithmetic)-like intrinsics.
  ///
  /// Instrument intrinsics with any number of arguments of the same type,
  /// equal to the return type. The type should be simple (no aggregates or
  /// pointers; vectors are fine).
  /// Caller guarantees that this intrinsic does not access memory.
  bool maybeHandleSimpleNomemIntrinsic(IntrinsicInst &I) {
    Type *RetTy = I.getType();
    if (!(RetTy->isIntOrIntVectorTy() ||
          RetTy->isFPOrFPVectorTy() ||
          RetTy->isX86_MMXTy()))
      return false;

    unsigned NumArgOperands = I.arg_size();
    for (unsigned i = 0; i < NumArgOperands; ++i) {
      Type *Ty = I.getArgOperand(i)->getType();
      if (Ty != RetTy)
        return false;
    }

    IRBuilder<> IRB(&I);
    ShadowAndOriginCombiner SC(this, IRB);
    for (unsigned i = 0; i < NumArgOperands; ++i)
      SC.Add(I.getArgOperand(i));
    SC.Done(&I);

    return true;
  }

  /// Heuristically instrument unknown intrinsics.
  ///
  /// The main purpose of this code is to do something reasonable with all
  /// random intrinsics we might encounter, most importantly - SIMD intrinsics.
  /// We recognize several classes of intrinsics by their argument types and
  /// ModRefBehaviour and apply special instrumentation when we are reasonably
  /// sure that we know what the intrinsic does.
  ///
  /// We special-case intrinsics where this approach fails. See llvm.bswap
  /// handling as an example of that.
  bool handleUnknownIntrinsic(IntrinsicInst &I) {
    unsigned NumArgOperands = I.arg_size();
    if (NumArgOperands == 0)
      return false;

    if (NumArgOperands == 2 &&
        I.getArgOperand(0)->getType()->isPointerTy() &&
        I.getArgOperand(1)->getType()->isVectorTy() &&
        I.getType()->isVoidTy() &&
        !I.onlyReadsMemory()) {
      // This looks like a vector store.
      return handleVectorStoreIntrinsic(I);
    }

    if (NumArgOperands == 1 &&
        I.getArgOperand(0)->getType()->isPointerTy() &&
        I.getType()->isVectorTy() &&
        I.onlyReadsMemory()) {
      // This looks like a vector load.
      return handleVectorLoadIntrinsic(I);
    }

    if (I.doesNotAccessMemory())
      if (maybeHandleSimpleNomemIntrinsic(I))
        return true;

    // FIXME: detect and handle SSE maskstore/maskload
    return false;
  }

  void handleInvariantGroup(IntrinsicInst &I) {
    setShadow(&I, getShadow(&I, 0));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void handleLifetimeStart(IntrinsicInst &I) {
    if (!PoisonStack)
      return;
    AllocaInst *AI = llvm::findAllocaForValue(I.getArgOperand(1));
    if (!AI)
      InstrumentLifetimeStart = false;
    LifetimeStartList.push_back(std::make_pair(&I, AI));
  }

  void handleBswap(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Op = I.getArgOperand(0);
    Type *OpType = Op->getType();
    Function *BswapFunc = Intrinsic::getDeclaration(
        F.getParent(), Intrinsic::bswap, makeArrayRef(&OpType, 1));
    setShadow(&I, IRB.CreateCall(BswapFunc, getShadow(Op)));
    setOrigin(&I, getOrigin(Op));
  }

  // Instrument vector convert intrinsic.
  //
  // This function instruments intrinsics like cvtsi2ss:
  //   %Out = int_xxx_cvtyyy(%ConvertOp)
  // or
  //   %Out = int_xxx_cvtyyy(%CopyOp, %ConvertOp)
  // Intrinsic converts \p NumUsedElements elements of \p ConvertOp to the same
  // number of \p Out elements, and (if it has 2 arguments) copies the rest of
  // the elements from \p CopyOp.
  // In most cases conversion involves a floating-point value which may trigger
  // a hardware exception when not fully initialized. For this reason we
  // require \p ConvertOp[0:NumUsedElements] to be fully initialized and trap
  // otherwise.
  // We copy the shadow of \p CopyOp[NumUsedElements:] to \p
  // Out[NumUsedElements:]. This means that intrinsics without \p CopyOp always
  // return a fully initialized value.
  void handleVectorConvertIntrinsic(IntrinsicInst &I, int NumUsedElements,
                                    bool HasRoundingMode = false) {
    IRBuilder<> IRB(&I);
    Value *CopyOp, *ConvertOp;

    assert((!HasRoundingMode ||
            isa<ConstantInt>(I.getArgOperand(I.arg_size() - 1))) &&
           "Invalid rounding mode");

    switch (I.arg_size() - HasRoundingMode) {
    case 2:
      CopyOp = I.getArgOperand(0);
      ConvertOp = I.getArgOperand(1);
      break;
    case 1:
      ConvertOp = I.getArgOperand(0);
      CopyOp = nullptr;
      break;
    default:
      llvm_unreachable("Cvt intrinsic with unsupported number of arguments.");
    }

    // The first *NumUsedElements* elements of ConvertOp are converted to the
    // same number of output elements. The rest of the output is copied from
    // CopyOp, or (if not available) filled with zeroes.
    // Combine shadow for elements of ConvertOp that are used in this
    // operation, and insert a check.
    // FIXME: consider propagating shadow of ConvertOp, at least in the case of
    // int->any conversion.
    Value *ConvertShadow = getShadow(ConvertOp);
    Value *AggShadow = nullptr;
    if (ConvertOp->getType()->isVectorTy()) {
      AggShadow = IRB.CreateExtractElement(
          ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), 0));
      for (int i = 1; i < NumUsedElements; ++i) {
        Value *MoreShadow = IRB.CreateExtractElement(
            ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), i));
        AggShadow = IRB.CreateOr(AggShadow, MoreShadow);
      }
    } else {
      AggShadow = ConvertShadow;
    }
    assert(AggShadow->getType()->isIntegerTy());
    insertShadowCheck(AggShadow, getOrigin(ConvertOp), &I);

    // Build result shadow by zero-filling parts of CopyOp shadow that come
    // from ConvertOp.
    if (CopyOp) {
      assert(CopyOp->getType() == I.getType());
      assert(CopyOp->getType()->isVectorTy());
      Value *ResultShadow = getShadow(CopyOp);
      Type *EltTy = cast<VectorType>(ResultShadow->getType())->getElementType();
      for (int i = 0; i < NumUsedElements; ++i) {
        ResultShadow = IRB.CreateInsertElement(
            ResultShadow, ConstantInt::getNullValue(EltTy),
            ConstantInt::get(IRB.getInt32Ty(), i));
      }
      setShadow(&I, ResultShadow);
      setOrigin(&I, getOrigin(CopyOp));
    } else {
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
    }
  }

  // Given a scalar or vector, extract lower 64 bits (or less), and return all
  // zeroes if it is zero, and all ones otherwise.
  Value *Lower64ShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) {
    if (S->getType()->isVectorTy())
      S = CreateShadowCast(IRB, S, IRB.getInt64Ty(), /* Signed */ true);
    assert(S->getType()->getPrimitiveSizeInBits() <= 64);
    Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S));
    return CreateShadowCast(IRB, S2, T, /* Signed */ true);
  }

  // Given a vector, extract its first element, and return all
  // zeroes if it is zero, and all ones otherwise.
  Value *LowerElementShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) {
    Value *S1 = IRB.CreateExtractElement(S, (uint64_t)0);
    Value *S2 = IRB.CreateICmpNE(S1, getCleanShadow(S1));
    return CreateShadowCast(IRB, S2, T, /* Signed */ true);
  }

  Value *VariableShadowExtend(IRBuilder<> &IRB, Value *S) {
    Type *T = S->getType();
    assert(T->isVectorTy());
    Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S));
    return IRB.CreateSExt(S2, T);
  }

  // Instrument vector shift intrinsic.
  //
  // This function instruments intrinsics like int_x86_avx2_psll_w.
  // Intrinsic shifts %In by %ShiftSize bits.
  // %ShiftSize may be a vector. In that case the lower 64 bits determine shift
  // size, and the rest is ignored. Behavior is defined even if shift size is
  // greater than register (or field) width.
  void handleVectorShiftIntrinsic(IntrinsicInst &I, bool Variable) {
    assert(I.arg_size() == 2);
    IRBuilder<> IRB(&I);
    // If any of the S2 bits are poisoned, the whole thing is poisoned.
    // Otherwise perform the same shift on S1.
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *S2Conv = Variable ?
        VariableShadowExtend(IRB, S2)
        : Lower64ShadowExtend(IRB, S2, getShadowTy(&I));
    Value *V1 = I.getOperand(0);
    Value *V2 = I.getOperand(1);
    Value *Shift = IRB.CreateCall(I.getFunctionType(), I.getCalledOperand(),
                                  {IRB.CreateBitCast(S1, V1->getType()), V2});
    Shift = IRB.CreateBitCast(Shift, getShadowTy(&I));
    setShadow(&I, IRB.CreateOr(Shift, S2Conv));
    setOriginForNaryOp(I);
  }

  // Get an X86_MMX-sized vector type.
  Type *getMMXVectorTy(unsigned EltSizeInBits) {
    const unsigned X86_MMXSizeInBits = 64;
    assert(EltSizeInBits != 0 && (X86_MMXSizeInBits % EltSizeInBits) == 0 &&
           "Illegal MMX vector element size");
    return FixedVectorType::get(IntegerType::get(*MS.C, EltSizeInBits),
                                X86_MMXSizeInBits / EltSizeInBits);
  }

  // Returns a signed counterpart for an (un)signed-saturate-and-pack
  // intrinsic.
  Intrinsic::ID getSignedPackIntrinsic(Intrinsic::ID id) {
    switch (id) {
      case Intrinsic::x86_sse2_packsswb_128:
      case Intrinsic::x86_sse2_packuswb_128:
        return Intrinsic::x86_sse2_packsswb_128;

      case Intrinsic::x86_sse2_packssdw_128:
      case Intrinsic::x86_sse41_packusdw:
        return Intrinsic::x86_sse2_packssdw_128;

      case Intrinsic::x86_avx2_packsswb:
      case Intrinsic::x86_avx2_packuswb:
        return Intrinsic::x86_avx2_packsswb;

      case Intrinsic::x86_avx2_packssdw:
      case Intrinsic::x86_avx2_packusdw:
        return Intrinsic::x86_avx2_packssdw;

      case Intrinsic::x86_mmx_packsswb:
      case Intrinsic::x86_mmx_packuswb:
        return Intrinsic::x86_mmx_packsswb;

      case Intrinsic::x86_mmx_packssdw:
        return Intrinsic::x86_mmx_packssdw;
      default:
        llvm_unreachable("unexpected intrinsic id");
    }
  }

  // Instrument vector pack intrinsic.
  //
  // This function instruments intrinsics like x86_mmx_packsswb, which pack
  // elements of 2 input vectors into half as many bits with saturation.
  // Shadow is propagated with the signed variant of the same intrinsic applied
  // to sext(Sa != zeroinitializer), sext(Sb != zeroinitializer).
  // EltSizeInBits is used only for x86mmx arguments.
  void handleVectorPackIntrinsic(IntrinsicInst &I, unsigned EltSizeInBits = 0) {
    assert(I.arg_size() == 2);
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    IRBuilder<> IRB(&I);
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    assert(isX86_MMX || S1->getType()->isVectorTy());

    // SExt and ICmpNE below must apply to individual elements of input
    // vectors. In case of x86mmx arguments, cast them to appropriate vector
    // types and back.
    Type *T = isX86_MMX ?
        getMMXVectorTy(EltSizeInBits) : S1->getType();
    if (isX86_MMX) {
      S1 = IRB.CreateBitCast(S1, T);
      S2 = IRB.CreateBitCast(S2, T);
    }
    Value *S1_ext = IRB.CreateSExt(
        IRB.CreateICmpNE(S1, Constant::getNullValue(T)), T);
    Value *S2_ext = IRB.CreateSExt(
        IRB.CreateICmpNE(S2, Constant::getNullValue(T)), T);
    if (isX86_MMX) {
      Type *X86_MMXTy = Type::getX86_MMXTy(*MS.C);
      S1_ext = IRB.CreateBitCast(S1_ext, X86_MMXTy);
      S2_ext = IRB.CreateBitCast(S2_ext, X86_MMXTy);
    }

    Function *ShadowFn = Intrinsic::getDeclaration(
        F.getParent(), getSignedPackIntrinsic(I.getIntrinsicID()));

    Value *S =
        IRB.CreateCall(ShadowFn, {S1_ext, S2_ext}, "_msprop_vector_pack");
    if (isX86_MMX) S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  // Instrument sum-of-absolute-differences intrinsic.
  void handleVectorSadIntrinsic(IntrinsicInst &I) {
    const unsigned SignificantBitsPerResultElement = 16;
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    Type *ResTy = isX86_MMX ? IntegerType::get(*MS.C, 64) : I.getType();
    unsigned ZeroBitsPerResultElement =
        ResTy->getScalarSizeInBits() - SignificantBitsPerResultElement;

    IRBuilder<> IRB(&I);
    Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    S = IRB.CreateBitCast(S, ResTy);
    S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
                       ResTy);
    S = IRB.CreateLShr(S, ZeroBitsPerResultElement);
    S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  // Instrument multiply-add intrinsic.
  void handleVectorPmaddIntrinsic(IntrinsicInst &I,
                                  unsigned EltSizeInBits = 0) {
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    Type *ResTy = isX86_MMX ? getMMXVectorTy(EltSizeInBits * 2) : I.getType();
    IRBuilder<> IRB(&I);
    Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    S = IRB.CreateBitCast(S, ResTy);
    S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
                       ResTy);
    S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  // Instrument compare-packed intrinsic.
  // Basically, an or followed by sext(icmp ne 0) to end up with all-zeros or
  // all-ones shadow.
  void handleVectorComparePackedIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Type *ResTy = getShadowTy(&I);
    Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    Value *S = IRB.CreateSExt(
        IRB.CreateICmpNE(S0, Constant::getNullValue(ResTy)), ResTy);
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  // Instrument compare-scalar intrinsic.
  // This handles both cmp* intrinsics which return the result in the first
  // element of a vector, and comi* which return the result as i32.
  void handleVectorCompareScalarIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    Value *S = LowerElementShadowExtend(IRB, S0, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  // Instrument generic vector reduction intrinsics
  // by ORing together all their fields.
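  // (e.g. for vector.reduce.add, the result is fully defined only when every
  // lane of the operand is.)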
  void handleVectorReduceIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *S = IRB.CreateOrReduce(getShadow(&I, 0));
    setShadow(&I, S);
    setOrigin(&I, getOrigin(&I, 0));
  }

  // Instrument vector.reduce.or intrinsic.
  // Valid (non-poisoned) set bits in the operand pull down the
  // corresponding shadow bits.
  void handleVectorReduceOrIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *OperandShadow = getShadow(&I, 0);
    Value *OperandUnsetBits = IRB.CreateNot(I.getOperand(0));
    Value *OperandUnsetOrPoison = IRB.CreateOr(OperandUnsetBits, OperandShadow);
    // Bit N is clean if any field's bit N is 1 and unpoisoned.
    Value *OutShadowMask = IRB.CreateAndReduce(OperandUnsetOrPoison);
    // Otherwise, it is clean if every field's bit N is unpoisoned.
    Value *OrShadow = IRB.CreateOrReduce(OperandShadow);
    Value *S = IRB.CreateAnd(OutShadowMask, OrShadow);

    setShadow(&I, S);
    setOrigin(&I, getOrigin(&I, 0));
  }

  // Instrument vector.reduce.and intrinsic.
  // Valid (non-poisoned) unset bits in the operand pull down the
  // corresponding shadow bits.
  void handleVectorReduceAndIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *OperandShadow = getShadow(&I, 0);
    Value *OperandSetOrPoison = IRB.CreateOr(I.getOperand(0), OperandShadow);
    // Bit N is clean if any field's bit N is 0 and unpoisoned.
    Value *OutShadowMask = IRB.CreateAndReduce(OperandSetOrPoison);
    // Otherwise, it is clean if every field's bit N is unpoisoned.
    Value *OrShadow = IRB.CreateOrReduce(OperandShadow);
    Value *S = IRB.CreateAnd(OutShadowMask, OrShadow);

    setShadow(&I, S);
    setOrigin(&I, getOrigin(&I, 0));
  }

  void handleStmxcsr(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    Type *Ty = IRB.getInt32Ty();
    Value *ShadowPtr =
        getShadowOriginPtr(Addr, IRB, Ty, Align(1), /*isStore*/ true).first;

    IRB.CreateStore(getCleanShadow(Ty),
                    IRB.CreatePointerCast(ShadowPtr, Ty->getPointerTo()));

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);
  }

  void handleLdmxcsr(IntrinsicInst &I) {
    if (!InsertChecks) return;

    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    Type *Ty = IRB.getInt32Ty();
    const Align Alignment = Align(1);
    Value *ShadowPtr, *OriginPtr;
    std::tie(ShadowPtr, OriginPtr) =
        getShadowOriginPtr(Addr, IRB, Ty, Alignment, /*isStore*/ false);

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    Value *Shadow = IRB.CreateAlignedLoad(Ty, ShadowPtr, Alignment, "_ldmxcsr");
    Value *Origin = MS.TrackOrigins ? IRB.CreateLoad(MS.OriginTy, OriginPtr)
                                    : getCleanOrigin();
    insertShadowCheck(Shadow, Origin, &I);
  }

  void handleMaskedStore(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *V = I.getArgOperand(0);
    Value *Addr = I.getArgOperand(1);
    const Align Alignment(
        cast<ConstantInt>(I.getArgOperand(2))->getZExtValue());
    Value *Mask = I.getArgOperand(3);
    Value *Shadow = getShadow(V);

    Value *ShadowPtr;
    Value *OriginPtr;
    std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
        Addr, IRB, Shadow->getType(), Alignment, /*isStore*/ true);

    if (ClCheckAccessAddress) {
      insertShadowCheck(Addr, &I);
      // An uninitialized mask is kind of like an uninitialized address, but
      // not as scary.
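      // E.g. a mask like <i1 true, i1 undef> leaves the set of stored-to
      // bytes undefined, so we report it just like an uninitialized address
      // (again, only when ClCheckAccessAddress is set).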
3129 insertShadowCheck(Mask, &I); 3130 } 3131 3132 IRB.CreateMaskedStore(Shadow, ShadowPtr, Alignment, Mask); 3133 3134 if (MS.TrackOrigins) { 3135 auto &DL = F.getParent()->getDataLayout(); 3136 paintOrigin(IRB, getOrigin(V), OriginPtr, 3137 DL.getTypeStoreSize(Shadow->getType()), 3138 std::max(Alignment, kMinOriginAlignment)); 3139 } 3140 } 3141 3142 bool handleMaskedLoad(IntrinsicInst &I) { 3143 IRBuilder<> IRB(&I); 3144 Value *Addr = I.getArgOperand(0); 3145 const Align Alignment( 3146 cast<ConstantInt>(I.getArgOperand(1))->getZExtValue()); 3147 Value *Mask = I.getArgOperand(2); 3148 Value *PassThru = I.getArgOperand(3); 3149 3150 Type *ShadowTy = getShadowTy(&I); 3151 Value *ShadowPtr, *OriginPtr; 3152 if (PropagateShadow) { 3153 std::tie(ShadowPtr, OriginPtr) = 3154 getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false); 3155 setShadow(&I, IRB.CreateMaskedLoad(ShadowTy, ShadowPtr, Alignment, Mask, 3156 getShadow(PassThru), "_msmaskedld")); 3157 } else { 3158 setShadow(&I, getCleanShadow(&I)); 3159 } 3160 3161 if (ClCheckAccessAddress) { 3162 insertShadowCheck(Addr, &I); 3163 insertShadowCheck(Mask, &I); 3164 } 3165 3166 if (MS.TrackOrigins) { 3167 if (PropagateShadow) { 3168 // Choose between PassThru's and the loaded value's origins. 3169 Value *MaskedPassThruShadow = IRB.CreateAnd( 3170 getShadow(PassThru), IRB.CreateSExt(IRB.CreateNeg(Mask), ShadowTy)); 3171 3172 Value *Acc = IRB.CreateExtractElement( 3173 MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), 0)); 3174 for (int i = 1, N = cast<FixedVectorType>(PassThru->getType()) 3175 ->getNumElements(); 3176 i < N; ++i) { 3177 Value *More = IRB.CreateExtractElement( 3178 MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), i)); 3179 Acc = IRB.CreateOr(Acc, More); 3180 } 3181 3182 Value *Origin = IRB.CreateSelect( 3183 IRB.CreateICmpNE(Acc, Constant::getNullValue(Acc->getType())), 3184 getOrigin(PassThru), IRB.CreateLoad(MS.OriginTy, OriginPtr)); 3185 3186 setOrigin(&I, Origin); 3187 } else { 3188 setOrigin(&I, getCleanOrigin()); 3189 } 3190 } 3191 return true; 3192 } 3193 3194 // Instrument BMI / BMI2 intrinsics. 3195 // All of these intrinsics are Z = I(X, Y) 3196 // where the types of all operands and the result match, and are either i32 or i64. 3197 // The following instrumentation happens to work for all of them: 3198 // Sz = I(Sx, Y) | (sext (Sy != 0)) 3199 void handleBmiIntrinsic(IntrinsicInst &I) { 3200 IRBuilder<> IRB(&I); 3201 Type *ShadowTy = getShadowTy(&I); 3202 3203 // If any bit of the mask operand is poisoned, then the whole thing is. 3204 Value *SMask = getShadow(&I, 1); 3205 SMask = IRB.CreateSExt(IRB.CreateICmpNE(SMask, getCleanShadow(ShadowTy)), 3206 ShadowTy); 3207 // Apply the same intrinsic to the shadow of the first operand. 3208 Value *S = IRB.CreateCall(I.getCalledFunction(), 3209 {getShadow(&I, 0), I.getOperand(1)}); 3210 S = IRB.CreateOr(SMask, S); 3211 setShadow(&I, S); 3212 setOriginForNaryOp(I); 3213 } 3214 3215 SmallVector<int, 8> getPclmulMask(unsigned Width, bool OddElements) { 3216 SmallVector<int, 8> Mask; 3217 for (unsigned X = OddElements ? 1 : 0; X < Width; X += 2) { 3218 Mask.append(2, X); 3219 } 3220 return Mask; 3221 } 3222 3223 // Instrument pclmul intrinsics. 3224 // These intrinsics operate either on odd or on even elements of the input 3225 // vectors, depending on the constant in the 3rd argument, ignoring the rest. 
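  // (Per the PCLMULQDQ definition, bit 0 of the immediate selects the quadword
  // of the first operand and bit 4 selects the quadword of the second one,
  // hence the Imm & 0x01 / Imm & 0x10 tests below.)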
3226 // Replace the unused elements with copies of the used ones, ex: 3227 // (0, 1, 2, 3) -> (0, 0, 2, 2) (even case) 3228 // or 3229 // (0, 1, 2, 3) -> (1, 1, 3, 3) (odd case) 3230 // and then apply the usual shadow combining logic. 3231 void handlePclmulIntrinsic(IntrinsicInst &I) { 3232 IRBuilder<> IRB(&I); 3233 unsigned Width = 3234 cast<FixedVectorType>(I.getArgOperand(0)->getType())->getNumElements(); 3235 assert(isa<ConstantInt>(I.getArgOperand(2)) && 3236 "pclmul 3rd operand must be a constant"); 3237 unsigned Imm = cast<ConstantInt>(I.getArgOperand(2))->getZExtValue(); 3238 Value *Shuf0 = IRB.CreateShuffleVector(getShadow(&I, 0), 3239 getPclmulMask(Width, Imm & 0x01)); 3240 Value *Shuf1 = IRB.CreateShuffleVector(getShadow(&I, 1), 3241 getPclmulMask(Width, Imm & 0x10)); 3242 ShadowAndOriginCombiner SOC(this, IRB); 3243 SOC.Add(Shuf0, getOrigin(&I, 0)); 3244 SOC.Add(Shuf1, getOrigin(&I, 1)); 3245 SOC.Done(&I); 3246 } 3247 3248 // Instrument _mm_*_sd intrinsics 3249 void handleUnarySdIntrinsic(IntrinsicInst &I) { 3250 IRBuilder<> IRB(&I); 3251 Value *First = getShadow(&I, 0); 3252 Value *Second = getShadow(&I, 1); 3253 // High word of first operand, low word of second 3254 Value *Shadow = 3255 IRB.CreateShuffleVector(First, Second, llvm::makeArrayRef<int>({2, 1})); 3256 3257 setShadow(&I, Shadow); 3258 setOriginForNaryOp(I); 3259 } 3260 3261 void handleBinarySdIntrinsic(IntrinsicInst &I) { 3262 IRBuilder<> IRB(&I); 3263 Value *First = getShadow(&I, 0); 3264 Value *Second = getShadow(&I, 1); 3265 Value *OrShadow = IRB.CreateOr(First, Second); 3266 // High word of first operand, low word of both OR'd together 3267 Value *Shadow = IRB.CreateShuffleVector(First, OrShadow, 3268 llvm::makeArrayRef<int>({2, 1})); 3269 3270 setShadow(&I, Shadow); 3271 setOriginForNaryOp(I); 3272 } 3273 3274 // Instrument abs intrinsic. 3275 // handleUnknownIntrinsic can't handle it because of the last 3276 // is_int_min_poison argument which does not match the result type. 3277 void handleAbsIntrinsic(IntrinsicInst &I) { 3278 assert(I.getType()->isIntOrIntVectorTy()); 3279 assert(I.getArgOperand(0)->getType() == I.getType()); 3280 3281 // FIXME: Handle is_int_min_poison. 
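    // For now the operand's shadow is reused unchanged; schematically,
    //   %r = call i8 @llvm.abs.i8(i8 %x, i1 false)  gets  Sr = Sx,
    // a plain passthrough that ignores the is_int_min_poison flag.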
3282 IRBuilder<> IRB(&I); 3283 setShadow(&I, getShadow(&I, 0)); 3284 setOrigin(&I, getOrigin(&I, 0)); 3285 } 3286 3287 void visitIntrinsicInst(IntrinsicInst &I) { 3288 switch (I.getIntrinsicID()) { 3289 case Intrinsic::abs: 3290 handleAbsIntrinsic(I); 3291 break; 3292 case Intrinsic::lifetime_start: 3293 handleLifetimeStart(I); 3294 break; 3295 case Intrinsic::launder_invariant_group: 3296 case Intrinsic::strip_invariant_group: 3297 handleInvariantGroup(I); 3298 break; 3299 case Intrinsic::bswap: 3300 handleBswap(I); 3301 break; 3302 case Intrinsic::masked_store: 3303 handleMaskedStore(I); 3304 break; 3305 case Intrinsic::masked_load: 3306 handleMaskedLoad(I); 3307 break; 3308 case Intrinsic::vector_reduce_and: 3309 handleVectorReduceAndIntrinsic(I); 3310 break; 3311 case Intrinsic::vector_reduce_or: 3312 handleVectorReduceOrIntrinsic(I); 3313 break; 3314 case Intrinsic::vector_reduce_add: 3315 case Intrinsic::vector_reduce_xor: 3316 case Intrinsic::vector_reduce_mul: 3317 handleVectorReduceIntrinsic(I); 3318 break; 3319 case Intrinsic::x86_sse_stmxcsr: 3320 handleStmxcsr(I); 3321 break; 3322 case Intrinsic::x86_sse_ldmxcsr: 3323 handleLdmxcsr(I); 3324 break; 3325 case Intrinsic::x86_avx512_vcvtsd2usi64: 3326 case Intrinsic::x86_avx512_vcvtsd2usi32: 3327 case Intrinsic::x86_avx512_vcvtss2usi64: 3328 case Intrinsic::x86_avx512_vcvtss2usi32: 3329 case Intrinsic::x86_avx512_cvttss2usi64: 3330 case Intrinsic::x86_avx512_cvttss2usi: 3331 case Intrinsic::x86_avx512_cvttsd2usi64: 3332 case Intrinsic::x86_avx512_cvttsd2usi: 3333 case Intrinsic::x86_avx512_cvtusi2ss: 3334 case Intrinsic::x86_avx512_cvtusi642sd: 3335 case Intrinsic::x86_avx512_cvtusi642ss: 3336 handleVectorConvertIntrinsic(I, 1, true); 3337 break; 3338 case Intrinsic::x86_sse2_cvtsd2si64: 3339 case Intrinsic::x86_sse2_cvtsd2si: 3340 case Intrinsic::x86_sse2_cvtsd2ss: 3341 case Intrinsic::x86_sse2_cvttsd2si64: 3342 case Intrinsic::x86_sse2_cvttsd2si: 3343 case Intrinsic::x86_sse_cvtss2si64: 3344 case Intrinsic::x86_sse_cvtss2si: 3345 case Intrinsic::x86_sse_cvttss2si64: 3346 case Intrinsic::x86_sse_cvttss2si: 3347 handleVectorConvertIntrinsic(I, 1); 3348 break; 3349 case Intrinsic::x86_sse_cvtps2pi: 3350 case Intrinsic::x86_sse_cvttps2pi: 3351 handleVectorConvertIntrinsic(I, 2); 3352 break; 3353 3354 case Intrinsic::x86_avx512_psll_w_512: 3355 case Intrinsic::x86_avx512_psll_d_512: 3356 case Intrinsic::x86_avx512_psll_q_512: 3357 case Intrinsic::x86_avx512_pslli_w_512: 3358 case Intrinsic::x86_avx512_pslli_d_512: 3359 case Intrinsic::x86_avx512_pslli_q_512: 3360 case Intrinsic::x86_avx512_psrl_w_512: 3361 case Intrinsic::x86_avx512_psrl_d_512: 3362 case Intrinsic::x86_avx512_psrl_q_512: 3363 case Intrinsic::x86_avx512_psra_w_512: 3364 case Intrinsic::x86_avx512_psra_d_512: 3365 case Intrinsic::x86_avx512_psra_q_512: 3366 case Intrinsic::x86_avx512_psrli_w_512: 3367 case Intrinsic::x86_avx512_psrli_d_512: 3368 case Intrinsic::x86_avx512_psrli_q_512: 3369 case Intrinsic::x86_avx512_psrai_w_512: 3370 case Intrinsic::x86_avx512_psrai_d_512: 3371 case Intrinsic::x86_avx512_psrai_q_512: 3372 case Intrinsic::x86_avx512_psra_q_256: 3373 case Intrinsic::x86_avx512_psra_q_128: 3374 case Intrinsic::x86_avx512_psrai_q_256: 3375 case Intrinsic::x86_avx512_psrai_q_128: 3376 case Intrinsic::x86_avx2_psll_w: 3377 case Intrinsic::x86_avx2_psll_d: 3378 case Intrinsic::x86_avx2_psll_q: 3379 case Intrinsic::x86_avx2_pslli_w: 3380 case Intrinsic::x86_avx2_pslli_d: 3381 case Intrinsic::x86_avx2_pslli_q: 3382 case Intrinsic::x86_avx2_psrl_w: 3383 case 
Intrinsic::x86_avx2_psrl_d: 3384 case Intrinsic::x86_avx2_psrl_q: 3385 case Intrinsic::x86_avx2_psra_w: 3386 case Intrinsic::x86_avx2_psra_d: 3387 case Intrinsic::x86_avx2_psrli_w: 3388 case Intrinsic::x86_avx2_psrli_d: 3389 case Intrinsic::x86_avx2_psrli_q: 3390 case Intrinsic::x86_avx2_psrai_w: 3391 case Intrinsic::x86_avx2_psrai_d: 3392 case Intrinsic::x86_sse2_psll_w: 3393 case Intrinsic::x86_sse2_psll_d: 3394 case Intrinsic::x86_sse2_psll_q: 3395 case Intrinsic::x86_sse2_pslli_w: 3396 case Intrinsic::x86_sse2_pslli_d: 3397 case Intrinsic::x86_sse2_pslli_q: 3398 case Intrinsic::x86_sse2_psrl_w: 3399 case Intrinsic::x86_sse2_psrl_d: 3400 case Intrinsic::x86_sse2_psrl_q: 3401 case Intrinsic::x86_sse2_psra_w: 3402 case Intrinsic::x86_sse2_psra_d: 3403 case Intrinsic::x86_sse2_psrli_w: 3404 case Intrinsic::x86_sse2_psrli_d: 3405 case Intrinsic::x86_sse2_psrli_q: 3406 case Intrinsic::x86_sse2_psrai_w: 3407 case Intrinsic::x86_sse2_psrai_d: 3408 case Intrinsic::x86_mmx_psll_w: 3409 case Intrinsic::x86_mmx_psll_d: 3410 case Intrinsic::x86_mmx_psll_q: 3411 case Intrinsic::x86_mmx_pslli_w: 3412 case Intrinsic::x86_mmx_pslli_d: 3413 case Intrinsic::x86_mmx_pslli_q: 3414 case Intrinsic::x86_mmx_psrl_w: 3415 case Intrinsic::x86_mmx_psrl_d: 3416 case Intrinsic::x86_mmx_psrl_q: 3417 case Intrinsic::x86_mmx_psra_w: 3418 case Intrinsic::x86_mmx_psra_d: 3419 case Intrinsic::x86_mmx_psrli_w: 3420 case Intrinsic::x86_mmx_psrli_d: 3421 case Intrinsic::x86_mmx_psrli_q: 3422 case Intrinsic::x86_mmx_psrai_w: 3423 case Intrinsic::x86_mmx_psrai_d: 3424 handleVectorShiftIntrinsic(I, /* Variable */ false); 3425 break; 3426 case Intrinsic::x86_avx2_psllv_d: 3427 case Intrinsic::x86_avx2_psllv_d_256: 3428 case Intrinsic::x86_avx512_psllv_d_512: 3429 case Intrinsic::x86_avx2_psllv_q: 3430 case Intrinsic::x86_avx2_psllv_q_256: 3431 case Intrinsic::x86_avx512_psllv_q_512: 3432 case Intrinsic::x86_avx2_psrlv_d: 3433 case Intrinsic::x86_avx2_psrlv_d_256: 3434 case Intrinsic::x86_avx512_psrlv_d_512: 3435 case Intrinsic::x86_avx2_psrlv_q: 3436 case Intrinsic::x86_avx2_psrlv_q_256: 3437 case Intrinsic::x86_avx512_psrlv_q_512: 3438 case Intrinsic::x86_avx2_psrav_d: 3439 case Intrinsic::x86_avx2_psrav_d_256: 3440 case Intrinsic::x86_avx512_psrav_d_512: 3441 case Intrinsic::x86_avx512_psrav_q_128: 3442 case Intrinsic::x86_avx512_psrav_q_256: 3443 case Intrinsic::x86_avx512_psrav_q_512: 3444 handleVectorShiftIntrinsic(I, /* Variable */ true); 3445 break; 3446 3447 case Intrinsic::x86_sse2_packsswb_128: 3448 case Intrinsic::x86_sse2_packssdw_128: 3449 case Intrinsic::x86_sse2_packuswb_128: 3450 case Intrinsic::x86_sse41_packusdw: 3451 case Intrinsic::x86_avx2_packsswb: 3452 case Intrinsic::x86_avx2_packssdw: 3453 case Intrinsic::x86_avx2_packuswb: 3454 case Intrinsic::x86_avx2_packusdw: 3455 handleVectorPackIntrinsic(I); 3456 break; 3457 3458 case Intrinsic::x86_mmx_packsswb: 3459 case Intrinsic::x86_mmx_packuswb: 3460 handleVectorPackIntrinsic(I, 16); 3461 break; 3462 3463 case Intrinsic::x86_mmx_packssdw: 3464 handleVectorPackIntrinsic(I, 32); 3465 break; 3466 3467 case Intrinsic::x86_mmx_psad_bw: 3468 case Intrinsic::x86_sse2_psad_bw: 3469 case Intrinsic::x86_avx2_psad_bw: 3470 handleVectorSadIntrinsic(I); 3471 break; 3472 3473 case Intrinsic::x86_sse2_pmadd_wd: 3474 case Intrinsic::x86_avx2_pmadd_wd: 3475 case Intrinsic::x86_ssse3_pmadd_ub_sw_128: 3476 case Intrinsic::x86_avx2_pmadd_ub_sw: 3477 handleVectorPmaddIntrinsic(I); 3478 break; 3479 3480 case Intrinsic::x86_ssse3_pmadd_ub_sw: 3481 handleVectorPmaddIntrinsic(I, 8); 
3482 break; 3483 3484 case Intrinsic::x86_mmx_pmadd_wd: 3485 handleVectorPmaddIntrinsic(I, 16); 3486 break; 3487 3488 case Intrinsic::x86_sse_cmp_ss: 3489 case Intrinsic::x86_sse2_cmp_sd: 3490 case Intrinsic::x86_sse_comieq_ss: 3491 case Intrinsic::x86_sse_comilt_ss: 3492 case Intrinsic::x86_sse_comile_ss: 3493 case Intrinsic::x86_sse_comigt_ss: 3494 case Intrinsic::x86_sse_comige_ss: 3495 case Intrinsic::x86_sse_comineq_ss: 3496 case Intrinsic::x86_sse_ucomieq_ss: 3497 case Intrinsic::x86_sse_ucomilt_ss: 3498 case Intrinsic::x86_sse_ucomile_ss: 3499 case Intrinsic::x86_sse_ucomigt_ss: 3500 case Intrinsic::x86_sse_ucomige_ss: 3501 case Intrinsic::x86_sse_ucomineq_ss: 3502 case Intrinsic::x86_sse2_comieq_sd: 3503 case Intrinsic::x86_sse2_comilt_sd: 3504 case Intrinsic::x86_sse2_comile_sd: 3505 case Intrinsic::x86_sse2_comigt_sd: 3506 case Intrinsic::x86_sse2_comige_sd: 3507 case Intrinsic::x86_sse2_comineq_sd: 3508 case Intrinsic::x86_sse2_ucomieq_sd: 3509 case Intrinsic::x86_sse2_ucomilt_sd: 3510 case Intrinsic::x86_sse2_ucomile_sd: 3511 case Intrinsic::x86_sse2_ucomigt_sd: 3512 case Intrinsic::x86_sse2_ucomige_sd: 3513 case Intrinsic::x86_sse2_ucomineq_sd: 3514 handleVectorCompareScalarIntrinsic(I); 3515 break; 3516 3517 case Intrinsic::x86_sse_cmp_ps: 3518 case Intrinsic::x86_sse2_cmp_pd: 3519 // FIXME: For x86_avx_cmp_pd_256 and x86_avx_cmp_ps_256 this function 3520 // generates reasonably looking IR that fails in the backend with "Do not 3521 // know how to split the result of this operator!". 3522 handleVectorComparePackedIntrinsic(I); 3523 break; 3524 3525 case Intrinsic::x86_bmi_bextr_32: 3526 case Intrinsic::x86_bmi_bextr_64: 3527 case Intrinsic::x86_bmi_bzhi_32: 3528 case Intrinsic::x86_bmi_bzhi_64: 3529 case Intrinsic::x86_bmi_pdep_32: 3530 case Intrinsic::x86_bmi_pdep_64: 3531 case Intrinsic::x86_bmi_pext_32: 3532 case Intrinsic::x86_bmi_pext_64: 3533 handleBmiIntrinsic(I); 3534 break; 3535 3536 case Intrinsic::x86_pclmulqdq: 3537 case Intrinsic::x86_pclmulqdq_256: 3538 case Intrinsic::x86_pclmulqdq_512: 3539 handlePclmulIntrinsic(I); 3540 break; 3541 3542 case Intrinsic::x86_sse41_round_sd: 3543 handleUnarySdIntrinsic(I); 3544 break; 3545 case Intrinsic::x86_sse2_max_sd: 3546 case Intrinsic::x86_sse2_min_sd: 3547 handleBinarySdIntrinsic(I); 3548 break; 3549 3550 case Intrinsic::fshl: 3551 case Intrinsic::fshr: 3552 handleFunnelShift(I); 3553 break; 3554 3555 case Intrinsic::is_constant: 3556 // The result of llvm.is.constant() is always defined. 3557 setShadow(&I, getCleanShadow(&I)); 3558 setOrigin(&I, getCleanOrigin()); 3559 break; 3560 3561 default: 3562 if (!handleUnknownIntrinsic(I)) 3563 visitInstruction(I); 3564 break; 3565 } 3566 } 3567 3568 void visitLibAtomicLoad(CallBase &CB) { 3569 // Since we use getNextNode here, we can't have CB terminate the BB. 3570 assert(isa<CallInst>(CB)); 3571 3572 IRBuilder<> IRB(&CB); 3573 Value *Size = CB.getArgOperand(0); 3574 Value *SrcPtr = CB.getArgOperand(1); 3575 Value *DstPtr = CB.getArgOperand(2); 3576 Value *Ordering = CB.getArgOperand(3); 3577 // Convert the call to have at least Acquire ordering to make sure 3578 // the shadow operations aren't reordered before it. 
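    // makeAddAcquireOrderingTable() builds a small constant vector mapping
    // each libatomic memory-order value to one that is at least Acquire
    // (schematically: relaxed/consume/acquire -> acquire, release/acq_rel ->
    // acq_rel, seq_cst -> seq_cst); the extractelement below does the lookup.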
    Value *NewOrdering =
        IRB.CreateExtractElement(makeAddAcquireOrderingTable(IRB), Ordering);
    CB.setArgOperand(3, NewOrdering);

    IRBuilder<> NextIRB(CB.getNextNode());
    NextIRB.SetCurrentDebugLocation(CB.getDebugLoc());

    Value *SrcShadowPtr, *SrcOriginPtr;
    std::tie(SrcShadowPtr, SrcOriginPtr) =
        getShadowOriginPtr(SrcPtr, NextIRB, NextIRB.getInt8Ty(), Align(1),
                           /*isStore*/ false);
    Value *DstShadowPtr =
        getShadowOriginPtr(DstPtr, NextIRB, NextIRB.getInt8Ty(), Align(1),
                           /*isStore*/ true)
            .first;

    NextIRB.CreateMemCpy(DstShadowPtr, Align(1), SrcShadowPtr, Align(1), Size);
    if (MS.TrackOrigins) {
      Value *SrcOrigin = NextIRB.CreateAlignedLoad(MS.OriginTy, SrcOriginPtr,
                                                   kMinOriginAlignment);
      Value *NewOrigin = updateOrigin(SrcOrigin, NextIRB);
      NextIRB.CreateCall(MS.MsanSetOriginFn, {DstPtr, Size, NewOrigin});
    }
  }

  void visitLibAtomicStore(CallBase &CB) {
    IRBuilder<> IRB(&CB);
    Value *Size = CB.getArgOperand(0);
    Value *DstPtr = CB.getArgOperand(2);
    Value *Ordering = CB.getArgOperand(3);
    // Convert the call to have at least Release ordering to make sure
    // the shadow operations aren't reordered after it.
    Value *NewOrdering =
        IRB.CreateExtractElement(makeAddReleaseOrderingTable(IRB), Ordering);
    CB.setArgOperand(3, NewOrdering);

    Value *DstShadowPtr =
        getShadowOriginPtr(DstPtr, IRB, IRB.getInt8Ty(), Align(1),
                           /*isStore*/ true)
            .first;

    // Atomic store always paints clean shadow/origin. See file header.
    IRB.CreateMemSet(DstShadowPtr, getCleanShadow(IRB.getInt8Ty()), Size,
                     Align(1));
  }

  void visitCallBase(CallBase &CB) {
    assert(!CB.getMetadata("nosanitize"));
    if (CB.isInlineAsm()) {
      // For inline asm (either a call to an asm function or a callbr
      // instruction), do the usual thing: check argument shadow and mark all
      // outputs as clean. Note that any side effects of the inline asm that
      // are not immediately visible in its constraints are not handled.
      if (ClHandleAsmConservative && MS.CompileKernel)
        visitAsmInstruction(CB);
      else
        visitInstruction(CB);
      return;
    }
    LibFunc LF;
    if (TLI->getLibFunc(CB, LF)) {
      // libatomic.a functions need special handling because there isn't a
      // good way to intercept them or to compile the library with
      // instrumentation.
      switch (LF) {
      case LibFunc_atomic_load:
        if (!isa<CallInst>(CB)) {
          llvm::errs() << "MSAN -- cannot instrument invoke of libatomic load."
                          " Ignoring!\n";
          break;
        }
        visitLibAtomicLoad(CB);
        return;
      case LibFunc_atomic_store:
        visitLibAtomicStore(CB);
        return;
      default:
        break;
      }
    }

    if (auto *Call = dyn_cast<CallInst>(&CB)) {
      assert(!isa<IntrinsicInst>(Call) && "intrinsics are handled elsewhere");

      // We are going to insert code that relies on the fact that the callee
      // will become a non-readonly function after it is instrumented by us. To
      // prevent this code from being optimized out, mark that function
      // non-readonly in advance.
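      // (E.g. if the callee kept readnone, the stores to __msan_param_tls
      // emitted below could be considered dead and dropped by later passes.)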
      AttrBuilder B;
      B.addAttribute(Attribute::ReadOnly)
          .addAttribute(Attribute::ReadNone)
          .addAttribute(Attribute::WriteOnly)
          .addAttribute(Attribute::ArgMemOnly)
          .addAttribute(Attribute::Speculatable);

      Call->removeFnAttrs(B);
      if (Function *Func = Call->getCalledFunction()) {
        Func->removeFnAttrs(B);
      }

      maybeMarkSanitizerLibraryCallNoBuiltin(Call, TLI);
    }
    IRBuilder<> IRB(&CB);
    bool MayCheckCall = ClEagerChecks;
    if (Function *Func = CB.getCalledFunction()) {
      // __sanitizer_unaligned_{load,store} functions may be called by users
      // and always expect shadows in the TLS. So don't check them.
      MayCheckCall &= !Func->getName().startswith("__sanitizer_unaligned_");
    }

    unsigned ArgOffset = 0;
    LLVM_DEBUG(dbgs() << "  CallSite: " << CB << "\n");
    for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End;
         ++ArgIt) {
      Value *A = *ArgIt;
      unsigned i = ArgIt - CB.arg_begin();
      if (!A->getType()->isSized()) {
        LLVM_DEBUG(dbgs() << "Arg " << i << " is not sized: " << CB << "\n");
        continue;
      }
      unsigned Size = 0;
      Value *Store = nullptr;
      // Compute the Shadow for arg even if it is ByVal, because
      // in that case getShadow() will copy the actual arg shadow to
      // __msan_param_tls.
      Value *ArgShadow = getShadow(A);
      Value *ArgShadowBase = getShadowPtrForArgument(A, IRB, ArgOffset);
      LLVM_DEBUG(dbgs() << "  Arg#" << i << ": " << *A
                        << " Shadow: " << *ArgShadow << "\n");
      bool ArgIsInitialized = false;
      const DataLayout &DL = F.getParent()->getDataLayout();

      bool ByVal = CB.paramHasAttr(i, Attribute::ByVal);
      bool NoUndef = CB.paramHasAttr(i, Attribute::NoUndef);
      bool EagerCheck = MayCheckCall && !ByVal && NoUndef;

      if (EagerCheck) {
        insertShadowCheck(A, &CB);
        Size = DL.getTypeAllocSize(A->getType());
      } else {
        if (ByVal) {
          // ByVal requires some special handling as it's too big for a single
          // load.
          assert(A->getType()->isPointerTy() &&
                 "ByVal argument is not a pointer!");
          Size = DL.getTypeAllocSize(CB.getParamByValType(i));
          if (ArgOffset + Size > kParamTLSSize)
            break;
          const MaybeAlign ParamAlignment(CB.getParamAlign(i));
          MaybeAlign Alignment = llvm::None;
          if (ParamAlignment)
            Alignment = std::min(*ParamAlignment, kShadowTLSAlignment);
          Value *AShadowPtr =
              getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), Alignment,
                                 /*isStore*/ false)
                  .first;

          Store = IRB.CreateMemCpy(ArgShadowBase, Alignment, AShadowPtr,
                                   Alignment, Size);
          // TODO(glider): need to copy origins.
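          // (Illustrative: for 'void f(struct S s)' lowered as byval, the
          // whole struct's shadow is memcpy'd into __msan_param_tls rather
          // than stored as a single SSA value.)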
3739 } else { 3740 // Any other parameters mean we need bit-grained tracking of uninit 3741 // data 3742 Size = DL.getTypeAllocSize(A->getType()); 3743 if (ArgOffset + Size > kParamTLSSize) 3744 break; 3745 Store = IRB.CreateAlignedStore(ArgShadow, ArgShadowBase, 3746 kShadowTLSAlignment); 3747 Constant *Cst = dyn_cast<Constant>(ArgShadow); 3748 if (Cst && Cst->isNullValue()) 3749 ArgIsInitialized = true; 3750 } 3751 if (MS.TrackOrigins && !ArgIsInitialized) 3752 IRB.CreateStore(getOrigin(A), 3753 getOriginPtrForArgument(A, IRB, ArgOffset)); 3754 (void)Store; 3755 assert(Store != nullptr); 3756 LLVM_DEBUG(dbgs() << " Param:" << *Store << "\n"); 3757 } 3758 assert(Size != 0); 3759 ArgOffset += alignTo(Size, kShadowTLSAlignment); 3760 } 3761 LLVM_DEBUG(dbgs() << " done with call args\n"); 3762 3763 FunctionType *FT = CB.getFunctionType(); 3764 if (FT->isVarArg()) { 3765 VAHelper->visitCallBase(CB, IRB); 3766 } 3767 3768 // Now, get the shadow for the RetVal. 3769 if (!CB.getType()->isSized()) 3770 return; 3771 // Don't emit the epilogue for musttail call returns. 3772 if (isa<CallInst>(CB) && cast<CallInst>(CB).isMustTailCall()) 3773 return; 3774 3775 if (MayCheckCall && CB.hasRetAttr(Attribute::NoUndef)) { 3776 setShadow(&CB, getCleanShadow(&CB)); 3777 setOrigin(&CB, getCleanOrigin()); 3778 return; 3779 } 3780 3781 IRBuilder<> IRBBefore(&CB); 3782 // Until we have full dynamic coverage, make sure the retval shadow is 0. 3783 Value *Base = getShadowPtrForRetval(&CB, IRBBefore); 3784 IRBBefore.CreateAlignedStore(getCleanShadow(&CB), Base, 3785 kShadowTLSAlignment); 3786 BasicBlock::iterator NextInsn; 3787 if (isa<CallInst>(CB)) { 3788 NextInsn = ++CB.getIterator(); 3789 assert(NextInsn != CB.getParent()->end()); 3790 } else { 3791 BasicBlock *NormalDest = cast<InvokeInst>(CB).getNormalDest(); 3792 if (!NormalDest->getSinglePredecessor()) { 3793 // FIXME: this case is tricky, so we are just conservative here. 3794 // Perhaps we need to split the edge between this BB and NormalDest, 3795 // but a naive attempt to use SplitEdge leads to a crash. 3796 setShadow(&CB, getCleanShadow(&CB)); 3797 setOrigin(&CB, getCleanOrigin()); 3798 return; 3799 } 3800 // FIXME: NextInsn is likely in a basic block that has not been visited yet. 3801 // Anything inserted there will be instrumented by MSan later! 3802 NextInsn = NormalDest->getFirstInsertionPt(); 3803 assert(NextInsn != NormalDest->end() && 3804 "Could not find insertion point for retval shadow load"); 3805 } 3806 IRBuilder<> IRBAfter(&*NextInsn); 3807 Value *RetvalShadow = IRBAfter.CreateAlignedLoad( 3808 getShadowTy(&CB), getShadowPtrForRetval(&CB, IRBAfter), 3809 kShadowTLSAlignment, "_msret"); 3810 setShadow(&CB, RetvalShadow); 3811 if (MS.TrackOrigins) 3812 setOrigin(&CB, IRBAfter.CreateLoad(MS.OriginTy, 3813 getOriginPtrForRetval(IRBAfter))); 3814 } 3815 3816 bool isAMustTailRetVal(Value *RetVal) { 3817 if (auto *I = dyn_cast<BitCastInst>(RetVal)) { 3818 RetVal = I->getOperand(0); 3819 } 3820 if (auto *I = dyn_cast<CallInst>(RetVal)) { 3821 return I->isMustTailCall(); 3822 } 3823 return false; 3824 } 3825 3826 void visitReturnInst(ReturnInst &I) { 3827 IRBuilder<> IRB(&I); 3828 Value *RetVal = I.getReturnValue(); 3829 if (!RetVal) return; 3830 // Don't emit the epilogue for musttail call returns. 
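    // (A musttail call's return value shadow is left in TLS by the callee
    // itself, and the musttail contract forbids inserting anything between
    // the call and the ret anyway.)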
    if (isAMustTailRetVal(RetVal)) return;
    Value *ShadowPtr = getShadowPtrForRetval(RetVal, IRB);
    bool HasNoUndef = F.hasRetAttribute(Attribute::NoUndef);
    bool StoreShadow = !(ClEagerChecks && HasNoUndef);
    // FIXME: Consider using SpecialCaseList to specify a list of functions that
    // must always return fully initialized values. For now, we hardcode "main".
    bool EagerCheck = (ClEagerChecks && HasNoUndef) || (F.getName() == "main");

    Value *Shadow = getShadow(RetVal);
    bool StoreOrigin = true;
    if (EagerCheck) {
      insertShadowCheck(RetVal, &I);
      Shadow = getCleanShadow(RetVal);
      StoreOrigin = false;
    }

    // The caller may still expect information passed over TLS if we pass our
    // check.
    if (StoreShadow) {
      IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
      if (MS.TrackOrigins && StoreOrigin)
        IRB.CreateStore(getOrigin(RetVal), getOriginPtrForRetval(IRB));
    }
  }

  void visitPHINode(PHINode &I) {
    IRBuilder<> IRB(&I);
    if (!PropagateShadow) {
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
      return;
    }

    ShadowPHINodes.push_back(&I);
    setShadow(&I, IRB.CreatePHI(getShadowTy(&I), I.getNumIncomingValues(),
                                "_msphi_s"));
    if (MS.TrackOrigins)
      setOrigin(&I, IRB.CreatePHI(MS.OriginTy, I.getNumIncomingValues(),
                                  "_msphi_o"));
  }

  Value *getLocalVarDescription(AllocaInst &I) {
    SmallString<2048> StackDescriptionStorage;
    raw_svector_ostream StackDescription(StackDescriptionStorage);
    // We create a string with a description of the stack allocation and
    // pass it into __msan_set_alloca_origin.
    // It will be printed by the run-time if stack-originated UMR is found.
    // The first 4 bytes of the string are set to '----' and will be replaced
    // by the stack origin id computed by the runtime at the first call.
    StackDescription << "----" << I.getName() << "@" << F.getName();
    return createPrivateNonConstGlobalForString(*F.getParent(),
                                                StackDescription.str());
  }

  void poisonAllocaUserspace(AllocaInst &I, IRBuilder<> &IRB, Value *Len) {
    if (PoisonStack && ClPoisonStackWithCall) {
      IRB.CreateCall(MS.MsanPoisonStackFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
    } else {
      Value *ShadowBase, *OriginBase;
      std::tie(ShadowBase, OriginBase) = getShadowOriginPtr(
          &I, IRB, IRB.getInt8Ty(), Align(1), /*isStore*/ true);

      Value *PoisonValue = IRB.getInt8(PoisonStack ?
ClPoisonStackPattern : 0); 3896 IRB.CreateMemSet(ShadowBase, PoisonValue, Len, I.getAlign()); 3897 } 3898 3899 if (PoisonStack && MS.TrackOrigins) { 3900 Value *Descr = getLocalVarDescription(I); 3901 IRB.CreateCall(MS.MsanSetAllocaOrigin4Fn, 3902 {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len, 3903 IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy()), 3904 IRB.CreatePointerCast(&F, MS.IntptrTy)}); 3905 } 3906 } 3907 3908 void poisonAllocaKmsan(AllocaInst &I, IRBuilder<> &IRB, Value *Len) { 3909 Value *Descr = getLocalVarDescription(I); 3910 if (PoisonStack) { 3911 IRB.CreateCall(MS.MsanPoisonAllocaFn, 3912 {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len, 3913 IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy())}); 3914 } else { 3915 IRB.CreateCall(MS.MsanUnpoisonAllocaFn, 3916 {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len}); 3917 } 3918 } 3919 3920 void instrumentAlloca(AllocaInst &I, Instruction *InsPoint = nullptr) { 3921 if (!InsPoint) 3922 InsPoint = &I; 3923 IRBuilder<> IRB(InsPoint->getNextNode()); 3924 const DataLayout &DL = F.getParent()->getDataLayout(); 3925 uint64_t TypeSize = DL.getTypeAllocSize(I.getAllocatedType()); 3926 Value *Len = ConstantInt::get(MS.IntptrTy, TypeSize); 3927 if (I.isArrayAllocation()) 3928 Len = IRB.CreateMul(Len, I.getArraySize()); 3929 3930 if (MS.CompileKernel) 3931 poisonAllocaKmsan(I, IRB, Len); 3932 else 3933 poisonAllocaUserspace(I, IRB, Len); 3934 } 3935 3936 void visitAllocaInst(AllocaInst &I) { 3937 setShadow(&I, getCleanShadow(&I)); 3938 setOrigin(&I, getCleanOrigin()); 3939 // We'll get to this alloca later unless it's poisoned at the corresponding 3940 // llvm.lifetime.start. 3941 AllocaSet.insert(&I); 3942 } 3943 3944 void visitSelectInst(SelectInst& I) { 3945 IRBuilder<> IRB(&I); 3946 // a = select b, c, d 3947 Value *B = I.getCondition(); 3948 Value *C = I.getTrueValue(); 3949 Value *D = I.getFalseValue(); 3950 Value *Sb = getShadow(B); 3951 Value *Sc = getShadow(C); 3952 Value *Sd = getShadow(D); 3953 3954 // Result shadow if condition shadow is 0. 3955 Value *Sa0 = IRB.CreateSelect(B, Sc, Sd); 3956 Value *Sa1; 3957 if (I.getType()->isAggregateType()) { 3958 // To avoid "sign extending" i1 to an arbitrary aggregate type, we just do 3959 // an extra "select". This results in much more compact IR. 3960 // Sa = select Sb, poisoned, (select b, Sc, Sd) 3961 Sa1 = getPoisonedShadow(getShadowTy(I.getType())); 3962 } else { 3963 // Sa = select Sb, [ (c^d) | Sc | Sd ], [ b ? Sc : Sd ] 3964 // If Sb (condition is poisoned), look for bits in c and d that are equal 3965 // and both unpoisoned. 3966 // If !Sb (condition is unpoisoned), simply pick one of Sc and Sd. 3967 3968 // Cast arguments to shadow-compatible type. 3969 C = CreateAppToShadowCast(IRB, C); 3970 D = CreateAppToShadowCast(IRB, D); 3971 3972 // Result shadow if condition shadow is 1. 3973 Sa1 = IRB.CreateOr({IRB.CreateXor(C, D), Sc, Sd}); 3974 } 3975 Value *Sa = IRB.CreateSelect(Sb, Sa1, Sa0, "_msprop_select"); 3976 setShadow(&I, Sa); 3977 if (MS.TrackOrigins) { 3978 // Origins are always i32, so any vector conditions must be flattened. 3979 // FIXME: consider tracking vector origins for app vectors? 3980 if (B->getType()->isVectorTy()) { 3981 Type *FlatTy = getShadowTyNoVec(B->getType()); 3982 B = IRB.CreateICmpNE(IRB.CreateBitCast(B, FlatTy), 3983 ConstantInt::getNullValue(FlatTy)); 3984 Sb = IRB.CreateICmpNE(IRB.CreateBitCast(Sb, FlatTy), 3985 ConstantInt::getNullValue(FlatTy)); 3986 } 3987 // a = select b, c, d 3988 // Oa = Sb ? Ob : (b ? 
Oc : Od)
      setOrigin(
          &I, IRB.CreateSelect(Sb, getOrigin(I.getCondition()),
                               IRB.CreateSelect(B, getOrigin(I.getTrueValue()),
                                                getOrigin(I.getFalseValue()))));
    }
  }

  void visitLandingPadInst(LandingPadInst &I) {
    // Do nothing.
    // See https://github.com/google/sanitizers/issues/504
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitCatchSwitchInst(CatchSwitchInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitFuncletPadInst(FuncletPadInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitGetElementPtrInst(GetElementPtrInst &I) {
    handleShadowOr(I);
  }

  void visitExtractValueInst(ExtractValueInst &I) {
    IRBuilder<> IRB(&I);
    Value *Agg = I.getAggregateOperand();
    LLVM_DEBUG(dbgs() << "ExtractValue: " << I << "\n");
    Value *AggShadow = getShadow(Agg);
    LLVM_DEBUG(dbgs() << "   AggShadow: " << *AggShadow << "\n");
    Value *ResShadow = IRB.CreateExtractValue(AggShadow, I.getIndices());
    LLVM_DEBUG(dbgs() << "   ResShadow: " << *ResShadow << "\n");
    setShadow(&I, ResShadow);
    setOriginForNaryOp(I);
  }

  void visitInsertValueInst(InsertValueInst &I) {
    IRBuilder<> IRB(&I);
    LLVM_DEBUG(dbgs() << "InsertValue: " << I << "\n");
    Value *AggShadow = getShadow(I.getAggregateOperand());
    Value *InsShadow = getShadow(I.getInsertedValueOperand());
    LLVM_DEBUG(dbgs() << "   AggShadow: " << *AggShadow << "\n");
    LLVM_DEBUG(dbgs() << "   InsShadow: " << *InsShadow << "\n");
    Value *Res = IRB.CreateInsertValue(AggShadow, InsShadow, I.getIndices());
    LLVM_DEBUG(dbgs() << "   Res: " << *Res << "\n");
    setShadow(&I, Res);
    setOriginForNaryOp(I);
  }

  void dumpInst(Instruction &I) {
    if (CallInst *CI = dyn_cast<CallInst>(&I)) {
      errs() << "ZZZ call " << CI->getCalledFunction()->getName() << "\n";
    } else {
      errs() << "ZZZ " << I.getOpcodeName() << "\n";
    }
    errs() << "QQQ " << I << "\n";
  }

  void visitResumeInst(ResumeInst &I) {
    LLVM_DEBUG(dbgs() << "Resume: " << I << "\n");
    // Nothing to do here.
  }

  void visitCleanupReturnInst(CleanupReturnInst &CRI) {
    LLVM_DEBUG(dbgs() << "CleanupReturn: " << CRI << "\n");
    // Nothing to do here.
  }

  void visitCatchReturnInst(CatchReturnInst &CRI) {
    LLVM_DEBUG(dbgs() << "CatchReturn: " << CRI << "\n");
    // Nothing to do here.
  }

  void instrumentAsmArgument(Value *Operand, Instruction &I, IRBuilder<> &IRB,
                             const DataLayout &DL, bool isOutput) {
    // For each assembly argument, we check its value for being initialized.
    // If the argument is a pointer, we assume it points to a single element
    // of the corresponding type (or to an 8-byte word, if the type is unsized).
    // Each such pointer is instrumented with a call to the runtime library.
    Type *OpType = Operand->getType();
    // Check the operand value itself.
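    // E.g. for asm("movdqu %%xmm0, %0" : "=m"(buf)) the "=m" output arrives
    // here as a pointer operand: the pointer itself is checked, and the bytes
    // it addresses are then unpoisoned via __msan_instrument_asm_store, while
    // plain "r" inputs are only checked. (An illustrative sketch; the exact
    // constraints seen here depend on the frontend.)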
    insertShadowCheck(Operand, &I);
    if (!OpType->isPointerTy() || !isOutput) {
      assert(!isOutput);
      return;
    }
    Type *ElType = OpType->getPointerElementType();
    if (!ElType->isSized())
      return;
    int Size = DL.getTypeStoreSize(ElType);
    Value *Ptr = IRB.CreatePointerCast(Operand, IRB.getInt8PtrTy());
    Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size);
    IRB.CreateCall(MS.MsanInstrumentAsmStoreFn, {Ptr, SizeVal});
  }

  /// Get the number of output arguments returned by pointers.
  int getNumOutputArgs(InlineAsm *IA, CallBase *CB) {
    int NumRetOutputs = 0;
    int NumOutputs = 0;
    Type *RetTy = cast<Value>(CB)->getType();
    if (!RetTy->isVoidTy()) {
      // Register outputs are returned via the CallInst return value.
      auto *ST = dyn_cast<StructType>(RetTy);
      if (ST)
        NumRetOutputs = ST->getNumElements();
      else
        NumRetOutputs = 1;
    }
    InlineAsm::ConstraintInfoVector Constraints = IA->ParseConstraints();
    for (const InlineAsm::ConstraintInfo &Info : Constraints) {
      switch (Info.Type) {
      case InlineAsm::isOutput:
        NumOutputs++;
        break;
      default:
        break;
      }
    }
    return NumOutputs - NumRetOutputs;
  }

  void visitAsmInstruction(Instruction &I) {
    // Conservative inline assembly handling: check for poisoned shadow of
    // asm() arguments, then unpoison the result and all the memory locations
    // pointed to by those arguments.
    // An inline asm() statement in C++ contains lists of input and output
    // arguments used by the assembly code. These are mapped to operands of the
    // CallInst as follows:
    //  - nR register outputs ("=r") are returned by value in a single structure
    //    (SSA value of the CallInst);
    //  - nO other outputs ("=m" and others) are returned by pointer as the
    //    first nO operands of the CallInst;
    //  - nI inputs ("r", "m" and others) are passed to CallInst as the
    //    remaining nI operands.
    // The total number of asm() arguments in the source is nR+nO+nI, and the
    // corresponding CallInst has nO+nI+1 operands (the last operand is the
    // function to be called).
    const DataLayout &DL = F.getParent()->getDataLayout();
    CallBase *CB = cast<CallBase>(&I);
    IRBuilder<> IRB(&I);
    InlineAsm *IA = cast<InlineAsm>(CB->getCalledOperand());
    int OutputArgs = getNumOutputArgs(IA, CB);
    // The last operand of a CallInst is the function itself.
    int NumOperands = CB->getNumOperands() - 1;

    // Check input arguments. We do so before unpoisoning output arguments, so
    // that we won't overwrite uninit values before checking them.
    for (int i = OutputArgs; i < NumOperands; i++) {
      Value *Operand = CB->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ false);
    }
    // Unpoison output arguments. This must happen before the actual InlineAsm
    // call, so that the shadow for memory published in the asm() statement
    // remains valid.
    for (int i = 0; i < OutputArgs; i++) {
      Value *Operand = CB->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ true);
    }

    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitFreezeInst(FreezeInst &I) {
    // Freeze always returns a fully defined value.
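    // E.g. %y = freeze i32 %x yields a defined (if arbitrary) value even when
    // %x is undef or poison, so %y's shadow is all zeros and needs no check.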
4158 setShadow(&I, getCleanShadow(&I)); 4159 setOrigin(&I, getCleanOrigin()); 4160 } 4161 4162 void visitInstruction(Instruction &I) { 4163 // Everything else: stop propagating and check for poisoned shadow. 4164 if (ClDumpStrictInstructions) 4165 dumpInst(I); 4166 LLVM_DEBUG(dbgs() << "DEFAULT: " << I << "\n"); 4167 for (size_t i = 0, n = I.getNumOperands(); i < n; i++) { 4168 Value *Operand = I.getOperand(i); 4169 if (Operand->getType()->isSized()) 4170 insertShadowCheck(Operand, &I); 4171 } 4172 setShadow(&I, getCleanShadow(&I)); 4173 setOrigin(&I, getCleanOrigin()); 4174 } 4175 }; 4176 4177 /// AMD64-specific implementation of VarArgHelper. 4178 struct VarArgAMD64Helper : public VarArgHelper { 4179 // An unfortunate workaround for asymmetric lowering of va_arg stuff. 4180 // See a comment in visitCallBase for more details. 4181 static const unsigned AMD64GpEndOffset = 48; // AMD64 ABI Draft 0.99.6 p3.5.7 4182 static const unsigned AMD64FpEndOffsetSSE = 176; 4183 // If SSE is disabled, fp_offset in va_list is zero. 4184 static const unsigned AMD64FpEndOffsetNoSSE = AMD64GpEndOffset; 4185 4186 unsigned AMD64FpEndOffset; 4187 Function &F; 4188 MemorySanitizer &MS; 4189 MemorySanitizerVisitor &MSV; 4190 Value *VAArgTLSCopy = nullptr; 4191 Value *VAArgTLSOriginCopy = nullptr; 4192 Value *VAArgOverflowSize = nullptr; 4193 4194 SmallVector<CallInst*, 16> VAStartInstrumentationList; 4195 4196 enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory }; 4197 4198 VarArgAMD64Helper(Function &F, MemorySanitizer &MS, 4199 MemorySanitizerVisitor &MSV) 4200 : F(F), MS(MS), MSV(MSV) { 4201 AMD64FpEndOffset = AMD64FpEndOffsetSSE; 4202 for (const auto &Attr : F.getAttributes().getFnAttrs()) { 4203 if (Attr.isStringAttribute() && 4204 (Attr.getKindAsString() == "target-features")) { 4205 if (Attr.getValueAsString().contains("-sse")) 4206 AMD64FpEndOffset = AMD64FpEndOffsetNoSSE; 4207 break; 4208 } 4209 } 4210 } 4211 4212 ArgKind classifyArgument(Value* arg) { 4213 // A very rough approximation of X86_64 argument classification rules. 4214 Type *T = arg->getType(); 4215 if (T->isFPOrFPVectorTy() || T->isX86_MMXTy()) 4216 return AK_FloatingPoint; 4217 if (T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64) 4218 return AK_GeneralPurpose; 4219 if (T->isPointerTy()) 4220 return AK_GeneralPurpose; 4221 return AK_Memory; 4222 } 4223 4224 // For VarArg functions, store the argument shadow in an ABI-specific format 4225 // that corresponds to va_list layout. 4226 // We do this because Clang lowers va_arg in the frontend, and this pass 4227 // only sees the low level code that deals with va_list internals. 4228 // A much easier alternative (provided that Clang emits va_arg instructions) 4229 // would have been to associate each live instance of va_list with a copy of 4230 // MSanParamTLS, and extract shadow on va_arg() call in the argument list 4231 // order. 4232 void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override { 4233 unsigned GpOffset = 0; 4234 unsigned FpOffset = AMD64GpEndOffset; 4235 unsigned OverflowOffset = AMD64FpEndOffset; 4236 const DataLayout &DL = F.getParent()->getDataLayout(); 4237 for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End; 4238 ++ArgIt) { 4239 Value *A = *ArgIt; 4240 unsigned ArgNo = CB.getArgOperandNo(ArgIt); 4241 bool IsFixed = ArgNo < CB.getFunctionType()->getNumParams(); 4242 bool IsByVal = CB.paramHasAttr(ArgNo, Attribute::ByVal); 4243 if (IsByVal) { 4244 // ByVal arguments always go to the overflow area. 
4245 // Fixed arguments passed through the overflow area will be stepped 4246 // over by va_start, so don't count them towards the offset. 4247 if (IsFixed) 4248 continue; 4249 assert(A->getType()->isPointerTy()); 4250 Type *RealTy = CB.getParamByValType(ArgNo); 4251 uint64_t ArgSize = DL.getTypeAllocSize(RealTy); 4252 Value *ShadowBase = getShadowPtrForVAArgument( 4253 RealTy, IRB, OverflowOffset, alignTo(ArgSize, 8)); 4254 Value *OriginBase = nullptr; 4255 if (MS.TrackOrigins) 4256 OriginBase = getOriginPtrForVAArgument(RealTy, IRB, OverflowOffset); 4257 OverflowOffset += alignTo(ArgSize, 8); 4258 if (!ShadowBase) 4259 continue; 4260 Value *ShadowPtr, *OriginPtr; 4261 std::tie(ShadowPtr, OriginPtr) = 4262 MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), kShadowTLSAlignment, 4263 /*isStore*/ false); 4264 4265 IRB.CreateMemCpy(ShadowBase, kShadowTLSAlignment, ShadowPtr, 4266 kShadowTLSAlignment, ArgSize); 4267 if (MS.TrackOrigins) 4268 IRB.CreateMemCpy(OriginBase, kShadowTLSAlignment, OriginPtr, 4269 kShadowTLSAlignment, ArgSize); 4270 } else { 4271 ArgKind AK = classifyArgument(A); 4272 if (AK == AK_GeneralPurpose && GpOffset >= AMD64GpEndOffset) 4273 AK = AK_Memory; 4274 if (AK == AK_FloatingPoint && FpOffset >= AMD64FpEndOffset) 4275 AK = AK_Memory; 4276 Value *ShadowBase, *OriginBase = nullptr; 4277 switch (AK) { 4278 case AK_GeneralPurpose: 4279 ShadowBase = 4280 getShadowPtrForVAArgument(A->getType(), IRB, GpOffset, 8); 4281 if (MS.TrackOrigins) 4282 OriginBase = 4283 getOriginPtrForVAArgument(A->getType(), IRB, GpOffset); 4284 GpOffset += 8; 4285 break; 4286 case AK_FloatingPoint: 4287 ShadowBase = 4288 getShadowPtrForVAArgument(A->getType(), IRB, FpOffset, 16); 4289 if (MS.TrackOrigins) 4290 OriginBase = 4291 getOriginPtrForVAArgument(A->getType(), IRB, FpOffset); 4292 FpOffset += 16; 4293 break; 4294 case AK_Memory: 4295 if (IsFixed) 4296 continue; 4297 uint64_t ArgSize = DL.getTypeAllocSize(A->getType()); 4298 ShadowBase = 4299 getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset, 8); 4300 if (MS.TrackOrigins) 4301 OriginBase = 4302 getOriginPtrForVAArgument(A->getType(), IRB, OverflowOffset); 4303 OverflowOffset += alignTo(ArgSize, 8); 4304 } 4305 // Take fixed arguments into account for GpOffset and FpOffset, 4306 // but don't actually store shadows for them. 4307 // TODO(glider): don't call get*PtrForVAArgument() for them. 4308 if (IsFixed) 4309 continue; 4310 if (!ShadowBase) 4311 continue; 4312 Value *Shadow = MSV.getShadow(A); 4313 IRB.CreateAlignedStore(Shadow, ShadowBase, kShadowTLSAlignment); 4314 if (MS.TrackOrigins) { 4315 Value *Origin = MSV.getOrigin(A); 4316 unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType()); 4317 MSV.paintOrigin(IRB, Origin, OriginBase, StoreSize, 4318 std::max(kShadowTLSAlignment, kMinOriginAlignment)); 4319 } 4320 } 4321 } 4322 Constant *OverflowSize = 4323 ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AMD64FpEndOffset); 4324 IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS); 4325 } 4326 4327 /// Compute the shadow address for a given va_arg. 4328 Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, 4329 unsigned ArgOffset, unsigned ArgSize) { 4330 // Make sure we don't overflow __msan_va_arg_tls. 
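    // (kParamTLSSize is 800 bytes in this implementation, i.e. 100 eightbyte
    // slots; shadow for the tail of a longer argument list is simply not
    // propagated.)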
4331 if (ArgOffset + ArgSize > kParamTLSSize) 4332 return nullptr; 4333 Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy); 4334 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 4335 return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0), 4336 "_msarg_va_s"); 4337 } 4338 4339 /// Compute the origin address for a given va_arg. 4340 Value *getOriginPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, int ArgOffset) { 4341 Value *Base = IRB.CreatePointerCast(MS.VAArgOriginTLS, MS.IntptrTy); 4342 // getOriginPtrForVAArgument() is always called after 4343 // getShadowPtrForVAArgument(), so __msan_va_arg_origin_tls can never 4344 // overflow. 4345 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 4346 return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0), 4347 "_msarg_va_o"); 4348 } 4349 4350 void unpoisonVAListTagForInst(IntrinsicInst &I) { 4351 IRBuilder<> IRB(&I); 4352 Value *VAListTag = I.getArgOperand(0); 4353 Value *ShadowPtr, *OriginPtr; 4354 const Align Alignment = Align(8); 4355 std::tie(ShadowPtr, OriginPtr) = 4356 MSV.getShadowOriginPtr(VAListTag, IRB, IRB.getInt8Ty(), Alignment, 4357 /*isStore*/ true); 4358 4359 // Unpoison the whole __va_list_tag. 4360 // FIXME: magic ABI constants. 4361 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 4362 /* size */ 24, Alignment, false); 4363 // We shouldn't need to zero out the origins, as they're only checked for 4364 // nonzero shadow. 4365 } 4366 4367 void visitVAStartInst(VAStartInst &I) override { 4368 if (F.getCallingConv() == CallingConv::Win64) 4369 return; 4370 VAStartInstrumentationList.push_back(&I); 4371 unpoisonVAListTagForInst(I); 4372 } 4373 4374 void visitVACopyInst(VACopyInst &I) override { 4375 if (F.getCallingConv() == CallingConv::Win64) return; 4376 unpoisonVAListTagForInst(I); 4377 } 4378 4379 void finalizeInstrumentation() override { 4380 assert(!VAArgOverflowSize && !VAArgTLSCopy && 4381 "finalizeInstrumentation called twice"); 4382 if (!VAStartInstrumentationList.empty()) { 4383 // If there is a va_start in this function, make a backup copy of 4384 // va_arg_tls somewhere in the function entry block. 4385 IRBuilder<> IRB(MSV.FnPrologueEnd); 4386 VAArgOverflowSize = 4387 IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS); 4388 Value *CopySize = 4389 IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AMD64FpEndOffset), 4390 VAArgOverflowSize); 4391 VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize); 4392 IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8), CopySize); 4393 if (MS.TrackOrigins) { 4394 VAArgTLSOriginCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize); 4395 IRB.CreateMemCpy(VAArgTLSOriginCopy, Align(8), MS.VAArgOriginTLS, 4396 Align(8), CopySize); 4397 } 4398 } 4399 4400 // Instrument va_start. 4401 // Copy va_list shadow from the backup copy of the TLS contents. 
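    // The offsets 16 and 8 used below correspond to the SysV AMD64 va_list
    // layout:
    //   struct __va_list_tag {
    //     unsigned gp_offset;       // 0
    //     unsigned fp_offset;       // 4
    //     void *overflow_arg_area;  // 8
    //     void *reg_save_area;      // 16
    //   };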
4402 for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) { 4403 CallInst *OrigInst = VAStartInstrumentationList[i]; 4404 IRBuilder<> IRB(OrigInst->getNextNode()); 4405 Value *VAListTag = OrigInst->getArgOperand(0); 4406 4407 Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C); 4408 Value *RegSaveAreaPtrPtr = IRB.CreateIntToPtr( 4409 IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 4410 ConstantInt::get(MS.IntptrTy, 16)), 4411 PointerType::get(RegSaveAreaPtrTy, 0)); 4412 Value *RegSaveAreaPtr = 4413 IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr); 4414 Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr; 4415 const Align Alignment = Align(16); 4416 std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) = 4417 MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(), 4418 Alignment, /*isStore*/ true); 4419 IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment, 4420 AMD64FpEndOffset); 4421 if (MS.TrackOrigins) 4422 IRB.CreateMemCpy(RegSaveAreaOriginPtr, Alignment, VAArgTLSOriginCopy, 4423 Alignment, AMD64FpEndOffset); 4424 Type *OverflowArgAreaPtrTy = Type::getInt64PtrTy(*MS.C); 4425 Value *OverflowArgAreaPtrPtr = IRB.CreateIntToPtr( 4426 IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 4427 ConstantInt::get(MS.IntptrTy, 8)), 4428 PointerType::get(OverflowArgAreaPtrTy, 0)); 4429 Value *OverflowArgAreaPtr = 4430 IRB.CreateLoad(OverflowArgAreaPtrTy, OverflowArgAreaPtrPtr); 4431 Value *OverflowArgAreaShadowPtr, *OverflowArgAreaOriginPtr; 4432 std::tie(OverflowArgAreaShadowPtr, OverflowArgAreaOriginPtr) = 4433 MSV.getShadowOriginPtr(OverflowArgAreaPtr, IRB, IRB.getInt8Ty(), 4434 Alignment, /*isStore*/ true); 4435 Value *SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSCopy, 4436 AMD64FpEndOffset); 4437 IRB.CreateMemCpy(OverflowArgAreaShadowPtr, Alignment, SrcPtr, Alignment, 4438 VAArgOverflowSize); 4439 if (MS.TrackOrigins) { 4440 SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSOriginCopy, 4441 AMD64FpEndOffset); 4442 IRB.CreateMemCpy(OverflowArgAreaOriginPtr, Alignment, SrcPtr, Alignment, 4443 VAArgOverflowSize); 4444 } 4445 } 4446 } 4447 }; 4448 4449 /// MIPS64-specific implementation of VarArgHelper. 
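/// Unlike the AMD64 helper, this one models a single contiguous register save
/// area: every vararg is spilled to one linear region, so a single running
/// offset into __msan_va_arg_tls suffices.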
struct VarArgMIPS64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  VarArgMIPS64Helper(Function &F, MemorySanitizer &MS,
                     MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {
    unsigned VAArgOffset = 0;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (auto ArgIt = CB.arg_begin() + CB.getFunctionType()->getNumParams(),
              End = CB.arg_end();
         ArgIt != End; ++ArgIt) {
      Triple TargetTriple(F.getParent()->getTargetTriple());
      Value *A = *ArgIt;
      Value *Base;
      uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
      if (TargetTriple.getArch() == Triple::mips64) {
        // Adjust the shadow offset for arguments smaller than 8 bytes to
        // match the placement of the argument bits on a big-endian system.
        if (ArgSize < 8)
          VAArgOffset += (8 - ArgSize);
      }
      Base = getShadowPtrForVAArgument(A->getType(), IRB, VAArgOffset, ArgSize);
      VAArgOffset += ArgSize;
      VAArgOffset = alignTo(VAArgOffset, 8);
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(), VAArgOffset);
    // We reuse VAArgOverflowSizeTLS to hold the total size of all varargs
    // here, which avoids the creation of a separate VAArgSizeTLS member.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
4496 if (ArgOffset + ArgSize > kParamTLSSize) 4497 return nullptr; 4498 Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy); 4499 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 4500 return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0), 4501 "_msarg"); 4502 } 4503 4504 void visitVAStartInst(VAStartInst &I) override { 4505 IRBuilder<> IRB(&I); 4506 VAStartInstrumentationList.push_back(&I); 4507 Value *VAListTag = I.getArgOperand(0); 4508 Value *ShadowPtr, *OriginPtr; 4509 const Align Alignment = Align(8); 4510 std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr( 4511 VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true); 4512 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 4513 /* size */ 8, Alignment, false); 4514 } 4515 4516 void visitVACopyInst(VACopyInst &I) override { 4517 IRBuilder<> IRB(&I); 4518 VAStartInstrumentationList.push_back(&I); 4519 Value *VAListTag = I.getArgOperand(0); 4520 Value *ShadowPtr, *OriginPtr; 4521 const Align Alignment = Align(8); 4522 std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr( 4523 VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true); 4524 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 4525 /* size */ 8, Alignment, false); 4526 } 4527 4528 void finalizeInstrumentation() override { 4529 assert(!VAArgSize && !VAArgTLSCopy && 4530 "finalizeInstrumentation called twice"); 4531 IRBuilder<> IRB(MSV.FnPrologueEnd); 4532 VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS); 4533 Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0), 4534 VAArgSize); 4535 4536 if (!VAStartInstrumentationList.empty()) { 4537 // If there is a va_start in this function, make a backup copy of 4538 // va_arg_tls somewhere in the function entry block. 4539 VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize); 4540 IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8), CopySize); 4541 } 4542 4543 // Instrument va_start. 4544 // Copy va_list shadow from the backup copy of the TLS contents. 4545 for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) { 4546 CallInst *OrigInst = VAStartInstrumentationList[i]; 4547 IRBuilder<> IRB(OrigInst->getNextNode()); 4548 Value *VAListTag = OrigInst->getArgOperand(0); 4549 Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C); 4550 Value *RegSaveAreaPtrPtr = 4551 IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 4552 PointerType::get(RegSaveAreaPtrTy, 0)); 4553 Value *RegSaveAreaPtr = 4554 IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr); 4555 Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr; 4556 const Align Alignment = Align(8); 4557 std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) = 4558 MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(), 4559 Alignment, /*isStore*/ true); 4560 IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment, 4561 CopySize); 4562 } 4563 } 4564 }; 4565 4566 /// AArch64-specific implementation of VarArgHelper. 4567 struct VarArgAArch64Helper : public VarArgHelper { 4568 static const unsigned kAArch64GrArgSize = 64; 4569 static const unsigned kAArch64VrArgSize = 128; 4570 4571 static const unsigned AArch64GrBegOffset = 0; 4572 static const unsigned AArch64GrEndOffset = kAArch64GrArgSize; 4573 // Make VR space aligned to 16 bytes. 
  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.FnPrologueEnd);
    VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
    Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0),
                                    VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8), CopySize);
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      const Align Alignment = Align(8);
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy,
                       Alignment, CopySize);
    }
  }
};

/// AArch64-specific implementation of VarArgHelper.
struct VarArgAArch64Helper : public VarArgHelper {
  static const unsigned kAArch64GrArgSize = 64;
  static const unsigned kAArch64VrArgSize = 128;

  static const unsigned AArch64GrBegOffset = 0;
  static const unsigned AArch64GrEndOffset = kAArch64GrArgSize;
  // Make VR space aligned to 16 bytes.
  static const unsigned AArch64VrBegOffset = AArch64GrEndOffset;
  static const unsigned AArch64VrEndOffset = AArch64VrBegOffset
                                             + kAArch64VrArgSize;
  static const unsigned AArch64VAEndOffset = AArch64VrEndOffset;

  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };

  VarArgAArch64Helper(Function &F, MemorySanitizer &MS,
                      MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  ArgKind classifyArgument(Value* arg) {
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy())
      return AK_FloatingPoint;
    if ((T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
        || (T->isPointerTy()))
      return AK_GeneralPurpose;
    return AK_Memory;
  }

  // The instrumentation stores the argument shadow in a non-ABI-specific
  // format because it does not know which argument is named (since Clang,
  // as in the x86_64 case, lowers va_args in the frontend and this pass only
  // sees the low-level code that deals with va_list internals).
  // The first eight GR registers are saved in the first 64 bytes of the
  // va_arg TLS array, followed by the first eight FP/SIMD registers, and
  // then the remaining arguments.
  // Using constant offsets within the va_arg TLS array allows fast copying
  // in the finalize instrumentation.
  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {
    unsigned GrOffset = AArch64GrBegOffset;
    unsigned VrOffset = AArch64VrBegOffset;
    unsigned OverflowOffset = AArch64VAEndOffset;

    const DataLayout &DL = F.getParent()->getDataLayout();
    for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End;
         ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CB.getArgOperandNo(ArgIt);
      bool IsFixed = ArgNo < CB.getFunctionType()->getNumParams();
      ArgKind AK = classifyArgument(A);
      if (AK == AK_GeneralPurpose && GrOffset >= AArch64GrEndOffset)
        AK = AK_Memory;
      if (AK == AK_FloatingPoint && VrOffset >= AArch64VrEndOffset)
        AK = AK_Memory;
      Value *Base;
      switch (AK) {
      case AK_GeneralPurpose:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, GrOffset, 8);
        GrOffset += 8;
        break;
      case AK_FloatingPoint:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, VrOffset, 8);
        VrOffset += 16;
        break;
      case AK_Memory:
        // Don't count fixed arguments in the overflow area - va_start will
        // skip right over them.
        if (IsFixed)
          continue;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        Base = getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset,
                                         alignTo(ArgSize, 8));
        OverflowOffset += alignTo(ArgSize, 8);
        break;
      }
      // Count Gp/Vr fixed arguments to their respective offsets, but don't
      // bother to actually store a shadow.
      if (IsFixed)
        continue;
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }
    Constant *OverflowSize =
        ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AArch64VAEndOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }
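  // Illustrative layout (assuming a call like 'test(n, (int64_t)a,
  // (double)d)' where 'n' is the only named argument): 'n' advances GrOffset
  // to 8 without a store; the shadow of 'a' is stored at TLS offset 8 inside
  // the GR region [0, 64); the shadow of 'd' at offset 64, the start of the
  // VR region [64, 192), whose slots are 16 bytes each; arguments that
  // overflow the registers would be stored from offset 192 on.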
  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }

  // Retrieve a va_list field of 'void*' size.
  Value* getVAField64(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtrPtr =
        IRB.CreateIntToPtr(
            IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                          ConstantInt::get(MS.IntptrTy, offset)),
            Type::getInt64PtrTy(*MS.C));
    return IRB.CreateLoad(Type::getInt64Ty(*MS.C), SaveAreaPtrPtr);
  }

  // Retrieve a va_list field of 'int' size.
  Value* getVAField32(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtr =
        IRB.CreateIntToPtr(
            IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                          ConstantInt::get(MS.IntptrTy, offset)),
            Type::getInt32PtrTy(*MS.C));
    Value *SaveArea32 = IRB.CreateLoad(IRB.getInt32Ty(), SaveAreaPtr);
    return IRB.CreateSExt(SaveArea32, MS.IntptrTy);
  }
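  // For reference, the AAPCS64 va_list layout whose fields the offsets
  // passed to getVAField64/getVAField32 below correspond to:
  //   struct va_list {
  //     void *__stack;   // offset 0
  //     void *__gr_top;  // offset 8
  //     void *__vr_top;  // offset 16
  //     int __gr_offs;   // offset 24
  //     int __vr_offs;   // offset 28
  //   };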
  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.FnPrologueEnd);
      VAArgOverflowSize =
          IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AArch64VAEndOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8), CopySize);
    }

    Value *GrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64GrArgSize);
    Value *VrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64VrArgSize);

    // Instrument va_start, copy va_list shadow from the backup copy of
    // the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());

      Value *VAListTag = OrigInst->getArgOperand(0);

      // The variadic ABI for AArch64 creates two areas to save the incoming
      // argument registers (one for the 64-bit general registers x0-x7 and
      // another for the 128-bit FP/SIMD registers v0-v7).
      // We then need to propagate the shadow arguments to both regions
      // 'va::__gr_top + va::__gr_offs' and 'va::__vr_top + va::__vr_offs'.
      // The remaining arguments are saved in the shadow for 'va::stack'.
      // One caveat is that only the unnamed arguments should be propagated,
      // while the call site instrumentation saved the shadow of 'all' the
      // arguments. So to copy the shadow values from the va_arg TLS array
      // we need to adjust the offset for both the GR and VR regions based
      // on the __{gr,vr}_offs values (since they are set based on the
      // incoming named arguments).

      // Read the stack pointer from the va_list.
      Value *StackSaveAreaPtr = getVAField64(IRB, VAListTag, 0);

      // Read both __gr_top and __gr_offs and add them up.
      Value *GrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 8);
      Value *GrOffSaveArea = getVAField32(IRB, VAListTag, 24);

      Value *GrRegSaveAreaPtr = IRB.CreateAdd(GrTopSaveAreaPtr, GrOffSaveArea);

      // Read both __vr_top and __vr_offs and add them up.
      Value *VrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 16);
      Value *VrOffSaveArea = getVAField32(IRB, VAListTag, 28);

      Value *VrRegSaveAreaPtr = IRB.CreateAdd(VrTopSaveAreaPtr, VrOffSaveArea);

      // We do not know how many named arguments were used at the call site,
      // where the shadow of all arguments was saved. Since __gr_offs is
      // defined as '0 - ((8 - named_gr) * 8)', adding it to the GR region
      // size skips exactly the shadow bytes that belong to the named
      // arguments, so only the variadic portion is propagated.
      Value *GrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(GrArgSize, GrOffSaveArea);

      Value *GrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(GrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Align(8), /*isStore*/ true)
              .first;

      Value *GrSrcPtr = IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                              GrRegSaveAreaShadowPtrOff);
      Value *GrCopySize = IRB.CreateSub(GrArgSize, GrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(GrRegSaveAreaShadowPtr, Align(8), GrSrcPtr, Align(8),
                       GrCopySize);

      // Again, but for FP/SIMD values.
      Value *VrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(VrArgSize, VrOffSaveArea);

      Value *VrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(VrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Align(8), /*isStore*/ true)
              .first;

      Value *VrSrcPtr = IRB.CreateInBoundsGEP(
          IRB.getInt8Ty(),
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VrBegOffset)),
          VrRegSaveAreaShadowPtrOff);
      Value *VrCopySize = IRB.CreateSub(VrArgSize, VrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(VrRegSaveAreaShadowPtr, Align(8), VrSrcPtr, Align(8),
                       VrCopySize);
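      // E.g. (illustrative) with three named GR arguments: __gr_offs is
      // 0 - ((8 - 3) * 8) = -40, so GrRegSaveAreaShadowPtrOff is
      // 64 - 40 = 24, the TLS offset of the first variadic GR shadow, and
      // GrCopySize is 64 - 24 = 40 bytes. The VR math is analogous, with
      // 16-byte register slots.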
      // And finally for remaining arguments.
      Value *StackSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(StackSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Align(16), /*isStore*/ true)
              .first;

      Value *StackSrcPtr =
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VAEndOffset));

      IRB.CreateMemCpy(StackSaveAreaShadowPtr, Align(16), StackSrcPtr,
                       Align(16), VAArgOverflowSize);
    }
  }
};

/// PowerPC64-specific implementation of VarArgHelper.
struct VarArgPowerPC64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  VarArgPowerPC64Helper(Function &F, MemorySanitizer &MS,
                        MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {
    // For PowerPC, we need to deal with the alignment of stack arguments -
    // they are mostly aligned to 8 bytes, but vectors and i128 arrays are
    // aligned to 16 bytes, and byvals can be aligned to 8 or 16 bytes. For
    // that reason, we compute the current offset from the stack pointer
    // (which is always properly aligned) and the offset of the first vararg,
    // then subtract them.
    unsigned VAArgBase;
    Triple TargetTriple(F.getParent()->getTargetTriple());
    // The parameter save area starts 48 bytes from the frame pointer for
    // ABIv1 and 32 bytes for ABIv2. This is usually determined by the target
    // endianness, but in theory it could be overridden by a function
    // attribute.
    if (TargetTriple.getArch() == Triple::ppc64)
      VAArgBase = 48;
    else
      VAArgBase = 32;
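    // Worked example (illustrative), assuming ABIv2 (VAArgBase = 32): a
    // named i64 parameter advances VAArgOffset to 40 and, being fixed, moves
    // VAArgBase to 40 as well; a following variadic i64 is then stored at
    // shadow offset 40 - 40 = 0. Vararg shadow offsets are thus relative to
    // the first vararg, which is what the va_start instrumentation expects.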
    unsigned VAArgOffset = VAArgBase;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End;
         ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CB.getArgOperandNo(ArgIt);
      bool IsFixed = ArgNo < CB.getFunctionType()->getNumParams();
      bool IsByVal = CB.paramHasAttr(ArgNo, Attribute::ByVal);
      if (IsByVal) {
        assert(A->getType()->isPointerTy());
        Type *RealTy = CB.getParamByValType(ArgNo);
        uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
        MaybeAlign ArgAlign = CB.getParamAlign(ArgNo);
        if (!ArgAlign || *ArgAlign < Align(8))
          ArgAlign = Align(8);
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (!IsFixed) {
          Value *Base = getShadowPtrForVAArgument(
              RealTy, IRB, VAArgOffset - VAArgBase, ArgSize);
          if (Base) {
            Value *AShadowPtr, *AOriginPtr;
            std::tie(AShadowPtr, AOriginPtr) =
                MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(),
                                       kShadowTLSAlignment, /*isStore*/ false);

            IRB.CreateMemCpy(Base, kShadowTLSAlignment, AShadowPtr,
                             kShadowTLSAlignment, ArgSize);
          }
        }
        VAArgOffset += alignTo(ArgSize, 8);
      } else {
        Value *Base;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        uint64_t ArgAlign = 8;
        if (A->getType()->isArrayTy()) {
          // Arrays are aligned to element size, except for long double
          // arrays, which are aligned to 8 bytes.
          Type *ElementTy = A->getType()->getArrayElementType();
          if (!ElementTy->isPPC_FP128Ty())
            ArgAlign = DL.getTypeAllocSize(ElementTy);
        } else if (A->getType()->isVectorTy()) {
          // Vectors are naturally aligned.
          ArgAlign = DL.getTypeAllocSize(A->getType());
        }
        if (ArgAlign < 8)
          ArgAlign = 8;
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (DL.isBigEndian()) {
          // Adjust the shadow for arguments with size < 8 to match the
          // placement of bits on a big-endian system.
          if (ArgSize < 8)
            VAArgOffset += (8 - ArgSize);
        }
        if (!IsFixed) {
          Base = getShadowPtrForVAArgument(A->getType(), IRB,
                                           VAArgOffset - VAArgBase, ArgSize);
          if (Base)
            IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
        }
        VAArgOffset += ArgSize;
        VAArgOffset = alignTo(VAArgOffset, 8);
      }
      if (IsFixed)
        VAArgBase = VAArgOffset;
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(),
                                                VAArgOffset - VAArgBase);
    // We reuse VAArgOverflowSizeTLS as VAArgSizeTLS to avoid creating another
    // class member; here it holds the total size of all varargs.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    // Unpoison the whole __va_list_tag.
    // FIXME: magic ABI constants.
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }
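  // Note: in finalizeInstrumentation() below, CopySize is computed as
  // 0 + VAArgSize. The zero constant mirrors the other helpers, where a
  // fixed register-save-area size is added to the dynamic overflow size;
  // on PowerPC64 the entire vararg area is dynamically sized.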
  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.FnPrologueEnd);
    VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
    Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0),
                                    VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8), CopySize);
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      const Align Alignment = Align(8);
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy,
                       Alignment, CopySize);
    }
  }
};

/// SystemZ-specific implementation of VarArgHelper.
struct VarArgSystemZHelper : public VarArgHelper {
  static const unsigned SystemZGpOffset = 16;
  static const unsigned SystemZGpEndOffset = 56;
  static const unsigned SystemZFpOffset = 128;
  static const unsigned SystemZFpEndOffset = 160;
  static const unsigned SystemZMaxVrArgs = 8;
  static const unsigned SystemZRegSaveAreaSize = 160;
  static const unsigned SystemZOverflowOffset = 160;
  static const unsigned SystemZVAListTagSize = 32;
  static const unsigned SystemZOverflowArgAreaPtrOffset = 16;
  static const unsigned SystemZRegSaveAreaPtrOffset = 24;

  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgTLSOriginCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  enum class ArgKind {
    GeneralPurpose,
    FloatingPoint,
    Vector,
    Memory,
    Indirect,
  };

  enum class ShadowExtension { None, Zero, Sign };

  VarArgSystemZHelper(Function &F, MemorySanitizer &MS,
                      MemorySanitizerVisitor &MSV)
      : F(F), MS(MS), MSV(MSV) {}

  ArgKind classifyArgument(Type *T, bool IsSoftFloatABI) {
    // T is a SystemZABIInfo::classifyArgumentType() output, and there are
    // only a few possibilities of what it can be. In particular, enums,
    // single element structs and large types have already been taken care of.

    // Some i128 and fp128 arguments are converted to pointers only in the
    // back end.
    if (T->isIntegerTy(128) || T->isFP128Ty())
      return ArgKind::Indirect;
    if (T->isFloatingPointTy())
      return IsSoftFloatABI ? ArgKind::GeneralPurpose : ArgKind::FloatingPoint;
    if (T->isIntegerTy() || T->isPointerTy())
      return ArgKind::GeneralPurpose;
    if (T->isVectorTy())
      return ArgKind::Vector;
    return ArgKind::Memory;
  }
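  // E.g. (illustrative): an i64 or a pointer classifies as GeneralPurpose; a
  // double as FloatingPoint (or GeneralPurpose under the soft-float ABI); an
  // i128 or fp128 as Indirect, i.e. passed via pointer; a <2 x i64> as
  // Vector; anything else falls back to Memory.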
  ShadowExtension getShadowExtension(const CallBase &CB, unsigned ArgNo) {
    // ABI says: "One of the simple integer types no more than 64 bits wide.
    // ... If such an argument is shorter than 64 bits, replace it by a full
    // 64-bit integer representing the same number, using sign or zero
    // extension". Shadow for an integer argument has the same type as the
    // argument itself, so it can be sign or zero extended as well.
    bool ZExt = CB.paramHasAttr(ArgNo, Attribute::ZExt);
    bool SExt = CB.paramHasAttr(ArgNo, Attribute::SExt);
    if (ZExt) {
      assert(!SExt);
      return ShadowExtension::Zero;
    }
    if (SExt) {
      assert(!ZExt);
      return ShadowExtension::Sign;
    }
    return ShadowExtension::None;
  }

  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {
    bool IsSoftFloatABI = CB.getCalledFunction()
                              ->getFnAttribute("use-soft-float")
                              .getValueAsBool();
    unsigned GpOffset = SystemZGpOffset;
    unsigned FpOffset = SystemZFpOffset;
    unsigned VrIndex = 0;
    unsigned OverflowOffset = SystemZOverflowOffset;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End;
         ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CB.getArgOperandNo(ArgIt);
      bool IsFixed = ArgNo < CB.getFunctionType()->getNumParams();
      // SystemZABIInfo does not produce ByVal parameters.
      assert(!CB.paramHasAttr(ArgNo, Attribute::ByVal));
      Type *T = A->getType();
      ArgKind AK = classifyArgument(T, IsSoftFloatABI);
      if (AK == ArgKind::Indirect) {
        T = PointerType::get(T, 0);
        AK = ArgKind::GeneralPurpose;
      }
      if (AK == ArgKind::GeneralPurpose && GpOffset >= SystemZGpEndOffset)
        AK = ArgKind::Memory;
      if (AK == ArgKind::FloatingPoint && FpOffset >= SystemZFpEndOffset)
        AK = ArgKind::Memory;
      if (AK == ArgKind::Vector && (VrIndex >= SystemZMaxVrArgs || !IsFixed))
        AK = ArgKind::Memory;
      Value *ShadowBase = nullptr;
      Value *OriginBase = nullptr;
      ShadowExtension SE = ShadowExtension::None;
      switch (AK) {
      case ArgKind::GeneralPurpose: {
        // Always keep track of GpOffset, but store shadow only for varargs.
        uint64_t ArgSize = 8;
        if (GpOffset + ArgSize <= kParamTLSSize) {
          if (!IsFixed) {
            SE = getShadowExtension(CB, ArgNo);
            uint64_t GapSize = 0;
            if (SE == ShadowExtension::None) {
              uint64_t ArgAllocSize = DL.getTypeAllocSize(T);
              assert(ArgAllocSize <= ArgSize);
              GapSize = ArgSize - ArgAllocSize;
            }
            ShadowBase = getShadowAddrForVAArgument(IRB, GpOffset + GapSize);
            if (MS.TrackOrigins)
              OriginBase = getOriginPtrForVAArgument(IRB, GpOffset + GapSize);
          }
          GpOffset += ArgSize;
        } else {
          GpOffset = kParamTLSSize;
        }
        break;
      }
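      // E.g. for the GeneralPurpose case above (illustrative): a variadic
      // i32 argument with neither the ZExt nor the SExt attribute has
      // ArgAllocSize 4, so GapSize is 4 and its shadow is stored at
      // GpOffset + 4, right-aligned within the 8-byte slot to match the
      // big-endian register image in the save area.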
      case ArgKind::FloatingPoint: {
        // Always keep track of FpOffset, but store shadow only for varargs.
        uint64_t ArgSize = 8;
        if (FpOffset + ArgSize <= kParamTLSSize) {
          if (!IsFixed) {
            // PoP says: "A short floating-point datum requires only the
            // left-most 32 bit positions of a floating-point register".
            // Therefore, in contrast to AK_GeneralPurpose and AK_Memory,
            // don't extend shadow and don't mind the gap.
            ShadowBase = getShadowAddrForVAArgument(IRB, FpOffset);
            if (MS.TrackOrigins)
              OriginBase = getOriginPtrForVAArgument(IRB, FpOffset);
          }
          FpOffset += ArgSize;
        } else {
          FpOffset = kParamTLSSize;
        }
        break;
      }
      case ArgKind::Vector: {
        // Keep track of VrIndex. No need to store shadow, since vector
        // varargs go through AK_Memory.
        assert(IsFixed);
        VrIndex++;
        break;
      }
      case ArgKind::Memory: {
        // Keep track of OverflowOffset and store shadow only for varargs.
        // Ignore fixed args, since we need to copy only the vararg portion
        // of the overflow area shadow.
        if (!IsFixed) {
          uint64_t ArgAllocSize = DL.getTypeAllocSize(T);
          uint64_t ArgSize = alignTo(ArgAllocSize, 8);
          if (OverflowOffset + ArgSize <= kParamTLSSize) {
            SE = getShadowExtension(CB, ArgNo);
            uint64_t GapSize =
                SE == ShadowExtension::None ? ArgSize - ArgAllocSize : 0;
            ShadowBase =
                getShadowAddrForVAArgument(IRB, OverflowOffset + GapSize);
            if (MS.TrackOrigins)
              OriginBase =
                  getOriginPtrForVAArgument(IRB, OverflowOffset + GapSize);
            OverflowOffset += ArgSize;
          } else {
            OverflowOffset = kParamTLSSize;
          }
        }
        break;
      }
      case ArgKind::Indirect:
        llvm_unreachable("Indirect must be converted to GeneralPurpose");
      }
      if (ShadowBase == nullptr)
        continue;
      Value *Shadow = MSV.getShadow(A);
      if (SE != ShadowExtension::None)
        Shadow = MSV.CreateShadowCast(IRB, Shadow, IRB.getInt64Ty(),
                                      /*Signed*/ SE == ShadowExtension::Sign);
      ShadowBase = IRB.CreateIntToPtr(
          ShadowBase, PointerType::get(Shadow->getType(), 0), "_msarg_va_s");
      IRB.CreateStore(Shadow, ShadowBase);
      if (MS.TrackOrigins) {
        Value *Origin = MSV.getOrigin(A);
        unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
        MSV.paintOrigin(IRB, Origin, OriginBase, StoreSize,
                        kMinOriginAlignment);
      }
    }
    Constant *OverflowSize = ConstantInt::get(
        IRB.getInt64Ty(), OverflowOffset - SystemZOverflowOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }

  Value *getShadowAddrForVAArgument(IRBuilder<> &IRB, unsigned ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    return IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
  }

  Value *getOriginPtrForVAArgument(IRBuilder<> &IRB, int ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.VAArgOriginTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                              "_msarg_va_o");
  }

  void unpoisonVAListTagForInst(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) =
        MSV.getShadowOriginPtr(VAListTag, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     SystemZVAListTagSize, Alignment, false);
  }

  void visitVAStartInst(VAStartInst &I) override {
    VAStartInstrumentationList.push_back(&I);
    unpoisonVAListTagForInst(I);
  }

  void visitVACopyInst(VACopyInst &I) override { unpoisonVAListTagForInst(I); }
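  // For reference, the s390x va_list layout whose fields
  // SystemZOverflowArgAreaPtrOffset (16) and SystemZRegSaveAreaPtrOffset (24)
  // index into:
  //   struct va_list {
  //     long __gpr;                 // offset 0
  //     long __fpr;                 // offset 8
  //     void *__overflow_arg_area;  // offset 16
  //     void *__reg_save_area;      // offset 24
  //   };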
  void copyRegSaveArea(IRBuilder<> &IRB, Value *VAListTag) {
    Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
    Value *RegSaveAreaPtrPtr = IRB.CreateIntToPtr(
        IRB.CreateAdd(
            IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
            ConstantInt::get(MS.IntptrTy, SystemZRegSaveAreaPtrOffset)),
        PointerType::get(RegSaveAreaPtrTy, 0));
    Value *RegSaveAreaPtr = IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
    Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
    const Align Alignment = Align(8);
    std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
        MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ true);
    // TODO(iii): copy only fragments filled by visitCallBase()
    IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
                     SystemZRegSaveAreaSize);
    if (MS.TrackOrigins)
      IRB.CreateMemCpy(RegSaveAreaOriginPtr, Alignment, VAArgTLSOriginCopy,
                       Alignment, SystemZRegSaveAreaSize);
  }

  void copyOverflowArea(IRBuilder<> &IRB, Value *VAListTag) {
    Type *OverflowArgAreaPtrTy = Type::getInt64PtrTy(*MS.C);
    Value *OverflowArgAreaPtrPtr = IRB.CreateIntToPtr(
        IRB.CreateAdd(
            IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
            ConstantInt::get(MS.IntptrTy, SystemZOverflowArgAreaPtrOffset)),
        PointerType::get(OverflowArgAreaPtrTy, 0));
    Value *OverflowArgAreaPtr =
        IRB.CreateLoad(OverflowArgAreaPtrTy, OverflowArgAreaPtrPtr);
    Value *OverflowArgAreaShadowPtr, *OverflowArgAreaOriginPtr;
    const Align Alignment = Align(8);
    std::tie(OverflowArgAreaShadowPtr, OverflowArgAreaOriginPtr) =
        MSV.getShadowOriginPtr(OverflowArgAreaPtr, IRB, IRB.getInt8Ty(),
                               Alignment, /*isStore*/ true);
    Value *SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSCopy,
                                           SystemZOverflowOffset);
    IRB.CreateMemCpy(OverflowArgAreaShadowPtr, Alignment, SrcPtr, Alignment,
                     VAArgOverflowSize);
    if (MS.TrackOrigins) {
      SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSOriginCopy,
                                      SystemZOverflowOffset);
      IRB.CreateMemCpy(OverflowArgAreaOriginPtr, Alignment, SrcPtr, Alignment,
                       VAArgOverflowSize);
    }
  }
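  // E.g. (illustrative): with 24 bytes of vararg overflow recorded by
  // visitCallBase(), CopySize in finalizeInstrumentation() below is
  // SystemZOverflowOffset (160) + 24 = 184 bytes: the register save area
  // image plus the used portion of the overflow area.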
  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.FnPrologueEnd);
      VAArgOverflowSize =
          IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, SystemZOverflowOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8), CopySize);
      if (MS.TrackOrigins) {
        VAArgTLSOriginCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
        IRB.CreateMemCpy(VAArgTLSOriginCopy, Align(8), MS.VAArgOriginTLS,
                         Align(8), CopySize);
      }
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t VaStartNo = 0, VaStartNum = VAStartInstrumentationList.size();
         VaStartNo < VaStartNum; VaStartNo++) {
      CallInst *OrigInst = VAStartInstrumentationList[VaStartNo];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      copyRegSaveArea(IRB, VAListTag);
      copyOverflowArea(IRB, VAListTag);
    }
  }
};

/// A no-op implementation of VarArgHelper.
struct VarArgNoOpHelper : public VarArgHelper {
  VarArgNoOpHelper(Function &F, MemorySanitizer &MS,
                   MemorySanitizerVisitor &MSV) {}

  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {}

  void visitVAStartInst(VAStartInst &I) override {}

  void visitVACopyInst(VACopyInst &I) override {}

  void finalizeInstrumentation() override {}
};

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor) {
  // VarArg handling is implemented for AMD64, MIPS64, AArch64, PowerPC64 and
  // SystemZ. On other platforms the no-op helper is used and false positives
  // are possible.
  Triple TargetTriple(Func.getParent()->getTargetTriple());
  if (TargetTriple.getArch() == Triple::x86_64)
    return new VarArgAMD64Helper(Func, Msan, Visitor);
  else if (TargetTriple.isMIPS64())
    return new VarArgMIPS64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::aarch64)
    return new VarArgAArch64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::ppc64 ||
           TargetTriple.getArch() == Triple::ppc64le)
    return new VarArgPowerPC64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::systemz)
    return new VarArgSystemZHelper(Func, Msan, Visitor);
  else
    return new VarArgNoOpHelper(Func, Msan, Visitor);
}

bool MemorySanitizer::sanitizeFunction(Function &F, TargetLibraryInfo &TLI) {
  if (!CompileKernel && F.getName() == kMsanModuleCtorName)
    return false;

  if (F.hasFnAttribute(Attribute::DisableSanitizerInstrumentation))
    return false;

  MemorySanitizerVisitor Visitor(F, *this, TLI);

  // Clear out memory attributes: instrumented code reads and writes shadow
  // and origin memory, so attributes like readonly/readnone no longer hold.
  AttrBuilder B;
  B.addAttribute(Attribute::ReadOnly)
      .addAttribute(Attribute::ReadNone)
      .addAttribute(Attribute::WriteOnly)
      .addAttribute(Attribute::ArgMemOnly)
      .addAttribute(Attribute::Speculatable);
  F.removeFnAttrs(B);

  return Visitor.runOnFunction();
}