//===- MemorySanitizer.cpp - detector of uninitialized reads -------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
/// \file
/// This file is a part of MemorySanitizer, a detector of uninitialized
/// reads.
///
/// The algorithm of the tool is similar to Memcheck
/// (http://goo.gl/QKbem). We associate a few shadow bits with every
/// byte of the application memory, poison the shadow of the malloc-ed
/// or alloca-ed memory, load the shadow bits on every memory read,
/// propagate the shadow bits through some of the arithmetic
/// instructions (including MOV), store the shadow bits on every memory
/// write, and report a bug on some other instructions (e.g. JMP) if the
/// associated shadow is poisoned.
///
/// But there are differences too. The first and major one:
/// compiler instrumentation instead of binary instrumentation. This
/// gives us much better register allocation, possible compiler
/// optimizations and a fast start-up. But it brings a major issue
/// as well: msan needs to see all program events, including system
/// calls and reads/writes in system libraries, so we either need to
/// compile *everything* with msan or use a binary translation
/// component (e.g. DynamoRIO) to instrument pre-built libraries.
/// Another difference from Memcheck is that we use 8 shadow bits per
/// byte of application memory and use a direct shadow mapping. This
/// greatly simplifies the instrumentation code and avoids races on
/// shadow updates (Memcheck is single-threaded so races are not a
/// concern there. Memcheck uses 2 shadow bits per byte with a slow
/// path storage that uses 8 bits per byte).
///
/// The default value of shadow is 0, which means "clean" (not poisoned).
///
/// Every module initializer should call __msan_init to ensure that the
/// shadow memory is ready. On error, __msan_warning is called. Since
/// parameters and return values may be passed via registers, we have a
/// specialized thread-local shadow for return values
/// (__msan_retval_tls) and parameters (__msan_param_tls).
///
/// Origin tracking.
///
/// MemorySanitizer can track origins (allocation points) of all uninitialized
/// values. This behavior is controlled with a flag (msan-track-origins) and is
/// disabled by default.
///
/// Origins are 4-byte values created and interpreted by the runtime library.
/// They are stored in a second shadow mapping, one 4-byte value for 4 bytes
/// of application memory. Propagation of origins is basically a bunch of
/// "select" instructions that pick the origin of a dirty argument, if an
/// instruction has one.
///
/// Every 4 aligned, consecutive bytes of application memory have one origin
/// value associated with them. If these bytes contain uninitialized data
/// coming from 2 different allocations, the last store wins. Because of this,
/// MemorySanitizer reports can show unrelated origins, but this is unlikely in
/// practice.
///
/// Origins are meaningless for fully initialized values, so MemorySanitizer
/// avoids storing origin to memory when a fully initialized value is stored.
/// This way it avoids needless overwriting origin of the 4-byte region on
/// a short (i.e. 1 byte) clean store, and it is also good for performance.
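///
/// For example (an illustrative sketch, not literal pass output), shadow and
/// origin propagation for `c = a + b` amounts to:
///
/// \code
///   c        = a + b;
///   c_shadow = a_shadow | b_shadow;            // poisoned if any input is
///   c_origin = b_shadow ? b_origin : a_origin; // origin of a dirty argument
/// \endcode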
///
/// Atomic handling.
///
/// Ideally, every atomic store of application value should update the
/// corresponding shadow location in an atomic way. Unfortunately, atomic store
/// of two disjoint locations can not be done without severe slowdown.
///
/// Therefore, we implement an approximation that may err on the safe side.
/// In this implementation, every atomically accessed location in the program
/// may only change from (partially) uninitialized to fully initialized, but
/// not the other way around. We load the shadow _after_ the application load,
/// and we store the shadow _before_ the app store. Also, we always store clean
/// shadow (if the application store is atomic). This way, if the store-load
/// pair constitutes a happens-before arc, shadow store and load are correctly
/// ordered such that the load will get either the value that was stored, or
/// some later value (which is always clean).
///
/// This does not work very well with Compare-And-Swap (CAS) and
/// Read-Modify-Write (RMW) operations. To follow the above logic, CAS and RMW
/// must store the new shadow before the app operation, and load the shadow
/// after the app operation. Computers don't work this way. The current
/// implementation ignores the load aspect of CAS/RMW, always returning a clean
/// value. It implements the store part as a simple atomic store by storing a
/// clean shadow.
///
/// Instrumenting inline assembly.
///
/// For inline assembly code LLVM has little idea about which memory locations
/// become initialized depending on the arguments. It may be possible to figure
/// out which arguments are meant to point to inputs and outputs, but the
/// actual semantics can be only visible at runtime. In the Linux kernel it's
/// also possible that the arguments only indicate the offset for a base taken
/// from a segment register, so it's dangerous to treat any asm() arguments as
/// pointers. We take a conservative approach generating calls to
///   __msan_instrument_asm_store(ptr, size)
/// which defers the memory unpoisoning to the runtime library.
/// The latter can perform more complex address checks to figure out whether
/// it's safe to touch the shadow memory.
/// Like with atomic operations, we call __msan_instrument_asm_store() before
/// the assembly call, so that changes to the shadow memory will be seen by
/// other threads together with main memory initialization.
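///
/// A minimal sketch of the resulting atomic instrumentation (names are
/// illustrative, not what the pass actually emits):
///
/// \code
///   // Atomic store: the clean shadow store goes first, then the app store.
///   store_shadow(&x, /*clean=*/0);
///   atomic_store(&x, v, memory_order_release);
///
///   // Atomic load: the app load goes first, then the shadow load.
///   v = atomic_load(&x, memory_order_acquire);
///   s = load_shadow(&x);
/// \endcode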
///
/// KernelMemorySanitizer (KMSAN) implementation.
///
/// The major differences between KMSAN and MSan instrumentation are:
///  - KMSAN always tracks the origins and implies msan-keep-going=true;
///  - KMSAN allocates shadow and origin memory for each page separately, so
///    there are no explicit accesses to shadow and origin in the
///    instrumentation.
///    Shadow and origin values for a particular X-byte memory location
///    (X=1,2,4,8) are accessed through pointers obtained via the
///      __msan_metadata_ptr_for_load_X(ptr)
///      __msan_metadata_ptr_for_store_X(ptr)
///    functions. The corresponding functions check that the X-byte accesses
///    are possible and return the pointers to shadow and origin memory.
///    Arbitrary sized accesses are handled with:
///      __msan_metadata_ptr_for_load_n(ptr, size)
///      __msan_metadata_ptr_for_store_n(ptr, size);
///  - TLS variables are stored in a single per-task struct. A call to a
///    function __msan_get_context_state() returning a pointer to that struct
///    is inserted into every instrumented function before the entry block;
///  - __msan_warning() takes a 32-bit origin parameter;
///  - local variables are poisoned with __msan_poison_alloca() upon function
///    entry and unpoisoned with __msan_unpoison_alloca() before leaving the
///    function;
///  - the pass doesn't declare any global variables or add global constructors
///    to the translation unit.
///
/// Also, KMSAN currently ignores uninitialized memory passed into inline asm
/// calls, making sure we're on the safe side wrt. possible false positives.
///
/// KernelMemorySanitizer only supports X86_64 at the moment.
///
//
// FIXME: This sanitizer does not yet handle scalable vectors
//
//===----------------------------------------------------------------------===//

#include "llvm/Transforms/Instrumentation/MemorySanitizer.h"
#include "llvm/ADT/APInt.h"
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/DepthFirstIterator.h"
#include "llvm/ADT/SmallSet.h"
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringExtras.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Triple.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/Argument.h"
#include "llvm/IR/Attributes.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/CallingConv.h"
#include "llvm/IR/Constant.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/GlobalValue.h"
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/InlineAsm.h"
#include "llvm/IR/InstVisitor.h"
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/IntrinsicsX86.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/MDBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"
#include "llvm/IR/Value.h"
#include "llvm/IR/ValueMap.h"
#include "llvm/InitializePasses.h"
#include "llvm/Pass.h"
#include "llvm/Support/AtomicOrdering.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/Compiler.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/MathExtras.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Transforms/Instrumentation.h"
#include "llvm/Transforms/Utils/BasicBlockUtils.h"
#include "llvm/Transforms/Utils/Local.h"
#include "llvm/Transforms/Utils/ModuleUtils.h"
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <string>
#include <tuple>

using namespace llvm;

#define DEBUG_TYPE "msan"

static const unsigned kOriginSize = 4;
static const Align kMinOriginAlignment = Align(4);
static const Align kShadowTLSAlignment = Align(8);

// These constants must be kept in sync with the ones in msan.h.
static const unsigned kParamTLSSize = 800;
static const unsigned kRetvalTLSSize = 800;

// Access sizes are powers of two: 1, 2, 4, 8.
static const size_t kNumberOfAccessSizes = 4;

/// Track origins of uninitialized values.
///
/// Adds a section to MemorySanitizer report that points to the allocation
/// (stack or heap) the uninitialized bits came from originally.
static cl::opt<int> ClTrackOrigins(
    "msan-track-origins",
    cl::desc("Track origins (allocation sites) of poisoned memory"),
    cl::Hidden, cl::init(0));

static cl::opt<bool> ClKeepGoing("msan-keep-going",
                                 cl::desc("keep going after reporting a UMR"),
                                 cl::Hidden, cl::init(false));

static cl::opt<bool>
    ClPoisonStack("msan-poison-stack",
                  cl::desc("poison uninitialized stack variables"), cl::Hidden,
                  cl::init(true));

static cl::opt<bool> ClPoisonStackWithCall(
    "msan-poison-stack-with-call",
    cl::desc("poison uninitialized stack variables with a call"), cl::Hidden,
    cl::init(false));

static cl::opt<int> ClPoisonStackPattern(
    "msan-poison-stack-pattern",
    cl::desc("poison uninitialized stack variables with the given pattern"),
    cl::Hidden, cl::init(0xff));

static cl::opt<bool> ClPoisonUndef("msan-poison-undef",
                                   cl::desc("poison undef temps"), cl::Hidden,
                                   cl::init(true));

static cl::opt<bool>
    ClHandleICmp("msan-handle-icmp",
                 cl::desc("propagate shadow through ICmpEQ and ICmpNE"),
                 cl::Hidden, cl::init(true));

static cl::opt<bool>
    ClHandleICmpExact("msan-handle-icmp-exact",
                      cl::desc("exact handling of relational integer ICmp"),
                      cl::Hidden, cl::init(false));

static cl::opt<bool> ClHandleLifetimeIntrinsics(
    "msan-handle-lifetime-intrinsics",
    cl::desc(
        "when possible, poison scoped variables at the beginning of the scope "
        "(slower, but more precise)"),
    cl::Hidden, cl::init(true));

// When compiling the Linux kernel, we sometimes see false positives related to
// MSan being unable to understand that inline assembly calls may initialize
// local variables.
// This flag makes the compiler conservatively unpoison every memory location
// passed into an assembly call. Note that this may cause false positives.
// Because it's impossible to figure out the array sizes, we can only unpoison
// the first sizeof(type) bytes for each type* pointer.
// The instrumentation is only enabled in KMSAN builds, and only if
// -msan-handle-asm-conservative is on. This is done because we may want to
// quickly disable assembly instrumentation when it breaks.
static cl::opt<bool> ClHandleAsmConservative(
    "msan-handle-asm-conservative",
    cl::desc("conservative handling of inline assembly"), cl::Hidden,
    cl::init(true));
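
// For reference, the cl::opts above can be toggled from the clang driver via
// -mllvm (an illustrative invocation, not specific to any build):
//   clang -fsanitize=memory -mllvm -msan-keep-going=1 \
//       -mllvm -msan-track-origins=2 test.c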

// This flag controls whether we check the shadow of the address
// operand of load or store. Such bugs are very rare, since load from
// a garbage address typically results in SEGV, but still happen
// (e.g. only lower bits of address are garbage, or the access happens
// early at program startup where malloc-ed memory is more likely to
// be zeroed). As of 2012-08-28 this flag adds 20% slowdown.
static cl::opt<bool> ClCheckAccessAddress(
    "msan-check-access-address",
    cl::desc("report accesses through a pointer which has poisoned shadow"),
    cl::Hidden, cl::init(true));

static cl::opt<bool> ClEagerChecks(
    "msan-eager-checks",
    cl::desc("check arguments and return values at function call boundaries"),
    cl::Hidden, cl::init(false));

static cl::opt<bool> ClDumpStrictInstructions(
    "msan-dump-strict-instructions",
    cl::desc("print out instructions with default strict semantics"),
    cl::Hidden, cl::init(false));

static cl::opt<int> ClInstrumentationWithCallThreshold(
    "msan-instrumentation-with-call-threshold",
    cl::desc(
        "If the function being instrumented requires more than "
        "this number of checks and origin stores, use callbacks instead of "
        "inline checks (-1 means never use callbacks)."),
    cl::Hidden, cl::init(3500));

static cl::opt<bool>
    ClEnableKmsan("msan-kernel",
                  cl::desc("Enable KernelMemorySanitizer instrumentation"),
                  cl::Hidden, cl::init(false));

// This is an experiment to enable handling of cases where shadow is a non-zero
// compile-time constant. For some unexplainable reason they were silently
// ignored in the instrumentation.
static cl::opt<bool>
    ClCheckConstantShadow("msan-check-constant-shadow",
                          cl::desc("Insert checks for constant shadow values"),
                          cl::Hidden, cl::init(false));

// This is off by default because of a bug in gold:
// https://sourceware.org/bugzilla/show_bug.cgi?id=19002
static cl::opt<bool>
    ClWithComdat("msan-with-comdat",
                 cl::desc("Place MSan constructors in comdat sections"),
                 cl::Hidden, cl::init(false));

// These options allow specifying custom memory map parameters.
// See MemoryMapParams for details.
static cl::opt<uint64_t> ClAndMask("msan-and-mask",
                                   cl::desc("Define custom MSan AndMask"),
                                   cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClXorMask("msan-xor-mask",
                                   cl::desc("Define custom MSan XorMask"),
                                   cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClShadowBase("msan-shadow-base",
                                      cl::desc("Define custom MSan ShadowBase"),
                                      cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClOriginBase("msan-origin-base",
                                      cl::desc("Define custom MSan OriginBase"),
                                      cl::Hidden, cl::init(0));

const char kMsanModuleCtorName[] = "msan.module_ctor";
const char kMsanInitName[] = "__msan_init";

namespace {

// Memory map parameters used in application-to-shadow address calculation.
// Offset = (Addr & ~AndMask) ^ XorMask
// Shadow = ShadowBase + Offset
// Origin = OriginBase + Offset
struct MemoryMapParams {
  uint64_t AndMask;
  uint64_t XorMask;
  uint64_t ShadowBase;
  uint64_t OriginBase;
};

struct PlatformMemoryMapParams {
  const MemoryMapParams *bits32;
  const MemoryMapParams *bits64;
};

} // end anonymous namespace

// i386 Linux
static const MemoryMapParams Linux_I386_MemoryMapParams = {
    0x000080000000, // AndMask
    0,              // XorMask (not used)
    0,              // ShadowBase (not used)
    0x000040000000, // OriginBase
};

// x86_64 Linux
static const MemoryMapParams Linux_X86_64_MemoryMapParams = {
#ifdef MSAN_LINUX_X86_64_OLD_MAPPING
    0x400000000000, // AndMask
    0,              // XorMask (not used)
    0,              // ShadowBase (not used)
    0x200000000000, // OriginBase
#else
    0,              // AndMask (not used)
    0x500000000000, // XorMask
    0,              // ShadowBase (not used)
    0x100000000000, // OriginBase
#endif
};

// mips64 Linux
static const MemoryMapParams Linux_MIPS64_MemoryMapParams = {
    0,              // AndMask (not used)
    0x008000000000, // XorMask
    0,              // ShadowBase (not used)
    0x002000000000, // OriginBase
};

// ppc64 Linux
static const MemoryMapParams Linux_PowerPC64_MemoryMapParams = {
    0xE00000000000, // AndMask
    0x100000000000, // XorMask
    0x080000000000, // ShadowBase
    0x1C0000000000, // OriginBase
};

// s390x Linux
static const MemoryMapParams Linux_S390X_MemoryMapParams = {
    0xC00000000000, // AndMask
    0,              // XorMask (not used)
    0x080000000000, // ShadowBase
    0x1C0000000000, // OriginBase
};

// aarch64 Linux
static const MemoryMapParams Linux_AArch64_MemoryMapParams = {
    0,             // AndMask (not used)
    0x06000000000, // XorMask
    0,             // ShadowBase (not used)
    0x01000000000, // OriginBase
};

// i386 FreeBSD
static const MemoryMapParams FreeBSD_I386_MemoryMapParams = {
    0x000180000000, // AndMask
    0x000040000000, // XorMask
    0x000020000000, // ShadowBase
    0x000700000000, // OriginBase
};

// x86_64 FreeBSD
static const MemoryMapParams FreeBSD_X86_64_MemoryMapParams = {
    0xc00000000000, // AndMask
    0x200000000000, // XorMask
    0x100000000000, // ShadowBase
    0x380000000000, // OriginBase
};

// x86_64 NetBSD
static const MemoryMapParams NetBSD_X86_64_MemoryMapParams = {
    0,              // AndMask
    0x500000000000, // XorMask
    0,              // ShadowBase
    0x100000000000, // OriginBase
};

static const PlatformMemoryMapParams Linux_X86_MemoryMapParams = {
    &Linux_I386_MemoryMapParams,
    &Linux_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_MIPS_MemoryMapParams = {
    nullptr,
    &Linux_MIPS64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_PowerPC_MemoryMapParams = {
    nullptr,
    &Linux_PowerPC64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_S390_MemoryMapParams = {
    nullptr,
    &Linux_S390X_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_ARM_MemoryMapParams = {
    nullptr,
    &Linux_AArch64_MemoryMapParams,
};

static const PlatformMemoryMapParams FreeBSD_X86_MemoryMapParams = {
    &FreeBSD_I386_MemoryMapParams,
    &FreeBSD_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams NetBSD_X86_MemoryMapParams = {
    nullptr,
    &NetBSD_X86_64_MemoryMapParams,
};
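
// A worked example of the mapping above, using the current x86_64 Linux
// parameters (AndMask = 0, XorMask = 0x500000000000, ShadowBase = 0,
// OriginBase = 0x100000000000). The arithmetic is illustrative only; the
// & ~3ULL step comes from getShadowOriginPtrUserspace() below:
//   Addr   = 0x7fff80001234
//   Offset = (Addr & ~0) ^ 0x500000000000       = 0x2fff80001234
//   Shadow = 0 + Offset                         = 0x2fff80001234
//   Origin = (0x100000000000 + Offset) & ~3ULL  = 0x12fff80001234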

namespace {

/// Instrument functions of a module to detect uninitialized reads.
///
/// Instantiating MemorySanitizer inserts the msan runtime library API function
/// declarations into the module if they don't exist already. Instantiating
/// ensures the __msan_init function is in the list of global constructors for
/// the module.
class MemorySanitizer {
public:
  MemorySanitizer(Module &M, MemorySanitizerOptions Options)
      : CompileKernel(Options.Kernel), TrackOrigins(Options.TrackOrigins),
        Recover(Options.Recover) {
    initializeModule(M);
  }

  // MSan cannot be moved or copied because of MapParams.
  MemorySanitizer(MemorySanitizer &&) = delete;
  MemorySanitizer &operator=(MemorySanitizer &&) = delete;
  MemorySanitizer(const MemorySanitizer &) = delete;
  MemorySanitizer &operator=(const MemorySanitizer &) = delete;

  bool sanitizeFunction(Function &F, TargetLibraryInfo &TLI);

private:
  friend struct MemorySanitizerVisitor;
  friend struct VarArgAMD64Helper;
  friend struct VarArgMIPS64Helper;
  friend struct VarArgAArch64Helper;
  friend struct VarArgPowerPC64Helper;
  friend struct VarArgSystemZHelper;

  void initializeModule(Module &M);
  void initializeCallbacks(Module &M);
  void createKernelApi(Module &M);
  void createUserspaceApi(Module &M);

  /// True if we're compiling the Linux kernel.
  bool CompileKernel;
  /// Track origins (allocation points) of uninitialized values.
  int TrackOrigins;
  bool Recover;

  LLVMContext *C;
  Type *IntptrTy;
  Type *OriginTy;

  // XxxTLS variables represent the per-thread state in MSan and per-task state
  // in KMSAN.
  // For the userspace these point to thread-local globals. In the kernel land
  // they point to the members of a per-task struct obtained via a call to
  // __msan_get_context_state().

  /// Thread-local shadow storage for function parameters.
  Value *ParamTLS;

  /// Thread-local origin storage for function parameters.
  Value *ParamOriginTLS;

  /// Thread-local shadow storage for function return value.
  Value *RetvalTLS;

  /// Thread-local origin storage for function return value.
  Value *RetvalOriginTLS;

  /// Thread-local shadow storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgTLS;

  /// Thread-local origin storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgOriginTLS;

  /// Thread-local shadow storage for va_arg overflow area
  /// (x86_64-specific).
  Value *VAArgOverflowSizeTLS;

  /// Are the instrumentation callbacks set up?
  bool CallbacksInitialized = false;

  /// The run-time callback to print a warning.
  FunctionCallee WarningFn;

  // These arrays are indexed by log2(AccessSize).
  FunctionCallee MaybeWarningFn[kNumberOfAccessSizes];
  FunctionCallee MaybeStoreOriginFn[kNumberOfAccessSizes];

  /// Run-time helper that generates a new origin value for a stack
  /// allocation.
  FunctionCallee MsanSetAllocaOrigin4Fn;

  /// Run-time helper that poisons stack on function entry.
  FunctionCallee MsanPoisonStackFn;

  /// Run-time helper that records a store (or any event) of an
  /// uninitialized value and returns an updated origin id encoding this info.
  FunctionCallee MsanChainOriginFn;

  /// Run-time helper that paints an origin over a region.
  FunctionCallee MsanSetOriginFn;

  /// MSan runtime replacements for memmove, memcpy and memset.
  FunctionCallee MemmoveFn, MemcpyFn, MemsetFn;

  /// KMSAN callback for task-local function argument shadow.
  StructType *MsanContextStateTy;
  FunctionCallee MsanGetContextStateFn;

  /// Functions for poisoning/unpoisoning local variables.
  FunctionCallee MsanPoisonAllocaFn, MsanUnpoisonAllocaFn;

  /// Each of the MsanMetadataPtrXxx functions returns a pair of shadow/origin
  /// pointers.
  FunctionCallee MsanMetadataPtrForLoadN, MsanMetadataPtrForStoreN;
  FunctionCallee MsanMetadataPtrForLoad_1_8[4];
  FunctionCallee MsanMetadataPtrForStore_1_8[4];
  FunctionCallee MsanInstrumentAsmStoreFn;

  /// Helper to choose between different MsanMetadataPtrXxx().
  FunctionCallee getKmsanShadowOriginAccessFn(bool isStore, int size);

  /// Memory map parameters used in application-to-shadow calculation.
  const MemoryMapParams *MapParams;

  /// Custom memory map parameters used when -msan-shadow-base or
  /// -msan-origin-base is provided.
  MemoryMapParams CustomMapParams;

  MDNode *ColdCallWeights;

  /// Branch weights for origin store.
  MDNode *OriginStoreWeights;
};

void insertModuleCtor(Module &M) {
  getOrCreateSanitizerCtorAndInitFunctions(
      M, kMsanModuleCtorName, kMsanInitName,
      /*InitArgTypes=*/{},
      /*InitArgs=*/{},
      // This callback is invoked when the functions are created the first
      // time. Hook them into the global ctors list in that case:
      [&](Function *Ctor, FunctionCallee) {
        if (!ClWithComdat) {
          appendToGlobalCtors(M, Ctor, 0);
          return;
        }
        Comdat *MsanCtorComdat = M.getOrInsertComdat(kMsanModuleCtorName);
        Ctor->setComdat(MsanCtorComdat);
        appendToGlobalCtors(M, Ctor, 0, Ctor);
      });
}

/// A legacy function pass for msan instrumentation.
///
/// Instruments functions to detect uninitialized reads.
struct MemorySanitizerLegacyPass : public FunctionPass {
  // Pass identification, replacement for typeid.
  static char ID;

  MemorySanitizerLegacyPass(MemorySanitizerOptions Options = {})
      : FunctionPass(ID), Options(Options) {
    initializeMemorySanitizerLegacyPassPass(*PassRegistry::getPassRegistry());
  }
  StringRef getPassName() const override {
    return "MemorySanitizerLegacyPass";
  }

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<TargetLibraryInfoWrapperPass>();
  }

  bool runOnFunction(Function &F) override {
    return MSan->sanitizeFunction(
        F, getAnalysis<TargetLibraryInfoWrapperPass>().getTLI(F));
  }
  bool doInitialization(Module &M) override;

  Optional<MemorySanitizer> MSan;
  MemorySanitizerOptions Options;
};

template <class T> T getOptOrDefault(const cl::opt<T> &Opt, T Default) {
  return (Opt.getNumOccurrences() > 0) ? Opt : Default;
}

} // end anonymous namespace
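
// For example (per the constructor below), MemorySanitizerOptions(/*TO=*/0,
// /*R=*/false, /*K=*/true) yields Kernel=true, TrackOrigins=2 and
// Recover=true, unless the corresponding cl::opts are set explicitly on the
// command line.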

MemorySanitizerOptions::MemorySanitizerOptions(int TO, bool R, bool K)
    : Kernel(getOptOrDefault(ClEnableKmsan, K)),
      TrackOrigins(getOptOrDefault(ClTrackOrigins, Kernel ? 2 : TO)),
      Recover(getOptOrDefault(ClKeepGoing, Kernel || R)) {}

PreservedAnalyses MemorySanitizerPass::run(Function &F,
                                           FunctionAnalysisManager &FAM) {
  MemorySanitizer Msan(*F.getParent(), Options);
  if (Msan.sanitizeFunction(F, FAM.getResult<TargetLibraryAnalysis>(F)))
    return PreservedAnalyses::none();
  return PreservedAnalyses::all();
}

PreservedAnalyses
ModuleMemorySanitizerPass::run(Module &M, ModuleAnalysisManager &AM) {
  if (Options.Kernel)
    return PreservedAnalyses::all();
  insertModuleCtor(M);
  return PreservedAnalyses::none();
}

void MemorySanitizerPass::printPipeline(
    raw_ostream &OS, function_ref<StringRef(StringRef)> MapClassName2PassName) {
  static_cast<PassInfoMixin<MemorySanitizerPass> *>(this)->printPipeline(
      OS, MapClassName2PassName);
  OS << "<";
  if (Options.Recover)
    OS << "recover;";
  if (Options.Kernel)
    OS << "kernel;";
  OS << "track-origins=" << Options.TrackOrigins;
  OS << ">";
}

char MemorySanitizerLegacyPass::ID = 0;

INITIALIZE_PASS_BEGIN(MemorySanitizerLegacyPass, "msan",
                      "MemorySanitizer: detects uninitialized reads.", false,
                      false)
INITIALIZE_PASS_DEPENDENCY(TargetLibraryInfoWrapperPass)
INITIALIZE_PASS_END(MemorySanitizerLegacyPass, "msan",
                    "MemorySanitizer: detects uninitialized reads.", false,
                    false)

FunctionPass *
llvm::createMemorySanitizerLegacyPassPass(MemorySanitizerOptions Options) {
  return new MemorySanitizerLegacyPass(Options);
}

/// Create a non-const global initialized with the given string.
///
/// Creates a writable global for Str so that we can pass it to the
/// run-time lib. Runtime uses first 4 bytes of the string to store the
/// frame ID, so the string needs to be mutable.
static GlobalVariable *createPrivateNonConstGlobalForString(Module &M,
                                                            StringRef Str) {
  Constant *StrConst = ConstantDataArray::getString(M.getContext(), Str);
  return new GlobalVariable(M, StrConst->getType(), /*isConstant=*/false,
                            GlobalValue::PrivateLinkage, StrConst, "");
}

/// Create KMSAN API callbacks.
void MemorySanitizer::createKernelApi(Module &M) {
  IRBuilder<> IRB(*C);

  // These will be initialized in insertKmsanPrologue().
  RetvalTLS = nullptr;
  RetvalOriginTLS = nullptr;
  ParamTLS = nullptr;
  ParamOriginTLS = nullptr;
  VAArgTLS = nullptr;
  VAArgOriginTLS = nullptr;
  VAArgOverflowSizeTLS = nullptr;

  WarningFn = M.getOrInsertFunction("__msan_warning", IRB.getVoidTy(),
                                    IRB.getInt32Ty());
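
  // The context-state struct created below mirrors the kernel's
  // struct kmsan_context_state. A rough C view (field names are
  // illustrative, not the kernel's exact declaration):
  //   struct kmsan_context_state {
  //     char param_tls[800], retval_tls[800];
  //     char va_arg_tls[800], va_arg_origin_tls[800];
  //     u64  va_arg_overflow_size_tls;
  //     u32  param_origin_tls[200];
  //     u32  retval_origin_tls, origin_tls;
  //   };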
  // Requests the per-task context state (kmsan_context_state*) from the
  // runtime library.
  MsanContextStateTy = StructType::get(
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8), /* va_arg_origin */
      IRB.getInt64Ty(), ArrayType::get(OriginTy, kParamTLSSize / 4), OriginTy,
      OriginTy);
  MsanGetContextStateFn = M.getOrInsertFunction(
      "__msan_get_context_state", PointerType::get(MsanContextStateTy, 0));

  Type *RetTy = StructType::get(PointerType::get(IRB.getInt8Ty(), 0),
                                PointerType::get(IRB.getInt32Ty(), 0));

  for (int ind = 0, size = 1; ind < 4; ind++, size <<= 1) {
    std::string name_load =
        "__msan_metadata_ptr_for_load_" + std::to_string(size);
    std::string name_store =
        "__msan_metadata_ptr_for_store_" + std::to_string(size);
    MsanMetadataPtrForLoad_1_8[ind] = M.getOrInsertFunction(
        name_load, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
    MsanMetadataPtrForStore_1_8[ind] = M.getOrInsertFunction(
        name_store, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
  }

  MsanMetadataPtrForLoadN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_load_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());
  MsanMetadataPtrForStoreN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_store_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());

  // Functions for poisoning and unpoisoning memory.
  MsanPoisonAllocaFn =
      M.getOrInsertFunction("__msan_poison_alloca", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy, IRB.getInt8PtrTy());
  MsanUnpoisonAllocaFn = M.getOrInsertFunction(
      "__msan_unpoison_alloca", IRB.getVoidTy(), IRB.getInt8PtrTy(), IntptrTy);
}

static Constant *getOrInsertGlobal(Module &M, StringRef Name, Type *Ty) {
  return M.getOrInsertGlobal(Name, Ty, [&] {
    return new GlobalVariable(M, Ty, false, GlobalVariable::ExternalLinkage,
                              nullptr, Name, nullptr,
                              GlobalVariable::InitialExecTLSModel);
  });
}
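
// For reference, the KMSAN metadata accessors declared above have these
// signatures in IR (an illustrative summary; X ranges over 1, 2, 4, 8):
//   { i8*, i32* } @__msan_metadata_ptr_for_load_X(i8*)
//   { i8*, i32* } @__msan_metadata_ptr_for_store_X(i8*)
//   { i8*, i32* } @__msan_metadata_ptr_for_load_n(i8*, i64)
//   { i8*, i32* } @__msan_metadata_ptr_for_store_n(i8*, i64)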

/// Insert declarations for userspace-specific functions and globals.
void MemorySanitizer::createUserspaceApi(Module &M) {
  IRBuilder<> IRB(*C);

  // Create the callback.
  // FIXME: this function should have "Cold" calling conv,
  // which is not yet implemented.
  StringRef WarningFnName = Recover ? "__msan_warning_with_origin"
                                    : "__msan_warning_with_origin_noreturn";
  WarningFn =
      M.getOrInsertFunction(WarningFnName, IRB.getVoidTy(), IRB.getInt32Ty());

  // Create the global TLS variables.
  RetvalTLS =
      getOrInsertGlobal(M, "__msan_retval_tls",
                        ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8));

  RetvalOriginTLS = getOrInsertGlobal(M, "__msan_retval_origin_tls", OriginTy);

  ParamTLS =
      getOrInsertGlobal(M, "__msan_param_tls",
                        ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8));

  ParamOriginTLS =
      getOrInsertGlobal(M, "__msan_param_origin_tls",
                        ArrayType::get(OriginTy, kParamTLSSize / 4));

  VAArgTLS =
      getOrInsertGlobal(M, "__msan_va_arg_tls",
                        ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8));

  VAArgOriginTLS =
      getOrInsertGlobal(M, "__msan_va_arg_origin_tls",
                        ArrayType::get(OriginTy, kParamTLSSize / 4));

  VAArgOverflowSizeTLS =
      getOrInsertGlobal(M, "__msan_va_arg_overflow_size_tls", IRB.getInt64Ty());

  for (size_t AccessSizeIndex = 0; AccessSizeIndex < kNumberOfAccessSizes;
       AccessSizeIndex++) {
    unsigned AccessSize = 1 << AccessSizeIndex;
    std::string FunctionName = "__msan_maybe_warning_" + itostr(AccessSize);
    SmallVector<std::pair<unsigned, Attribute>, 2> MaybeWarningFnAttrs;
    MaybeWarningFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex, Attribute::get(*C, Attribute::ZExt)));
    MaybeWarningFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex + 1, Attribute::get(*C, Attribute::ZExt)));
    MaybeWarningFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, AttributeList::get(*C, MaybeWarningFnAttrs),
        IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8), IRB.getInt32Ty());

    FunctionName = "__msan_maybe_store_origin_" + itostr(AccessSize);
    SmallVector<std::pair<unsigned, Attribute>, 2> MaybeStoreOriginFnAttrs;
    MaybeStoreOriginFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex, Attribute::get(*C, Attribute::ZExt)));
    MaybeStoreOriginFnAttrs.push_back(std::make_pair(
        AttributeList::FirstArgIndex + 2, Attribute::get(*C, Attribute::ZExt)));
    MaybeStoreOriginFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, AttributeList::get(*C, MaybeStoreOriginFnAttrs),
        IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8), IRB.getInt8PtrTy(),
        IRB.getInt32Ty());
  }

  MsanSetAllocaOrigin4Fn = M.getOrInsertFunction(
      "__msan_set_alloca_origin4", IRB.getVoidTy(), IRB.getInt8PtrTy(),
      IntptrTy, IRB.getInt8PtrTy(), IntptrTy);
  MsanPoisonStackFn = M.getOrInsertFunction(
      "__msan_poison_stack", IRB.getVoidTy(), IRB.getInt8PtrTy(), IntptrTy);
}

/// Insert extern declaration of runtime-provided functions and globals.
void MemorySanitizer::initializeCallbacks(Module &M) {
  // Only do this once.
  if (CallbacksInitialized)
    return;

  IRBuilder<> IRB(*C);
  // Initialize callbacks that are common for kernel and userspace
  // instrumentation.
  MsanChainOriginFn = M.getOrInsertFunction(
      "__msan_chain_origin", IRB.getInt32Ty(), IRB.getInt32Ty());
  MsanSetOriginFn =
      M.getOrInsertFunction("__msan_set_origin", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy, IRB.getInt32Ty());
  MemmoveFn =
      M.getOrInsertFunction("__msan_memmove", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IntptrTy);
  MemcpyFn =
      M.getOrInsertFunction("__msan_memcpy", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IntptrTy);
  MemsetFn =
      M.getOrInsertFunction("__msan_memset", IRB.getInt8PtrTy(),
                            IRB.getInt8PtrTy(), IRB.getInt32Ty(), IntptrTy);

  MsanInstrumentAsmStoreFn =
      M.getOrInsertFunction("__msan_instrument_asm_store", IRB.getVoidTy(),
                            PointerType::get(IRB.getInt8Ty(), 0), IntptrTy);

  if (CompileKernel) {
    createKernelApi(M);
  } else {
    createUserspaceApi(M);
  }
  CallbacksInitialized = true;
}

FunctionCallee MemorySanitizer::getKmsanShadowOriginAccessFn(bool isStore,
                                                             int size) {
  FunctionCallee *Fns =
      isStore ? MsanMetadataPtrForStore_1_8 : MsanMetadataPtrForLoad_1_8;
  switch (size) {
  case 1:
    return Fns[0];
  case 2:
    return Fns[1];
  case 4:
    return Fns[2];
  case 8:
    return Fns[3];
  default:
    return nullptr;
  }
}

/// Module-level initialization.
///
/// Inserts a call to __msan_init into the module's constructor list.
void MemorySanitizer::initializeModule(Module &M) {
  auto &DL = M.getDataLayout();

  bool ShadowPassed = ClShadowBase.getNumOccurrences() > 0;
  bool OriginPassed = ClOriginBase.getNumOccurrences() > 0;
  // Check the overrides first.
  if (ShadowPassed || OriginPassed) {
    CustomMapParams.AndMask = ClAndMask;
    CustomMapParams.XorMask = ClXorMask;
    CustomMapParams.ShadowBase = ClShadowBase;
    CustomMapParams.OriginBase = ClOriginBase;
    MapParams = &CustomMapParams;
  } else {
    Triple TargetTriple(M.getTargetTriple());
    switch (TargetTriple.getOS()) {
    case Triple::FreeBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = FreeBSD_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = FreeBSD_X86_MemoryMapParams.bits32;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::NetBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = NetBSD_X86_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::Linux:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = Linux_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = Linux_X86_MemoryMapParams.bits32;
        break;
      case Triple::mips64:
      case Triple::mips64el:
        MapParams = Linux_MIPS_MemoryMapParams.bits64;
        break;
      case Triple::ppc64:
      case Triple::ppc64le:
        MapParams = Linux_PowerPC_MemoryMapParams.bits64;
        break;
      case Triple::systemz:
        MapParams = Linux_S390_MemoryMapParams.bits64;
        break;
      case Triple::aarch64:
      case Triple::aarch64_be:
        MapParams = Linux_ARM_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    default:
      report_fatal_error("unsupported operating system");
    }
  }

  C = &(M.getContext());
  IRBuilder<> IRB(*C);
  IntptrTy = IRB.getIntPtrTy(DL);
  OriginTy = IRB.getInt32Ty();

  ColdCallWeights = MDBuilder(*C).createBranchWeights(1, 1000);
  OriginStoreWeights = MDBuilder(*C).createBranchWeights(1, 1000);

  if (!CompileKernel) {
    if (TrackOrigins)
      M.getOrInsertGlobal("__msan_track_origins", IRB.getInt32Ty(), [&] {
        return new GlobalVariable(
            M, IRB.getInt32Ty(), true, GlobalValue::WeakODRLinkage,
            IRB.getInt32(TrackOrigins), "__msan_track_origins");
      });

    if (Recover)
      M.getOrInsertGlobal("__msan_keep_going", IRB.getInt32Ty(), [&] {
        return new GlobalVariable(M, IRB.getInt32Ty(), true,
                                  GlobalValue::WeakODRLinkage,
                                  IRB.getInt32(Recover), "__msan_keep_going");
      });
  }
}

bool MemorySanitizerLegacyPass::doInitialization(Module &M) {
  if (!Options.Kernel)
    insertModuleCtor(M);
  MSan.emplace(M, Options);
  return true;
}

namespace {

/// A helper class that handles instrumentation of VarArg
/// functions on a particular platform.
///
/// Implementations are expected to insert the instrumentation
/// necessary to propagate argument shadow through VarArg function
/// calls. Visit* methods are called during an InstVisitor pass over
/// the function, and should avoid creating new basic blocks. A new
/// instance of this class is created for each instrumented function.
struct VarArgHelper {
  virtual ~VarArgHelper() = default;

  /// Visit a CallBase.
  virtual void visitCallBase(CallBase &CB, IRBuilder<> &IRB) = 0;

  /// Visit a va_start call.
  virtual void visitVAStartInst(VAStartInst &I) = 0;

  /// Visit a va_copy call.
  virtual void visitVACopyInst(VACopyInst &I) = 0;

  /// Finalize function instrumentation.
  ///
  /// This method is called after visiting all interesting (see above)
  /// instructions in a function.
  virtual void finalizeInstrumentation() = 0;
};

struct MemorySanitizerVisitor;

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor);

// Map a type size in bits to an index into the access-size-keyed arrays:
// 1-8 bits -> 0, 9-16 -> 1, 17-32 -> 2, 33-64 -> 3.
static unsigned TypeSizeToSizeIndex(unsigned TypeSize) {
  if (TypeSize <= 8)
    return 0;
  return Log2_32_Ceil((TypeSize + 7) / 8);
}

namespace {

/// This class does all the work for a given function. Store and Load
/// instructions store and load corresponding shadow and origin
/// values. Most instructions propagate shadow from arguments to their
/// return values. Certain instructions (most importantly, BranchInst)
/// test their argument shadow and print reports (with a runtime call) if it's
/// non-zero.
struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
  Function &F;
  MemorySanitizer &MS;
  SmallVector<PHINode *, 16> ShadowPHINodes, OriginPHINodes;
  ValueMap<Value *, Value *> ShadowMap, OriginMap;
  std::unique_ptr<VarArgHelper> VAHelper;
  const TargetLibraryInfo *TLI;
  Instruction *FnPrologueEnd;

  // The following flags disable parts of MSan instrumentation based on
  // exclusion list contents and command-line options.
  bool InsertChecks;
  bool PropagateShadow;
  bool PoisonStack;
  bool PoisonUndef;

  struct ShadowOriginAndInsertPoint {
    Value *Shadow;
    Value *Origin;
    Instruction *OrigIns;

    ShadowOriginAndInsertPoint(Value *S, Value *O, Instruction *I)
        : Shadow(S), Origin(O), OrigIns(I) {}
  };
  SmallVector<ShadowOriginAndInsertPoint, 16> InstrumentationList;
  bool InstrumentLifetimeStart = ClHandleLifetimeIntrinsics;
  SmallSet<AllocaInst *, 16> AllocaSet;
  SmallVector<std::pair<IntrinsicInst *, AllocaInst *>, 16> LifetimeStartList;
  SmallVector<StoreInst *, 16> StoreList;

  MemorySanitizerVisitor(Function &F, MemorySanitizer &MS,
                         const TargetLibraryInfo &TLI)
      : F(F), MS(MS), VAHelper(CreateVarArgHelper(F, MS, *this)), TLI(&TLI) {
    bool SanitizeFunction = F.hasFnAttribute(Attribute::SanitizeMemory);
    InsertChecks = SanitizeFunction;
    PropagateShadow = SanitizeFunction;
    PoisonStack = SanitizeFunction && ClPoisonStack;
    PoisonUndef = SanitizeFunction && ClPoisonUndef;

    // In the presence of unreachable blocks, we may see Phi nodes with
    // incoming nodes from such blocks. Since InstVisitor skips unreachable
    // blocks, such nodes will not have any shadow value associated with them.
    // It's easier to remove unreachable blocks than deal with missing shadow.
    removeUnreachableBlocks(F);

    MS.initializeCallbacks(*F.getParent());
    FnPrologueEnd = IRBuilder<>(F.getEntryBlock().getFirstNonPHI())
                        .CreateIntrinsic(Intrinsic::donothing, {}, {});

    if (MS.CompileKernel) {
      IRBuilder<> IRB(FnPrologueEnd);
      insertKmsanPrologue(IRB);
    }

    LLVM_DEBUG(if (!InsertChecks) dbgs()
               << "MemorySanitizer is not inserting checks into '"
               << F.getName() << "'\n");
  }

  bool isInPrologue(Instruction &I) {
    return I.getParent() == FnPrologueEnd->getParent() &&
           (&I == FnPrologueEnd || I.comesBefore(FnPrologueEnd));
  }

  Value *updateOrigin(Value *V, IRBuilder<> &IRB) {
    if (MS.TrackOrigins <= 1)
      return V;
    return IRB.CreateCall(MS.MsanChainOriginFn, V);
  }

  Value *originToIntptr(IRBuilder<> &IRB, Value *Origin) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    if (IntptrSize == kOriginSize)
      return Origin;
    assert(IntptrSize == kOriginSize * 2);
    Origin = IRB.CreateIntCast(Origin, MS.IntptrTy, /* isSigned */ false);
    return IRB.CreateOr(Origin, IRB.CreateShl(Origin, kOriginSize * 8));
  }
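
  // E.g. on a 64-bit target, originToIntptr() widens a 32-bit origin o into
  // (zext(o) << 32) | zext(o), so that a single intptr-sized store in
  // paintOrigin() below fills two adjacent 4-byte origin slots at once.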

  /// Fill memory range with the given origin value.
  void paintOrigin(IRBuilder<> &IRB, Value *Origin, Value *OriginPtr,
                   unsigned Size, Align Alignment) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    const Align IntptrAlignment = DL.getABITypeAlign(MS.IntptrTy);
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    assert(IntptrAlignment >= kMinOriginAlignment);
    assert(IntptrSize >= kOriginSize);

    unsigned Ofs = 0;
    Align CurrentAlignment = Alignment;
    if (Alignment >= IntptrAlignment && IntptrSize > kOriginSize) {
      Value *IntptrOrigin = originToIntptr(IRB, Origin);
      Value *IntptrOriginPtr =
          IRB.CreatePointerCast(OriginPtr, PointerType::get(MS.IntptrTy, 0));
      for (unsigned i = 0; i < Size / IntptrSize; ++i) {
        Value *Ptr = i ? IRB.CreateConstGEP1_32(MS.IntptrTy, IntptrOriginPtr, i)
                       : IntptrOriginPtr;
        IRB.CreateAlignedStore(IntptrOrigin, Ptr, CurrentAlignment);
        Ofs += IntptrSize / kOriginSize;
        CurrentAlignment = IntptrAlignment;
      }
    }

    for (unsigned i = Ofs; i < (Size + kOriginSize - 1) / kOriginSize; ++i) {
      Value *GEP =
          i ? IRB.CreateConstGEP1_32(MS.OriginTy, OriginPtr, i) : OriginPtr;
      IRB.CreateAlignedStore(Origin, GEP, CurrentAlignment);
      CurrentAlignment = kMinOriginAlignment;
    }
  }

  void storeOrigin(IRBuilder<> &IRB, Value *Addr, Value *Shadow, Value *Origin,
                   Value *OriginPtr, Align Alignment, bool AsCall) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    const Align OriginAlignment = std::max(kMinOriginAlignment, Alignment);
    unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
    Value *ConvertedShadow = convertShadowToScalar(Shadow, IRB);
    if (auto *ConstantShadow = dyn_cast<Constant>(ConvertedShadow)) {
      if (ClCheckConstantShadow && !ConstantShadow->isZeroValue())
        paintOrigin(IRB, updateOrigin(Origin, IRB), OriginPtr, StoreSize,
                    OriginAlignment);
      return;
    }

    unsigned TypeSizeInBits = DL.getTypeSizeInBits(ConvertedShadow->getType());
    unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
    if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
      FunctionCallee Fn = MS.MaybeStoreOriginFn[SizeIndex];
      Value *ConvertedShadow2 =
          IRB.CreateZExt(ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
      CallBase *CB = IRB.CreateCall(
          Fn, {ConvertedShadow2,
               IRB.CreatePointerCast(Addr, IRB.getInt8PtrTy()), Origin});
      CB->addParamAttr(0, Attribute::ZExt);
      CB->addParamAttr(2, Attribute::ZExt);
    } else {
      Value *Cmp = convertToBool(ConvertedShadow, IRB, "_mscmp");
      Instruction *CheckTerm = SplitBlockAndInsertIfThen(
          Cmp, &*IRB.GetInsertPoint(), false, MS.OriginStoreWeights);
      IRBuilder<> IRBNew(CheckTerm);
      paintOrigin(IRBNew, updateOrigin(Origin, IRBNew), OriginPtr, StoreSize,
                  OriginAlignment);
    }
  }

  void materializeStores(bool InstrumentWithCalls) {
    for (StoreInst *SI : StoreList) {
      IRBuilder<> IRB(SI);
      Value *Val = SI->getValueOperand();
      Value *Addr = SI->getPointerOperand();
      Value *Shadow = SI->isAtomic() ? getCleanShadow(Val) : getShadow(Val);
      Value *ShadowPtr, *OriginPtr;
      Type *ShadowTy = Shadow->getType();
      const Align Alignment = assumeAligned(SI->getAlignment());
      const Align OriginAlignment = std::max(kMinOriginAlignment, Alignment);
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ true);

      StoreInst *NewSI = IRB.CreateAlignedStore(Shadow, ShadowPtr, Alignment);
      LLVM_DEBUG(dbgs() << "  STORE: " << *NewSI << "\n");
      (void)NewSI;

      if (SI->isAtomic())
        SI->setOrdering(addReleaseOrdering(SI->getOrdering()));

      if (MS.TrackOrigins && !SI->isAtomic())
        storeOrigin(IRB, Addr, Shadow, getOrigin(Val), OriginPtr,
                    OriginAlignment, InstrumentWithCalls);
    }
  }
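
  // For a plain `store i32 %v, i32* %p`, materializeStores() conceptually
  // emits (IR names are illustrative):
  //   %sp = <shadow address computed from %p>
  //   store i32 %shadow_of_v, i32* %sp, align 4
  //   ; plus an origin store, guarded by %shadow_of_v != 0, when
  //   ; -msan-track-origins is enabled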

  /// Helper function to insert a warning at IRB's current insert point.
  void insertWarningFn(IRBuilder<> &IRB, Value *Origin) {
    if (!Origin)
      Origin = (Value *)IRB.getInt32(0);
    assert(Origin->getType()->isIntegerTy());
    IRB.CreateCall(MS.WarningFn, Origin)->setCannotMerge();
    // FIXME: Insert UnreachableInst if !MS.Recover?
    // This may invalidate some of the following checks and needs to be done
    // at the very end.
  }

  void materializeOneCheck(Instruction *OrigIns, Value *Shadow, Value *Origin,
                           bool AsCall) {
    IRBuilder<> IRB(OrigIns);
    LLVM_DEBUG(dbgs() << "  SHAD0 : " << *Shadow << "\n");
    Value *ConvertedShadow = convertShadowToScalar(Shadow, IRB);
    LLVM_DEBUG(dbgs() << "  SHAD1 : " << *ConvertedShadow << "\n");

    if (auto *ConstantShadow = dyn_cast<Constant>(ConvertedShadow)) {
      if (ClCheckConstantShadow && !ConstantShadow->isZeroValue()) {
        insertWarningFn(IRB, Origin);
      }
      return;
    }

    const DataLayout &DL = OrigIns->getModule()->getDataLayout();

    unsigned TypeSizeInBits = DL.getTypeSizeInBits(ConvertedShadow->getType());
    unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
    if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
      FunctionCallee Fn = MS.MaybeWarningFn[SizeIndex];
      Value *ConvertedShadow2 =
          IRB.CreateZExt(ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
      CallBase *CB = IRB.CreateCall(
          Fn, {ConvertedShadow2,
               MS.TrackOrigins && Origin ? Origin : (Value *)IRB.getInt32(0)});
      CB->addParamAttr(0, Attribute::ZExt);
      CB->addParamAttr(1, Attribute::ZExt);
    } else {
      Value *Cmp = convertToBool(ConvertedShadow, IRB, "_mscmp");
      Instruction *CheckTerm = SplitBlockAndInsertIfThen(
          Cmp, OrigIns,
          /* Unreachable */ !MS.Recover, MS.ColdCallWeights);

      IRB.SetInsertPoint(CheckTerm);
      insertWarningFn(IRB, Origin);
      LLVM_DEBUG(dbgs() << "  CHECK: " << *Cmp << "\n");
    }
  }

  void materializeChecks(bool InstrumentWithCalls) {
    for (const auto &ShadowData : InstrumentationList) {
      Instruction *OrigIns = ShadowData.OrigIns;
      Value *Shadow = ShadowData.Shadow;
      Value *Origin = ShadowData.Origin;
      materializeOneCheck(OrigIns, Shadow, Origin, InstrumentWithCalls);
    }
    LLVM_DEBUG(dbgs() << "DONE:\n" << F);
  }

  // Set up the pointers into the KMSAN per-task context state in the
  // function prologue.
  void insertKmsanPrologue(IRBuilder<> &IRB) {
    Value *ContextState = IRB.CreateCall(MS.MsanGetContextStateFn, {});
    Constant *Zero = IRB.getInt32(0);
    MS.ParamTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                {Zero, IRB.getInt32(0)}, "param_shadow");
    MS.RetvalTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                 {Zero, IRB.getInt32(1)}, "retval_shadow");
    MS.VAArgTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                {Zero, IRB.getInt32(2)}, "va_arg_shadow");
    MS.VAArgOriginTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                      {Zero, IRB.getInt32(3)}, "va_arg_origin");
    MS.VAArgOverflowSizeTLS =
        IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                      {Zero, IRB.getInt32(4)}, "va_arg_overflow_size");
    MS.ParamOriginTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                      {Zero, IRB.getInt32(5)}, "param_origin");
    MS.RetvalOriginTLS =
        IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                      {Zero, IRB.getInt32(6)}, "retval_origin");
  }
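
  // The prologue produced above looks roughly like (IR sketch; value names
  // match the CreateGEP calls, the struct type is the unnamed
  // MsanContextStateTy):
  //   %state = call <ctx-state>* @__msan_get_context_state()
  //   %param_shadow  = getelementptr ..., %state, i32 0, i32 0
  //   %retval_shadow = getelementptr ..., %state, i32 0, i32 1
  //   ... and so on for the remaining fields.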

  /// Add MemorySanitizer instrumentation to a function.
  bool runOnFunction() {
    // Iterate all BBs in depth-first order and create shadow instructions
    // for all instructions (where applicable).
    // For PHI nodes we create dummy shadow PHIs which will be finalized later.
    for (BasicBlock *BB : depth_first(FnPrologueEnd->getParent()))
      visit(*BB);

    // Finalize PHI nodes.
    for (PHINode *PN : ShadowPHINodes) {
      PHINode *PNS = cast<PHINode>(getShadow(PN));
      PHINode *PNO = MS.TrackOrigins ? cast<PHINode>(getOrigin(PN)) : nullptr;
      size_t NumValues = PN->getNumIncomingValues();
      for (size_t v = 0; v < NumValues; v++) {
        PNS->addIncoming(getShadow(PN, v), PN->getIncomingBlock(v));
        if (PNO)
          PNO->addIncoming(getOrigin(PN, v), PN->getIncomingBlock(v));
      }
    }

    VAHelper->finalizeInstrumentation();

    // Poison llvm.lifetime.start intrinsics, if we haven't fallen back to
    // instrumenting only allocas.
    if (InstrumentLifetimeStart) {
      for (auto Item : LifetimeStartList) {
        instrumentAlloca(*Item.second, Item.first);
        AllocaSet.erase(Item.second);
      }
    }
    // Poison the allocas for which we didn't instrument the corresponding
    // lifetime intrinsics.
    for (AllocaInst *AI : AllocaSet)
      instrumentAlloca(*AI);

    bool InstrumentWithCalls = ClInstrumentationWithCallThreshold >= 0 &&
                               InstrumentationList.size() + StoreList.size() >
                                   (unsigned)ClInstrumentationWithCallThreshold;

    // Insert shadow value checks.
    materializeChecks(InstrumentWithCalls);

    // Delayed instrumentation of StoreInst.
    // This may not add new address checks.
    materializeStores(InstrumentWithCalls);

    return true;
  }

  /// Compute the shadow type that corresponds to a given Value.
  Type *getShadowTy(Value *V) { return getShadowTy(V->getType()); }

  /// Compute the shadow type that corresponds to a given Type.
  Type *getShadowTy(Type *OrigTy) {
    if (!OrigTy->isSized()) {
      return nullptr;
    }
    // For integer type, shadow is the same as the original type.
    // This may return weird-sized types like i1.
    if (IntegerType *IT = dyn_cast<IntegerType>(OrigTy))
      return IT;
    const DataLayout &DL = F.getParent()->getDataLayout();
    if (VectorType *VT = dyn_cast<VectorType>(OrigTy)) {
      uint32_t EltSize = DL.getTypeSizeInBits(VT->getElementType());
      return FixedVectorType::get(IntegerType::get(*MS.C, EltSize),
                                  cast<FixedVectorType>(VT)->getNumElements());
    }
    if (ArrayType *AT = dyn_cast<ArrayType>(OrigTy)) {
      return ArrayType::get(getShadowTy(AT->getElementType()),
                            AT->getNumElements());
    }
    if (StructType *ST = dyn_cast<StructType>(OrigTy)) {
      SmallVector<Type *, 4> Elements;
      for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
        Elements.push_back(getShadowTy(ST->getElementType(i)));
      StructType *Res = StructType::get(*MS.C, Elements, ST->isPacked());
      LLVM_DEBUG(dbgs() << "getShadowTy: " << *ST << " ===> " << *Res << "\n");
      return Res;
    }
    uint32_t TypeSize = DL.getTypeSizeInBits(OrigTy);
    return IntegerType::get(*MS.C, TypeSize);
  }
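
  // Examples of the mapping implemented by getShadowTy() (illustrative):
  //   i32         -> i32
  //   float       -> i32          (an integer of the same bit width)
  //   <4 x float> -> <4 x i32>    (element-wise)
  //   [2 x i64]   -> [2 x i64]
  //   {i8, i16}   -> {i8, i16}    (field-wise)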

  /// Flatten a vector type.
  Type *getShadowTyNoVec(Type *ty) {
    if (VectorType *vt = dyn_cast<VectorType>(ty))
      return IntegerType::get(*MS.C,
                              vt->getPrimitiveSizeInBits().getFixedSize());
    return ty;
  }

  /// Extract combined shadow of struct elements as a bool.
  Value *collapseStructShadow(StructType *Struct, Value *Shadow,
                              IRBuilder<> &IRB) {
    Value *FalseVal = IRB.getIntN(/* width */ 1, /* value */ 0);
    Value *Aggregator = FalseVal;

    for (unsigned Idx = 0; Idx < Struct->getNumElements(); Idx++) {
      // Combine by ORing together each element's bool shadow.
      Value *ShadowItem = IRB.CreateExtractValue(Shadow, Idx);
      Value *ShadowInner = convertShadowToScalar(ShadowItem, IRB);
      Value *ShadowBool = convertToBool(ShadowInner, IRB);

      if (Aggregator != FalseVal)
        Aggregator = IRB.CreateOr(Aggregator, ShadowBool);
      else
        Aggregator = ShadowBool;
    }

    return Aggregator;
  }

  // Extract combined shadow of array elements.
  Value *collapseArrayShadow(ArrayType *Array, Value *Shadow,
                             IRBuilder<> &IRB) {
    if (!Array->getNumElements())
      return IRB.getIntN(/* width */ 1, /* value */ 0);

    Value *FirstItem = IRB.CreateExtractValue(Shadow, 0);
    Value *Aggregator = convertShadowToScalar(FirstItem, IRB);

    for (unsigned Idx = 1; Idx < Array->getNumElements(); Idx++) {
      Value *ShadowItem = IRB.CreateExtractValue(Shadow, Idx);
      Value *ShadowInner = convertShadowToScalar(ShadowItem, IRB);
      Aggregator = IRB.CreateOr(Aggregator, ShadowInner);
    }
    return Aggregator;
  }

  /// Convert a shadow value to its flattened variant. The resulting
  /// shadow may not necessarily have the same bit width as the input
  /// value, but it will always be comparable to zero.
  Value *convertShadowToScalar(Value *V, IRBuilder<> &IRB) {
    if (StructType *Struct = dyn_cast<StructType>(V->getType()))
      return collapseStructShadow(Struct, V, IRB);
    if (ArrayType *Array = dyn_cast<ArrayType>(V->getType()))
      return collapseArrayShadow(Array, V, IRB);
    Type *Ty = V->getType();
    Type *NoVecTy = getShadowTyNoVec(Ty);
    if (Ty == NoVecTy)
      return V;
    return IRB.CreateBitCast(V, NoVecTy);
  }

  // Convert a scalar value to an i1 by comparing with 0.
  Value *convertToBool(Value *V, IRBuilder<> &IRB, const Twine &name = "") {
    Type *VTy = V->getType();
    assert(VTy->isIntegerTy());
    if (VTy->getIntegerBitWidth() == 1)
      // Just converting a bool to a bool, so do nothing.
      return V;
    return IRB.CreateICmpNE(V, ConstantInt::get(VTy, 0), name);
  }

  /// Compute the integer shadow offset that corresponds to a given
  /// application address.
  ///
  /// Offset = (Addr & ~AndMask) ^ XorMask
  Value *getShadowPtrOffset(Value *Addr, IRBuilder<> &IRB) {
    Value *OffsetLong = IRB.CreatePointerCast(Addr, MS.IntptrTy);

    uint64_t AndMask = MS.MapParams->AndMask;
    if (AndMask)
      OffsetLong =
          IRB.CreateAnd(OffsetLong, ConstantInt::get(MS.IntptrTy, ~AndMask));

    uint64_t XorMask = MS.MapParams->XorMask;
    if (XorMask)
      OffsetLong =
          IRB.CreateXor(OffsetLong, ConstantInt::get(MS.IntptrTy, XorMask));
    return OffsetLong;
  }
1492 /// 1493 /// Shadow = ShadowBase + Offset 1494 /// Origin = (OriginBase + Offset) & ~3ULL 1495 std::pair<Value *, Value *> 1496 getShadowOriginPtrUserspace(Value *Addr, IRBuilder<> &IRB, Type *ShadowTy, 1497 MaybeAlign Alignment) { 1498 Value *ShadowOffset = getShadowPtrOffset(Addr, IRB); 1499 Value *ShadowLong = ShadowOffset; 1500 uint64_t ShadowBase = MS.MapParams->ShadowBase; 1501 if (ShadowBase != 0) { 1502 ShadowLong = 1503 IRB.CreateAdd(ShadowLong, 1504 ConstantInt::get(MS.IntptrTy, ShadowBase)); 1505 } 1506 Value *ShadowPtr = 1507 IRB.CreateIntToPtr(ShadowLong, PointerType::get(ShadowTy, 0)); 1508 Value *OriginPtr = nullptr; 1509 if (MS.TrackOrigins) { 1510 Value *OriginLong = ShadowOffset; 1511 uint64_t OriginBase = MS.MapParams->OriginBase; 1512 if (OriginBase != 0) 1513 OriginLong = IRB.CreateAdd(OriginLong, 1514 ConstantInt::get(MS.IntptrTy, OriginBase)); 1515 if (!Alignment || *Alignment < kMinOriginAlignment) { 1516 uint64_t Mask = kMinOriginAlignment.value() - 1; 1517 OriginLong = 1518 IRB.CreateAnd(OriginLong, ConstantInt::get(MS.IntptrTy, ~Mask)); 1519 } 1520 OriginPtr = 1521 IRB.CreateIntToPtr(OriginLong, PointerType::get(MS.OriginTy, 0)); 1522 } 1523 return std::make_pair(ShadowPtr, OriginPtr); 1524 } 1525 1526 std::pair<Value *, Value *> getShadowOriginPtrKernel(Value *Addr, 1527 IRBuilder<> &IRB, 1528 Type *ShadowTy, 1529 bool isStore) { 1530 Value *ShadowOriginPtrs; 1531 const DataLayout &DL = F.getParent()->getDataLayout(); 1532 int Size = DL.getTypeStoreSize(ShadowTy); 1533 1534 FunctionCallee Getter = MS.getKmsanShadowOriginAccessFn(isStore, Size); 1535 Value *AddrCast = 1536 IRB.CreatePointerCast(Addr, PointerType::get(IRB.getInt8Ty(), 0)); 1537 if (Getter) { 1538 ShadowOriginPtrs = IRB.CreateCall(Getter, AddrCast); 1539 } else { 1540 Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size); 1541 ShadowOriginPtrs = IRB.CreateCall(isStore ? MS.MsanMetadataPtrForStoreN 1542 : MS.MsanMetadataPtrForLoadN, 1543 {AddrCast, SizeVal}); 1544 } 1545 Value *ShadowPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 0); 1546 ShadowPtr = IRB.CreatePointerCast(ShadowPtr, PointerType::get(ShadowTy, 0)); 1547 Value *OriginPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 1); 1548 1549 return std::make_pair(ShadowPtr, OriginPtr); 1550 } 1551 1552 std::pair<Value *, Value *> getShadowOriginPtr(Value *Addr, IRBuilder<> &IRB, 1553 Type *ShadowTy, 1554 MaybeAlign Alignment, 1555 bool isStore) { 1556 if (MS.CompileKernel) 1557 return getShadowOriginPtrKernel(Addr, IRB, ShadowTy, isStore); 1558 return getShadowOriginPtrUserspace(Addr, IRB, ShadowTy, Alignment); 1559 } 1560 1561 /// Compute the shadow address for a given function argument. 1562 /// 1563 /// Shadow = ParamTLS+ArgOffset. 1564 Value *getShadowPtrForArgument(Value *A, IRBuilder<> &IRB, 1565 int ArgOffset) { 1566 Value *Base = IRB.CreatePointerCast(MS.ParamTLS, MS.IntptrTy); 1567 if (ArgOffset) 1568 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 1569 return IRB.CreateIntToPtr(Base, PointerType::get(getShadowTy(A), 0), 1570 "_msarg"); 1571 } 1572 1573 /// Compute the origin address for a given function argument. 
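///
/// Origin = ParamOriginTLS+ArgOffset (mirroring the shadow computation above).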
1574 Value *getOriginPtrForArgument(Value *A, IRBuilder<> &IRB,
1575 int ArgOffset) {
1576 if (!MS.TrackOrigins)
1577 return nullptr;
1578 Value *Base = IRB.CreatePointerCast(MS.ParamOriginTLS, MS.IntptrTy);
1579 if (ArgOffset)
1580 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
1581 return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
1582 "_msarg_o");
1583 }
1584
1585 /// Compute the shadow address for a retval.
1586 Value *getShadowPtrForRetval(Value *A, IRBuilder<> &IRB) {
1587 return IRB.CreatePointerCast(MS.RetvalTLS,
1588 PointerType::get(getShadowTy(A), 0),
1589 "_msret");
1590 }
1591
1592 /// Compute the origin address for a retval.
1593 Value *getOriginPtrForRetval(IRBuilder<> &IRB) {
1594 // We keep a single origin for the entire retval. Might be too optimistic.
1595 return MS.RetvalOriginTLS;
1596 }
1597
1598 /// Set SV to be the shadow value for V.
1599 void setShadow(Value *V, Value *SV) {
1600 assert(!ShadowMap.count(V) && "Values may only have one shadow");
1601 ShadowMap[V] = PropagateShadow ? SV : getCleanShadow(V);
1602 }
1603
1604 /// Set Origin to be the origin value for V.
1605 void setOrigin(Value *V, Value *Origin) {
1606 if (!MS.TrackOrigins) return;
1607 assert(!OriginMap.count(V) && "Values may only have one origin");
1608 LLVM_DEBUG(dbgs() << "ORIGIN: " << *V << " ==> " << *Origin << "\n");
1609 OriginMap[V] = Origin;
1610 }
1611
1612 Constant *getCleanShadow(Type *OrigTy) {
1613 Type *ShadowTy = getShadowTy(OrigTy);
1614 if (!ShadowTy)
1615 return nullptr;
1616 return Constant::getNullValue(ShadowTy);
1617 }
1618
1619 /// Create a clean shadow value for a given value.
1620 ///
1621 /// Clean shadow (all zeroes) means all bits of the value are defined
1622 /// (initialized).
1623 Constant *getCleanShadow(Value *V) {
1624 return getCleanShadow(V->getType());
1625 }
1626
1627 /// Create a dirty shadow of a given shadow type.
1628 Constant *getPoisonedShadow(Type *ShadowTy) {
1629 assert(ShadowTy);
1630 if (isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy))
1631 return Constant::getAllOnesValue(ShadowTy);
1632 if (ArrayType *AT = dyn_cast<ArrayType>(ShadowTy)) {
1633 SmallVector<Constant *, 4> Vals(AT->getNumElements(),
1634 getPoisonedShadow(AT->getElementType()));
1635 return ConstantArray::get(AT, Vals);
1636 }
1637 if (StructType *ST = dyn_cast<StructType>(ShadowTy)) {
1638 SmallVector<Constant *, 4> Vals;
1639 for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
1640 Vals.push_back(getPoisonedShadow(ST->getElementType(i)));
1641 return ConstantStruct::get(ST, Vals);
1642 }
1643 llvm_unreachable("Unexpected shadow type");
1644 }
1645
1646 /// Create a dirty shadow for a given value.
1647 Constant *getPoisonedShadow(Value *V) {
1648 Type *ShadowTy = getShadowTy(V);
1649 if (!ShadowTy)
1650 return nullptr;
1651 return getPoisonedShadow(ShadowTy);
1652 }
1653
1654 /// Create a clean (zero) origin.
1655 Value *getCleanOrigin() {
1656 return Constant::getNullValue(MS.OriginTy);
1657 }
1658
1659 /// Get the shadow value for a given Value.
1660 ///
1661 /// This function either returns the value set earlier with setShadow,
1662 /// or extracts it from ParamTLS (for function arguments).
1663 Value *getShadow(Value *V) {
1664 if (!PropagateShadow) return getCleanShadow(V);
1665 if (Instruction *I = dyn_cast<Instruction>(V)) {
1666 if (I->getMetadata("nosanitize"))
1667 return getCleanShadow(V);
1668 // For instructions the shadow is already stored in the map.
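// (setShadow() stored it there when the defining instruction was visited;
// a missing entry indicates an instrumentation bug, hence the assert below.)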
1669 Value *Shadow = ShadowMap[V]; 1670 if (!Shadow) { 1671 LLVM_DEBUG(dbgs() << "No shadow: " << *V << "\n" << *(I->getParent())); 1672 (void)I; 1673 assert(Shadow && "No shadow for a value"); 1674 } 1675 return Shadow; 1676 } 1677 if (UndefValue *U = dyn_cast<UndefValue>(V)) { 1678 Value *AllOnes = PoisonUndef ? getPoisonedShadow(V) : getCleanShadow(V); 1679 LLVM_DEBUG(dbgs() << "Undef: " << *U << " ==> " << *AllOnes << "\n"); 1680 (void)U; 1681 return AllOnes; 1682 } 1683 if (Argument *A = dyn_cast<Argument>(V)) { 1684 // For arguments we compute the shadow on demand and store it in the map. 1685 Value **ShadowPtr = &ShadowMap[V]; 1686 if (*ShadowPtr) 1687 return *ShadowPtr; 1688 Function *F = A->getParent(); 1689 IRBuilder<> EntryIRB(FnPrologueEnd); 1690 unsigned ArgOffset = 0; 1691 const DataLayout &DL = F->getParent()->getDataLayout(); 1692 for (auto &FArg : F->args()) { 1693 if (!FArg.getType()->isSized()) { 1694 LLVM_DEBUG(dbgs() << "Arg is not sized\n"); 1695 continue; 1696 } 1697 1698 bool FArgByVal = FArg.hasByValAttr(); 1699 bool FArgNoUndef = FArg.hasAttribute(Attribute::NoUndef); 1700 bool FArgEagerCheck = ClEagerChecks && !FArgByVal && FArgNoUndef; 1701 unsigned Size = 1702 FArg.hasByValAttr() 1703 ? DL.getTypeAllocSize(FArg.getParamByValType()) 1704 : DL.getTypeAllocSize(FArg.getType()); 1705 1706 if (A == &FArg) { 1707 bool Overflow = ArgOffset + Size > kParamTLSSize; 1708 if (FArgEagerCheck) { 1709 *ShadowPtr = getCleanShadow(V); 1710 setOrigin(A, getCleanOrigin()); 1711 break; 1712 } else if (FArgByVal) { 1713 Value *Base = getShadowPtrForArgument(&FArg, EntryIRB, ArgOffset); 1714 // ByVal pointer itself has clean shadow. We copy the actual 1715 // argument shadow to the underlying memory. 1716 // Figure out maximal valid memcpy alignment. 1717 const Align ArgAlign = DL.getValueOrABITypeAlignment( 1718 MaybeAlign(FArg.getParamAlignment()), FArg.getParamByValType()); 1719 Value *CpShadowPtr = 1720 getShadowOriginPtr(V, EntryIRB, EntryIRB.getInt8Ty(), ArgAlign, 1721 /*isStore*/ true) 1722 .first; 1723 // TODO(glider): need to copy origins. 1724 if (Overflow) { 1725 // ParamTLS overflow. 1726 EntryIRB.CreateMemSet( 1727 CpShadowPtr, Constant::getNullValue(EntryIRB.getInt8Ty()), 1728 Size, ArgAlign); 1729 } else { 1730 const Align CopyAlign = std::min(ArgAlign, kShadowTLSAlignment); 1731 Value *Cpy = EntryIRB.CreateMemCpy(CpShadowPtr, CopyAlign, Base, 1732 CopyAlign, Size); 1733 LLVM_DEBUG(dbgs() << " ByValCpy: " << *Cpy << "\n"); 1734 (void)Cpy; 1735 } 1736 *ShadowPtr = getCleanShadow(V); 1737 } else { 1738 // Shadow over TLS 1739 Value *Base = getShadowPtrForArgument(&FArg, EntryIRB, ArgOffset); 1740 if (Overflow) { 1741 // ParamTLS overflow. 1742 *ShadowPtr = getCleanShadow(V); 1743 } else { 1744 *ShadowPtr = EntryIRB.CreateAlignedLoad(getShadowTy(&FArg), Base, 1745 kShadowTLSAlignment); 1746 } 1747 } 1748 LLVM_DEBUG(dbgs() 1749 << " ARG: " << FArg << " ==> " << **ShadowPtr << "\n"); 1750 if (MS.TrackOrigins && !Overflow) { 1751 Value *OriginPtr = 1752 getOriginPtrForArgument(&FArg, EntryIRB, ArgOffset); 1753 setOrigin(A, EntryIRB.CreateLoad(MS.OriginTy, OriginPtr)); 1754 } else { 1755 setOrigin(A, getCleanOrigin()); 1756 } 1757 1758 break; 1759 } 1760 1761 ArgOffset += alignTo(Size, kShadowTLSAlignment); 1762 } 1763 assert(*ShadowPtr && "Could not find shadow for an argument"); 1764 return *ShadowPtr; 1765 } 1766 // For everything else the shadow is zero. 1767 return getCleanShadow(V); 1768 } 1769 1770 /// Get the shadow for i-th argument of the instruction I. 
1771 Value *getShadow(Instruction *I, int i) {
1772 return getShadow(I->getOperand(i));
1773 }
1774
1775 /// Get the origin for a value.
1776 Value *getOrigin(Value *V) {
1777 if (!MS.TrackOrigins) return nullptr;
1778 if (!PropagateShadow) return getCleanOrigin();
1779 if (isa<Constant>(V)) return getCleanOrigin();
1780 assert((isa<Instruction>(V) || isa<Argument>(V)) &&
1781 "Unexpected value type in getOrigin()");
1782 if (Instruction *I = dyn_cast<Instruction>(V)) {
1783 if (I->getMetadata("nosanitize"))
1784 return getCleanOrigin();
1785 }
1786 Value *Origin = OriginMap[V];
1787 assert(Origin && "Missing origin");
1788 return Origin;
1789 }
1790
1791 /// Get the origin for i-th argument of the instruction I.
1792 Value *getOrigin(Instruction *I, int i) {
1793 return getOrigin(I->getOperand(i));
1794 }
1795
1796 /// Remember the place where a shadow check should be inserted.
1797 ///
1798 /// This location will later be instrumented with a check that will print a
1799 /// UMR warning at runtime if the shadow value is not 0.
1800 void insertShadowCheck(Value *Shadow, Value *Origin, Instruction *OrigIns) {
1801 assert(Shadow);
1802 if (!InsertChecks) return;
1803 #ifndef NDEBUG
1804 Type *ShadowTy = Shadow->getType();
1805 assert((isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy) ||
1806 isa<StructType>(ShadowTy) || isa<ArrayType>(ShadowTy)) &&
1807 "Can only insert checks for integer, vector, and aggregate shadow "
1808 "types");
1809 #endif
1810 InstrumentationList.push_back(
1811 ShadowOriginAndInsertPoint(Shadow, Origin, OrigIns));
1812 }
1813
1814 /// Remember the place where a shadow check should be inserted.
1815 ///
1816 /// This location will later be instrumented with a check that will print a
1817 /// UMR warning at runtime if the value is not fully defined.
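/// (Unlike the overload above, this variant first materializes the shadow of
/// \p Val itself; constant shadows are skipped unless ClCheckConstantShadow
/// is set.)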
1818 void insertShadowCheck(Value *Val, Instruction *OrigIns) { 1819 assert(Val); 1820 Value *Shadow, *Origin; 1821 if (ClCheckConstantShadow) { 1822 Shadow = getShadow(Val); 1823 if (!Shadow) return; 1824 Origin = getOrigin(Val); 1825 } else { 1826 Shadow = dyn_cast_or_null<Instruction>(getShadow(Val)); 1827 if (!Shadow) return; 1828 Origin = dyn_cast_or_null<Instruction>(getOrigin(Val)); 1829 } 1830 insertShadowCheck(Shadow, Origin, OrigIns); 1831 } 1832 1833 AtomicOrdering addReleaseOrdering(AtomicOrdering a) { 1834 switch (a) { 1835 case AtomicOrdering::NotAtomic: 1836 return AtomicOrdering::NotAtomic; 1837 case AtomicOrdering::Unordered: 1838 case AtomicOrdering::Monotonic: 1839 case AtomicOrdering::Release: 1840 return AtomicOrdering::Release; 1841 case AtomicOrdering::Acquire: 1842 case AtomicOrdering::AcquireRelease: 1843 return AtomicOrdering::AcquireRelease; 1844 case AtomicOrdering::SequentiallyConsistent: 1845 return AtomicOrdering::SequentiallyConsistent; 1846 } 1847 llvm_unreachable("Unknown ordering"); 1848 } 1849 1850 Value *makeAddReleaseOrderingTable(IRBuilder<> &IRB) { 1851 constexpr int NumOrderings = (int)AtomicOrderingCABI::seq_cst + 1; 1852 uint32_t OrderingTable[NumOrderings] = {}; 1853 1854 OrderingTable[(int)AtomicOrderingCABI::relaxed] = 1855 OrderingTable[(int)AtomicOrderingCABI::release] = 1856 (int)AtomicOrderingCABI::release; 1857 OrderingTable[(int)AtomicOrderingCABI::consume] = 1858 OrderingTable[(int)AtomicOrderingCABI::acquire] = 1859 OrderingTable[(int)AtomicOrderingCABI::acq_rel] = 1860 (int)AtomicOrderingCABI::acq_rel; 1861 OrderingTable[(int)AtomicOrderingCABI::seq_cst] = 1862 (int)AtomicOrderingCABI::seq_cst; 1863 1864 return ConstantDataVector::get(IRB.getContext(), 1865 makeArrayRef(OrderingTable, NumOrderings)); 1866 } 1867 1868 AtomicOrdering addAcquireOrdering(AtomicOrdering a) { 1869 switch (a) { 1870 case AtomicOrdering::NotAtomic: 1871 return AtomicOrdering::NotAtomic; 1872 case AtomicOrdering::Unordered: 1873 case AtomicOrdering::Monotonic: 1874 case AtomicOrdering::Acquire: 1875 return AtomicOrdering::Acquire; 1876 case AtomicOrdering::Release: 1877 case AtomicOrdering::AcquireRelease: 1878 return AtomicOrdering::AcquireRelease; 1879 case AtomicOrdering::SequentiallyConsistent: 1880 return AtomicOrdering::SequentiallyConsistent; 1881 } 1882 llvm_unreachable("Unknown ordering"); 1883 } 1884 1885 Value *makeAddAcquireOrderingTable(IRBuilder<> &IRB) { 1886 constexpr int NumOrderings = (int)AtomicOrderingCABI::seq_cst + 1; 1887 uint32_t OrderingTable[NumOrderings] = {}; 1888 1889 OrderingTable[(int)AtomicOrderingCABI::relaxed] = 1890 OrderingTable[(int)AtomicOrderingCABI::acquire] = 1891 OrderingTable[(int)AtomicOrderingCABI::consume] = 1892 (int)AtomicOrderingCABI::acquire; 1893 OrderingTable[(int)AtomicOrderingCABI::release] = 1894 OrderingTable[(int)AtomicOrderingCABI::acq_rel] = 1895 (int)AtomicOrderingCABI::acq_rel; 1896 OrderingTable[(int)AtomicOrderingCABI::seq_cst] = 1897 (int)AtomicOrderingCABI::seq_cst; 1898 1899 return ConstantDataVector::get(IRB.getContext(), 1900 makeArrayRef(OrderingTable, NumOrderings)); 1901 } 1902 1903 // ------------------- Visitors. 
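// Example (illustrative): visiting '%x = load i32, i32* %p' dispatches to
// visitLoadInst() below, which loads the shadow of *%p and records it as
// the shadow of %x.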
1904 using InstVisitor<MemorySanitizerVisitor>::visit; 1905 void visit(Instruction &I) { 1906 if (I.getMetadata("nosanitize")) 1907 return; 1908 // Don't want to visit if we're in the prologue 1909 if (isInPrologue(I)) 1910 return; 1911 InstVisitor<MemorySanitizerVisitor>::visit(I); 1912 } 1913 1914 /// Instrument LoadInst 1915 /// 1916 /// Loads the corresponding shadow and (optionally) origin. 1917 /// Optionally, checks that the load address is fully defined. 1918 void visitLoadInst(LoadInst &I) { 1919 assert(I.getType()->isSized() && "Load type must have size"); 1920 assert(!I.getMetadata("nosanitize")); 1921 IRBuilder<> IRB(I.getNextNode()); 1922 Type *ShadowTy = getShadowTy(&I); 1923 Value *Addr = I.getPointerOperand(); 1924 Value *ShadowPtr = nullptr, *OriginPtr = nullptr; 1925 const Align Alignment = assumeAligned(I.getAlignment()); 1926 if (PropagateShadow) { 1927 std::tie(ShadowPtr, OriginPtr) = 1928 getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false); 1929 setShadow(&I, 1930 IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld")); 1931 } else { 1932 setShadow(&I, getCleanShadow(&I)); 1933 } 1934 1935 if (ClCheckAccessAddress) 1936 insertShadowCheck(I.getPointerOperand(), &I); 1937 1938 if (I.isAtomic()) 1939 I.setOrdering(addAcquireOrdering(I.getOrdering())); 1940 1941 if (MS.TrackOrigins) { 1942 if (PropagateShadow) { 1943 const Align OriginAlignment = std::max(kMinOriginAlignment, Alignment); 1944 setOrigin( 1945 &I, IRB.CreateAlignedLoad(MS.OriginTy, OriginPtr, OriginAlignment)); 1946 } else { 1947 setOrigin(&I, getCleanOrigin()); 1948 } 1949 } 1950 } 1951 1952 /// Instrument StoreInst 1953 /// 1954 /// Stores the corresponding shadow and (optionally) origin. 1955 /// Optionally, checks that the store address is fully defined. 1956 void visitStoreInst(StoreInst &I) { 1957 StoreList.push_back(&I); 1958 if (ClCheckAccessAddress) 1959 insertShadowCheck(I.getPointerOperand(), &I); 1960 } 1961 1962 void handleCASOrRMW(Instruction &I) { 1963 assert(isa<AtomicRMWInst>(I) || isa<AtomicCmpXchgInst>(I)); 1964 1965 IRBuilder<> IRB(&I); 1966 Value *Addr = I.getOperand(0); 1967 Value *Val = I.getOperand(1); 1968 Value *ShadowPtr = getShadowOriginPtr(Addr, IRB, Val->getType(), Align(1), 1969 /*isStore*/ true) 1970 .first; 1971 1972 if (ClCheckAccessAddress) 1973 insertShadowCheck(Addr, &I); 1974 1975 // Only test the conditional argument of cmpxchg instruction. 1976 // The other argument can potentially be uninitialized, but we can not 1977 // detect this situation reliably without possible false positives. 1978 if (isa<AtomicCmpXchgInst>(I)) 1979 insertShadowCheck(Val, &I); 1980 1981 IRB.CreateStore(getCleanShadow(Val), ShadowPtr); 1982 1983 setShadow(&I, getCleanShadow(&I)); 1984 setOrigin(&I, getCleanOrigin()); 1985 } 1986 1987 void visitAtomicRMWInst(AtomicRMWInst &I) { 1988 handleCASOrRMW(I); 1989 I.setOrdering(addReleaseOrdering(I.getOrdering())); 1990 } 1991 1992 void visitAtomicCmpXchgInst(AtomicCmpXchgInst &I) { 1993 handleCASOrRMW(I); 1994 I.setSuccessOrdering(addReleaseOrdering(I.getSuccessOrdering())); 1995 } 1996 1997 // Vector manipulation. 
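// Example (illustrative): for '%y = extractelement <4 x i32> %v, i32 %i',
// the shadow of %y is the same extractelement applied to the shadow of %v,
// while the index %i itself is checked for poison.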
1998 void visitExtractElementInst(ExtractElementInst &I) {
1999 insertShadowCheck(I.getOperand(1), &I);
2000 IRBuilder<> IRB(&I);
2001 setShadow(&I, IRB.CreateExtractElement(getShadow(&I, 0), I.getOperand(1),
2002 "_msprop"));
2003 setOrigin(&I, getOrigin(&I, 0));
2004 }
2005
2006 void visitInsertElementInst(InsertElementInst &I) {
2007 insertShadowCheck(I.getOperand(2), &I);
2008 IRBuilder<> IRB(&I);
2009 setShadow(&I, IRB.CreateInsertElement(getShadow(&I, 0), getShadow(&I, 1),
2010 I.getOperand(2), "_msprop"));
2011 setOriginForNaryOp(I);
2012 }
2013
2014 void visitShuffleVectorInst(ShuffleVectorInst &I) {
2015 IRBuilder<> IRB(&I);
2016 setShadow(&I, IRB.CreateShuffleVector(getShadow(&I, 0), getShadow(&I, 1),
2017 I.getShuffleMask(), "_msprop"));
2018 setOriginForNaryOp(I);
2019 }
2020
2021 // Casts.
2022 void visitSExtInst(SExtInst &I) {
2023 IRBuilder<> IRB(&I);
2024 setShadow(&I, IRB.CreateSExt(getShadow(&I, 0), I.getType(), "_msprop"));
2025 setOrigin(&I, getOrigin(&I, 0));
2026 }
2027
2028 void visitZExtInst(ZExtInst &I) {
2029 IRBuilder<> IRB(&I);
2030 setShadow(&I, IRB.CreateZExt(getShadow(&I, 0), I.getType(), "_msprop"));
2031 setOrigin(&I, getOrigin(&I, 0));
2032 }
2033
2034 void visitTruncInst(TruncInst &I) {
2035 IRBuilder<> IRB(&I);
2036 setShadow(&I, IRB.CreateTrunc(getShadow(&I, 0), I.getType(), "_msprop"));
2037 setOrigin(&I, getOrigin(&I, 0));
2038 }
2039
2040 void visitBitCastInst(BitCastInst &I) {
2041 // Special case: if this is the bitcast (there is exactly 1 allowed) between
2042 // a musttail call and a ret, don't instrument. New instructions are not
2043 // allowed after a musttail call.
2044 if (auto *CI = dyn_cast<CallInst>(I.getOperand(0)))
2045 if (CI->isMustTailCall())
2046 return;
2047 IRBuilder<> IRB(&I);
2048 setShadow(&I, IRB.CreateBitCast(getShadow(&I, 0), getShadowTy(&I)));
2049 setOrigin(&I, getOrigin(&I, 0));
2050 }
2051
2052 void visitPtrToIntInst(PtrToIntInst &I) {
2053 IRBuilder<> IRB(&I);
2054 setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
2055 "_msprop_ptrtoint"));
2056 setOrigin(&I, getOrigin(&I, 0));
2057 }
2058
2059 void visitIntToPtrInst(IntToPtrInst &I) {
2060 IRBuilder<> IRB(&I);
2061 setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
2062 "_msprop_inttoptr"));
2063 setOrigin(&I, getOrigin(&I, 0));
2064 }
2065
2066 void visitFPToSIInst(CastInst& I) { handleShadowOr(I); }
2067 void visitFPToUIInst(CastInst& I) { handleShadowOr(I); }
2068 void visitSIToFPInst(CastInst& I) { handleShadowOr(I); }
2069 void visitUIToFPInst(CastInst& I) { handleShadowOr(I); }
2070 void visitFPExtInst(CastInst& I) { handleShadowOr(I); }
2071 void visitFPTruncInst(CastInst& I) { handleShadowOr(I); }
2072
2073 /// Propagate shadow for bitwise AND.
2074 ///
2075 /// This code is exact, i.e. if, for example, a bit in the left argument
2076 /// is defined and 0, then neither the value nor the definedness of the
2077 /// corresponding bit in the right argument affects the resulting shadow.
2078 void visitAnd(BinaryOperator &I) {
2079 IRBuilder<> IRB(&I);
2080 // "And" of 0 and a poisoned value results in an unpoisoned value.
2081 // 1&1 => 1; 0&1 => 0; p&1 => p;
2082 // 1&0 => 0; 0&0 => 0; p&0 => 0;
2083 // 1&p => p; 0&p => 0; p&p => p;
2084 // S = (S1 & S2) | (V1 & S2) | (S1 & V2)
2085 Value *S1 = getShadow(&I, 0);
2086 Value *S2 = getShadow(&I, 1);
2087 Value *V1 = I.getOperand(0);
2088 Value *V2 = I.getOperand(1);
2089 if (V1->getType() != S1->getType()) {
2090 V1 = IRB.CreateIntCast(V1, S1->getType(), false);
2091 V2 = IRB.CreateIntCast(V2, S2->getType(), false);
2092 }
2093 Value *S1S2 = IRB.CreateAnd(S1, S2);
2094 Value *V1S2 = IRB.CreateAnd(V1, S2);
2095 Value *S1V2 = IRB.CreateAnd(S1, V2);
2096 setShadow(&I, IRB.CreateOr({S1S2, V1S2, S1V2}));
2097 setOriginForNaryOp(I);
2098 }
2099
2100 void visitOr(BinaryOperator &I) {
2101 IRBuilder<> IRB(&I);
2102 // "Or" of 1 and a poisoned value results in an unpoisoned value.
2103 // 1|1 => 1; 0|1 => 1; p|1 => 1;
2104 // 1|0 => 1; 0|0 => 0; p|0 => p;
2105 // 1|p => 1; 0|p => p; p|p => p;
2106 // S = (S1 & S2) | (~V1 & S2) | (S1 & ~V2)
2107 Value *S1 = getShadow(&I, 0);
2108 Value *S2 = getShadow(&I, 1);
2109 Value *V1 = IRB.CreateNot(I.getOperand(0));
2110 Value *V2 = IRB.CreateNot(I.getOperand(1));
2111 if (V1->getType() != S1->getType()) {
2112 V1 = IRB.CreateIntCast(V1, S1->getType(), false);
2113 V2 = IRB.CreateIntCast(V2, S2->getType(), false);
2114 }
2115 Value *S1S2 = IRB.CreateAnd(S1, S2);
2116 Value *V1S2 = IRB.CreateAnd(V1, S2);
2117 Value *S1V2 = IRB.CreateAnd(S1, V2);
2118 setShadow(&I, IRB.CreateOr({S1S2, V1S2, S1V2}));
2119 setOriginForNaryOp(I);
2120 }
2121
2122 /// Default propagation of shadow and/or origin.
2123 ///
2124 /// This class implements the general case of shadow propagation, used in all
2125 /// cases where we don't know and/or don't care about what the operation
2126 /// actually does. It converts all input shadow values to a common type
2127 /// (extending or truncating as necessary), and bitwise OR's them.
2128 ///
2129 /// This is much cheaper than inserting checks (i.e. requiring inputs to be
2130 /// fully initialized), and less prone to false positives.
2131 ///
2132 /// This class also implements the general case of origin propagation. For a
2133 /// Nary operation, result origin is set to the origin of an argument that is
2134 /// not entirely initialized. If there is more than one such argument, the
2135 /// rightmost of them is picked. It does not matter which one is picked if all
2136 /// arguments are initialized.
2137 template <bool CombineShadow>
2138 class Combiner {
2139 Value *Shadow = nullptr;
2140 Value *Origin = nullptr;
2141 IRBuilder<> &IRB;
2142 MemorySanitizerVisitor *MSV;
2143
2144 public:
2145 Combiner(MemorySanitizerVisitor *MSV, IRBuilder<> &IRB)
2146 : IRB(IRB), MSV(MSV) {}
2147
2148 /// Add a pair of shadow and origin values to the mix.
2149 Combiner &Add(Value *OpShadow, Value *OpOrigin) {
2150 if (CombineShadow) {
2151 assert(OpShadow);
2152 if (!Shadow)
2153 Shadow = OpShadow;
2154 else {
2155 OpShadow = MSV->CreateShadowCast(IRB, OpShadow, Shadow->getType());
2156 Shadow = IRB.CreateOr(Shadow, OpShadow, "_msprop");
2157 }
2158 }
2159
2160 if (MSV->MS.TrackOrigins) {
2161 assert(OpOrigin);
2162 if (!Origin) {
2163 Origin = OpOrigin;
2164 } else {
2165 Constant *ConstOrigin = dyn_cast<Constant>(OpOrigin);
2166 // No point in adding something that might result in 0 origin value.
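// (A constant zero origin is what getCleanOrigin() returns for fully
// initialized values, so it carries no allocation information.)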
2167 if (!ConstOrigin || !ConstOrigin->isNullValue()) { 2168 Value *FlatShadow = MSV->convertShadowToScalar(OpShadow, IRB); 2169 Value *Cond = 2170 IRB.CreateICmpNE(FlatShadow, MSV->getCleanShadow(FlatShadow)); 2171 Origin = IRB.CreateSelect(Cond, OpOrigin, Origin); 2172 } 2173 } 2174 } 2175 return *this; 2176 } 2177 2178 /// Add an application value to the mix. 2179 Combiner &Add(Value *V) { 2180 Value *OpShadow = MSV->getShadow(V); 2181 Value *OpOrigin = MSV->MS.TrackOrigins ? MSV->getOrigin(V) : nullptr; 2182 return Add(OpShadow, OpOrigin); 2183 } 2184 2185 /// Set the current combined values as the given instruction's shadow 2186 /// and origin. 2187 void Done(Instruction *I) { 2188 if (CombineShadow) { 2189 assert(Shadow); 2190 Shadow = MSV->CreateShadowCast(IRB, Shadow, MSV->getShadowTy(I)); 2191 MSV->setShadow(I, Shadow); 2192 } 2193 if (MSV->MS.TrackOrigins) { 2194 assert(Origin); 2195 MSV->setOrigin(I, Origin); 2196 } 2197 } 2198 }; 2199 2200 using ShadowAndOriginCombiner = Combiner<true>; 2201 using OriginCombiner = Combiner<false>; 2202 2203 /// Propagate origin for arbitrary operation. 2204 void setOriginForNaryOp(Instruction &I) { 2205 if (!MS.TrackOrigins) return; 2206 IRBuilder<> IRB(&I); 2207 OriginCombiner OC(this, IRB); 2208 for (Use &Op : I.operands()) 2209 OC.Add(Op.get()); 2210 OC.Done(&I); 2211 } 2212 2213 size_t VectorOrPrimitiveTypeSizeInBits(Type *Ty) { 2214 assert(!(Ty->isVectorTy() && Ty->getScalarType()->isPointerTy()) && 2215 "Vector of pointers is not a valid shadow type"); 2216 return Ty->isVectorTy() ? cast<FixedVectorType>(Ty)->getNumElements() * 2217 Ty->getScalarSizeInBits() 2218 : Ty->getPrimitiveSizeInBits(); 2219 } 2220 2221 /// Cast between two shadow types, extending or truncating as 2222 /// necessary. 2223 Value *CreateShadowCast(IRBuilder<> &IRB, Value *V, Type *dstTy, 2224 bool Signed = false) { 2225 Type *srcTy = V->getType(); 2226 size_t srcSizeInBits = VectorOrPrimitiveTypeSizeInBits(srcTy); 2227 size_t dstSizeInBits = VectorOrPrimitiveTypeSizeInBits(dstTy); 2228 if (srcSizeInBits > 1 && dstSizeInBits == 1) 2229 return IRB.CreateICmpNE(V, getCleanShadow(V)); 2230 2231 if (dstTy->isIntegerTy() && srcTy->isIntegerTy()) 2232 return IRB.CreateIntCast(V, dstTy, Signed); 2233 if (dstTy->isVectorTy() && srcTy->isVectorTy() && 2234 cast<FixedVectorType>(dstTy)->getNumElements() == 2235 cast<FixedVectorType>(srcTy)->getNumElements()) 2236 return IRB.CreateIntCast(V, dstTy, Signed); 2237 Value *V1 = IRB.CreateBitCast(V, Type::getIntNTy(*MS.C, srcSizeInBits)); 2238 Value *V2 = 2239 IRB.CreateIntCast(V1, Type::getIntNTy(*MS.C, dstSizeInBits), Signed); 2240 return IRB.CreateBitCast(V2, dstTy); 2241 // TODO: handle struct types. 2242 } 2243 2244 /// Cast an application value to the type of its own shadow. 2245 Value *CreateAppToShadowCast(IRBuilder<> &IRB, Value *V) { 2246 Type *ShadowTy = getShadowTy(V); 2247 if (V->getType() == ShadowTy) 2248 return V; 2249 if (V->getType()->isPtrOrPtrVectorTy()) 2250 return IRB.CreatePtrToInt(V, ShadowTy); 2251 else 2252 return IRB.CreateBitCast(V, ShadowTy); 2253 } 2254 2255 /// Propagate shadow for arbitrary operation. 2256 void handleShadowOr(Instruction &I) { 2257 IRBuilder<> IRB(&I); 2258 ShadowAndOriginCombiner SC(this, IRB); 2259 for (Use &Op : I.operands()) 2260 SC.Add(Op.get()); 2261 SC.Done(&I); 2262 } 2263 2264 void visitFNeg(UnaryOperator &I) { handleShadowOr(I); } 2265 2266 // Handle multiplication by constant. 
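// E.g. (illustrative) in 'x * 4' the two low bits of the result are zero
// and thus defined regardless of x, so the result shadow is 'Sx << 2'.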
2267 //
2268 // Handle a special case of multiplication by constant that may have one or
2269 // more zeros in the lower bits. This makes the corresponding number of lower
2270 // bits of the result zero as well. We model it by shifting the other operand
2271 // shadow left by the required number of bits. Effectively, we transform
2272 // (X * (A * 2**B)) to ((X << B) * A) and instrument (X << B) as (Sx << B).
2273 // We use multiplication by 2**N instead of shift to cover the case of
2274 // multiplication by 0, which may occur in some elements of a vector operand.
2275 void handleMulByConstant(BinaryOperator &I, Constant *ConstArg,
2276 Value *OtherArg) {
2277 Constant *ShadowMul;
2278 Type *Ty = ConstArg->getType();
2279 if (auto *VTy = dyn_cast<VectorType>(Ty)) {
2280 unsigned NumElements = cast<FixedVectorType>(VTy)->getNumElements();
2281 Type *EltTy = VTy->getElementType();
2282 SmallVector<Constant *, 16> Elements;
2283 for (unsigned Idx = 0; Idx < NumElements; ++Idx) {
2284 if (ConstantInt *Elt =
2285 dyn_cast<ConstantInt>(ConstArg->getAggregateElement(Idx))) {
2286 const APInt &V = Elt->getValue();
2287 APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
2288 Elements.push_back(ConstantInt::get(EltTy, V2));
2289 } else {
2290 Elements.push_back(ConstantInt::get(EltTy, 1));
2291 }
2292 }
2293 ShadowMul = ConstantVector::get(Elements);
2294 } else {
2295 if (ConstantInt *Elt = dyn_cast<ConstantInt>(ConstArg)) {
2296 const APInt &V = Elt->getValue();
2297 APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
2298 ShadowMul = ConstantInt::get(Ty, V2);
2299 } else {
2300 ShadowMul = ConstantInt::get(Ty, 1);
2301 }
2302 }
2303
2304 IRBuilder<> IRB(&I);
2305 setShadow(&I,
2306 IRB.CreateMul(getShadow(OtherArg), ShadowMul, "msprop_mul_cst"));
2307 setOrigin(&I, getOrigin(OtherArg));
2308 }
2309
2310 void visitMul(BinaryOperator &I) {
2311 Constant *constOp0 = dyn_cast<Constant>(I.getOperand(0));
2312 Constant *constOp1 = dyn_cast<Constant>(I.getOperand(1));
2313 if (constOp0 && !constOp1)
2314 handleMulByConstant(I, constOp0, I.getOperand(1));
2315 else if (constOp1 && !constOp0)
2316 handleMulByConstant(I, constOp1, I.getOperand(0));
2317 else
2318 handleShadowOr(I);
2319 }
2320
2321 void visitFAdd(BinaryOperator &I) { handleShadowOr(I); }
2322 void visitFSub(BinaryOperator &I) { handleShadowOr(I); }
2323 void visitFMul(BinaryOperator &I) { handleShadowOr(I); }
2324 void visitAdd(BinaryOperator &I) { handleShadowOr(I); }
2325 void visitSub(BinaryOperator &I) { handleShadowOr(I); }
2326 void visitXor(BinaryOperator &I) { handleShadowOr(I); }
2327
2328 void handleIntegerDiv(Instruction &I) {
2329 IRBuilder<> IRB(&I);
2330 // Strict on the second argument.
2331 insertShadowCheck(I.getOperand(1), &I);
2332 setShadow(&I, getShadow(&I, 0));
2333 setOrigin(&I, getOrigin(&I, 0));
2334 }
2335
2336 void visitUDiv(BinaryOperator &I) { handleIntegerDiv(I); }
2337 void visitSDiv(BinaryOperator &I) { handleIntegerDiv(I); }
2338 void visitURem(BinaryOperator &I) { handleIntegerDiv(I); }
2339 void visitSRem(BinaryOperator &I) { handleIntegerDiv(I); }
2340
2341 // Floating point division is side-effect free, so we cannot require that
2342 // the divisor be fully initialized; we must propagate shadow instead. See PR37523.
2343 void visitFDiv(BinaryOperator &I) { handleShadowOr(I); }
2344 void visitFRem(BinaryOperator &I) { handleShadowOr(I); }
2345
2346 /// Instrument == and != comparisons.
2347 /// 2348 /// Sometimes the comparison result is known even if some of the bits of the 2349 /// arguments are not. 2350 void handleEqualityComparison(ICmpInst &I) { 2351 IRBuilder<> IRB(&I); 2352 Value *A = I.getOperand(0); 2353 Value *B = I.getOperand(1); 2354 Value *Sa = getShadow(A); 2355 Value *Sb = getShadow(B); 2356 2357 // Get rid of pointers and vectors of pointers. 2358 // For ints (and vectors of ints), types of A and Sa match, 2359 // and this is a no-op. 2360 A = IRB.CreatePointerCast(A, Sa->getType()); 2361 B = IRB.CreatePointerCast(B, Sb->getType()); 2362 2363 // A == B <==> (C = A^B) == 0 2364 // A != B <==> (C = A^B) != 0 2365 // Sc = Sa | Sb 2366 Value *C = IRB.CreateXor(A, B); 2367 Value *Sc = IRB.CreateOr(Sa, Sb); 2368 // Now dealing with i = (C == 0) comparison (or C != 0, does not matter now) 2369 // Result is defined if one of the following is true 2370 // * there is a defined 1 bit in C 2371 // * C is fully defined 2372 // Si = !(C & ~Sc) && Sc 2373 Value *Zero = Constant::getNullValue(Sc->getType()); 2374 Value *MinusOne = Constant::getAllOnesValue(Sc->getType()); 2375 Value *Si = 2376 IRB.CreateAnd(IRB.CreateICmpNE(Sc, Zero), 2377 IRB.CreateICmpEQ( 2378 IRB.CreateAnd(IRB.CreateXor(Sc, MinusOne), C), Zero)); 2379 Si->setName("_msprop_icmp"); 2380 setShadow(&I, Si); 2381 setOriginForNaryOp(I); 2382 } 2383 2384 /// Build the lowest possible value of V, taking into account V's 2385 /// uninitialized bits. 2386 Value *getLowestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa, 2387 bool isSigned) { 2388 if (isSigned) { 2389 // Split shadow into sign bit and other bits. 2390 Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1); 2391 Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits); 2392 // Maximise the undefined shadow bit, minimize other undefined bits. 2393 return 2394 IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaOtherBits)), SaSignBit); 2395 } else { 2396 // Minimize undefined bits. 2397 return IRB.CreateAnd(A, IRB.CreateNot(Sa)); 2398 } 2399 } 2400 2401 /// Build the highest possible value of V, taking into account V's 2402 /// uninitialized bits. 2403 Value *getHighestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa, 2404 bool isSigned) { 2405 if (isSigned) { 2406 // Split shadow into sign bit and other bits. 2407 Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1); 2408 Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits); 2409 // Minimise the undefined shadow bit, maximise other undefined bits. 2410 return 2411 IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaSignBit)), SaOtherBits); 2412 } else { 2413 // Maximize undefined bits. 2414 return IRB.CreateOr(A, Sa); 2415 } 2416 } 2417 2418 /// Instrument relational comparisons. 2419 /// 2420 /// This function does exact shadow propagation for all relational 2421 /// comparisons of integers, pointers and vectors of those. 2422 /// FIXME: output seems suboptimal when one of the operands is a constant 2423 void handleRelationalComparisonExact(ICmpInst &I) { 2424 IRBuilder<> IRB(&I); 2425 Value *A = I.getOperand(0); 2426 Value *B = I.getOperand(1); 2427 Value *Sa = getShadow(A); 2428 Value *Sb = getShadow(B); 2429 2430 // Get rid of pointers and vectors of pointers. 2431 // For ints (and vectors of ints), types of A and Sa match, 2432 // and this is a no-op. 2433 A = IRB.CreatePointerCast(A, Sa->getType()); 2434 B = IRB.CreatePointerCast(B, Sb->getType()); 2435 2436 // Let [a0, a1] be the interval of possible values of A, taking into account 2437 // its undefined bits. 
Let [b0, b1] be the interval of possible values of B. 2438 // Then (A cmp B) is defined iff (a0 cmp b1) == (a1 cmp b0). 2439 bool IsSigned = I.isSigned(); 2440 Value *S1 = IRB.CreateICmp(I.getPredicate(), 2441 getLowestPossibleValue(IRB, A, Sa, IsSigned), 2442 getHighestPossibleValue(IRB, B, Sb, IsSigned)); 2443 Value *S2 = IRB.CreateICmp(I.getPredicate(), 2444 getHighestPossibleValue(IRB, A, Sa, IsSigned), 2445 getLowestPossibleValue(IRB, B, Sb, IsSigned)); 2446 Value *Si = IRB.CreateXor(S1, S2); 2447 setShadow(&I, Si); 2448 setOriginForNaryOp(I); 2449 } 2450 2451 /// Instrument signed relational comparisons. 2452 /// 2453 /// Handle sign bit tests: x<0, x>=0, x<=-1, x>-1 by propagating the highest 2454 /// bit of the shadow. Everything else is delegated to handleShadowOr(). 2455 void handleSignedRelationalComparison(ICmpInst &I) { 2456 Constant *constOp; 2457 Value *op = nullptr; 2458 CmpInst::Predicate pre; 2459 if ((constOp = dyn_cast<Constant>(I.getOperand(1)))) { 2460 op = I.getOperand(0); 2461 pre = I.getPredicate(); 2462 } else if ((constOp = dyn_cast<Constant>(I.getOperand(0)))) { 2463 op = I.getOperand(1); 2464 pre = I.getSwappedPredicate(); 2465 } else { 2466 handleShadowOr(I); 2467 return; 2468 } 2469 2470 if ((constOp->isNullValue() && 2471 (pre == CmpInst::ICMP_SLT || pre == CmpInst::ICMP_SGE)) || 2472 (constOp->isAllOnesValue() && 2473 (pre == CmpInst::ICMP_SGT || pre == CmpInst::ICMP_SLE))) { 2474 IRBuilder<> IRB(&I); 2475 Value *Shadow = IRB.CreateICmpSLT(getShadow(op), getCleanShadow(op), 2476 "_msprop_icmp_s"); 2477 setShadow(&I, Shadow); 2478 setOrigin(&I, getOrigin(op)); 2479 } else { 2480 handleShadowOr(I); 2481 } 2482 } 2483 2484 void visitICmpInst(ICmpInst &I) { 2485 if (!ClHandleICmp) { 2486 handleShadowOr(I); 2487 return; 2488 } 2489 if (I.isEquality()) { 2490 handleEqualityComparison(I); 2491 return; 2492 } 2493 2494 assert(I.isRelational()); 2495 if (ClHandleICmpExact) { 2496 handleRelationalComparisonExact(I); 2497 return; 2498 } 2499 if (I.isSigned()) { 2500 handleSignedRelationalComparison(I); 2501 return; 2502 } 2503 2504 assert(I.isUnsigned()); 2505 if ((isa<Constant>(I.getOperand(0)) || isa<Constant>(I.getOperand(1)))) { 2506 handleRelationalComparisonExact(I); 2507 return; 2508 } 2509 2510 handleShadowOr(I); 2511 } 2512 2513 void visitFCmpInst(FCmpInst &I) { 2514 handleShadowOr(I); 2515 } 2516 2517 void handleShift(BinaryOperator &I) { 2518 IRBuilder<> IRB(&I); 2519 // If any of the S2 bits are poisoned, the whole thing is poisoned. 2520 // Otherwise perform the same shift on S1. 2521 Value *S1 = getShadow(&I, 0); 2522 Value *S2 = getShadow(&I, 1); 2523 Value *S2Conv = IRB.CreateSExt(IRB.CreateICmpNE(S2, getCleanShadow(S2)), 2524 S2->getType()); 2525 Value *V2 = I.getOperand(1); 2526 Value *Shift = IRB.CreateBinOp(I.getOpcode(), S1, V2); 2527 setShadow(&I, IRB.CreateOr(Shift, S2Conv)); 2528 setOriginForNaryOp(I); 2529 } 2530 2531 void visitShl(BinaryOperator &I) { handleShift(I); } 2532 void visitAShr(BinaryOperator &I) { handleShift(I); } 2533 void visitLShr(BinaryOperator &I) { handleShift(I); } 2534 2535 void handleFunnelShift(IntrinsicInst &I) { 2536 IRBuilder<> IRB(&I); 2537 // If any of the S2 bits are poisoned, the whole thing is poisoned. 2538 // Otherwise perform the same shift on S0 and S1. 
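// E.g. (illustrative) for 'fshl(%a, %b, %c)' the result shadow is
// 'fshl(Sa, Sb, %c)', OR'ed with the sign-extension of 'Sc != 0'.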
2539 Value *S0 = getShadow(&I, 0);
2540 Value *S1 = getShadow(&I, 1);
2541 Value *S2 = getShadow(&I, 2);
2542 Value *S2Conv =
2543 IRB.CreateSExt(IRB.CreateICmpNE(S2, getCleanShadow(S2)), S2->getType());
2544 Value *V2 = I.getOperand(2);
2545 Function *Intrin = Intrinsic::getDeclaration(
2546 I.getModule(), I.getIntrinsicID(), S2Conv->getType());
2547 Value *Shift = IRB.CreateCall(Intrin, {S0, S1, V2});
2548 setShadow(&I, IRB.CreateOr(Shift, S2Conv));
2549 setOriginForNaryOp(I);
2550 }
2551
2552 /// Instrument llvm.memmove
2553 ///
2554 /// At this point we don't know if llvm.memmove will be inlined or not.
2555 /// If we don't instrument it and it gets inlined,
2556 /// our interceptor will not kick in and we will lose the memmove.
2557 /// If we instrument the call here, but it does not get inlined,
2558 /// we will memmove the shadow twice, which is bad in case
2559 /// of overlapping regions. So, we simply lower the intrinsic to a call.
2560 ///
2561 /// A similar situation exists for memcpy and memset.
2562 void visitMemMoveInst(MemMoveInst &I) {
2563 IRBuilder<> IRB(&I);
2564 IRB.CreateCall(
2565 MS.MemmoveFn,
2566 {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
2567 IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
2568 IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
2569 I.eraseFromParent();
2570 }
2571
2572 // Similar to memmove: avoid copying shadow twice.
2573 // This is somewhat unfortunate as it may slow down small constant memcpys.
2574 // FIXME: consider doing manual inline for small constant sizes and proper
2575 // alignment.
2576 void visitMemCpyInst(MemCpyInst &I) {
2577 IRBuilder<> IRB(&I);
2578 IRB.CreateCall(
2579 MS.MemcpyFn,
2580 {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
2581 IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
2582 IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
2583 I.eraseFromParent();
2584 }
2585
2586 // Same as memcpy.
2587 void visitMemSetInst(MemSetInst &I) {
2588 IRBuilder<> IRB(&I);
2589 IRB.CreateCall(
2590 MS.MemsetFn,
2591 {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
2592 IRB.CreateIntCast(I.getArgOperand(1), IRB.getInt32Ty(), false),
2593 IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
2594 I.eraseFromParent();
2595 }
2596
2597 void visitVAStartInst(VAStartInst &I) {
2598 VAHelper->visitVAStartInst(I);
2599 }
2600
2601 void visitVACopyInst(VACopyInst &I) {
2602 VAHelper->visitVACopyInst(I);
2603 }
2604
2605 /// Handle vector store-like intrinsics.
2606 ///
2607 /// Instrument intrinsics that look like a simple SIMD store: writes memory,
2608 /// has 1 pointer argument and 1 vector argument, returns void.
2609 bool handleVectorStoreIntrinsic(IntrinsicInst &I) {
2610 IRBuilder<> IRB(&I);
2611 Value* Addr = I.getArgOperand(0);
2612 Value *Shadow = getShadow(&I, 1);
2613 Value *ShadowPtr, *OriginPtr;
2614
2615 // We don't know the pointer alignment (could be unaligned SSE store!).
2616 // We have to assume the worst case.
2617 std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
2618 Addr, IRB, Shadow->getType(), Align(1), /*isStore*/ true);
2619 IRB.CreateAlignedStore(Shadow, ShadowPtr, Align(1));
2620
2621 if (ClCheckAccessAddress)
2622 insertShadowCheck(Addr, &I);
2623
2624 // FIXME: factor out common code from materializeStores
2625 if (MS.TrackOrigins) IRB.CreateStore(getOrigin(&I, 1), OriginPtr);
2626 return true;
2627 }
2628
2629 /// Handle vector load-like intrinsics.
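/// E.g. (illustrative) a hypothetical
/// '%v = call <4 x float> @llvm.x86.some.load(i8* %p)'.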
2630 ///
2631 /// Instrument intrinsics that look like a simple SIMD load: reads memory,
2632 /// has 1 pointer argument, returns a vector.
2633 bool handleVectorLoadIntrinsic(IntrinsicInst &I) {
2634 IRBuilder<> IRB(&I);
2635 Value *Addr = I.getArgOperand(0);
2636
2637 Type *ShadowTy = getShadowTy(&I);
2638 Value *ShadowPtr = nullptr, *OriginPtr = nullptr;
2639 if (PropagateShadow) {
2640 // We don't know the pointer alignment (could be unaligned SSE load!).
2641 // We have to assume the worst case.
2642 const Align Alignment = Align(1);
2643 std::tie(ShadowPtr, OriginPtr) =
2644 getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
2645 setShadow(&I,
2646 IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld"));
2647 } else {
2648 setShadow(&I, getCleanShadow(&I));
2649 }
2650
2651 if (ClCheckAccessAddress)
2652 insertShadowCheck(Addr, &I);
2653
2654 if (MS.TrackOrigins) {
2655 if (PropagateShadow)
2656 setOrigin(&I, IRB.CreateLoad(MS.OriginTy, OriginPtr));
2657 else
2658 setOrigin(&I, getCleanOrigin());
2659 }
2660 return true;
2661 }
2662
2663 /// Handle (SIMD arithmetic)-like intrinsics.
2664 ///
2665 /// Instrument intrinsics with any number of arguments of the same type,
2666 /// equal to the return type. The type should be simple (no aggregates or
2667 /// pointers; vectors are fine).
2668 /// Caller guarantees that this intrinsic does not access memory.
2669 bool maybeHandleSimpleNomemIntrinsic(IntrinsicInst &I) {
2670 Type *RetTy = I.getType();
2671 if (!(RetTy->isIntOrIntVectorTy() ||
2672 RetTy->isFPOrFPVectorTy() ||
2673 RetTy->isX86_MMXTy()))
2674 return false;
2675
2676 unsigned NumArgOperands = I.arg_size();
2677 for (unsigned i = 0; i < NumArgOperands; ++i) {
2678 Type *Ty = I.getArgOperand(i)->getType();
2679 if (Ty != RetTy)
2680 return false;
2681 }
2682
2683 IRBuilder<> IRB(&I);
2684 ShadowAndOriginCombiner SC(this, IRB);
2685 for (unsigned i = 0; i < NumArgOperands; ++i)
2686 SC.Add(I.getArgOperand(i));
2687 SC.Done(&I);
2688
2689 return true;
2690 }
2691
2692 /// Heuristically instrument unknown intrinsics.
2693 ///
2694 /// The main purpose of this code is to do something reasonable with all
2695 /// random intrinsics we might encounter, most importantly SIMD intrinsics.
2696 /// We recognize several classes of intrinsics by their argument types and
2697 /// ModRefBehaviour and apply special instrumentation when we are reasonably
2698 /// sure that we know what the intrinsic does.
2699 ///
2700 /// We special-case intrinsics where this approach fails. See llvm.bswap
2701 /// handling as an example of that.
2702 bool handleUnknownIntrinsic(IntrinsicInst &I) {
2703 unsigned NumArgOperands = I.arg_size();
2704 if (NumArgOperands == 0)
2705 return false;
2706
2707 if (NumArgOperands == 2 &&
2708 I.getArgOperand(0)->getType()->isPointerTy() &&
2709 I.getArgOperand(1)->getType()->isVectorTy() &&
2710 I.getType()->isVoidTy() &&
2711 !I.onlyReadsMemory()) {
2712 // This looks like a vector store.
2713 return handleVectorStoreIntrinsic(I);
2714 }
2715
2716 if (NumArgOperands == 1 &&
2717 I.getArgOperand(0)->getType()->isPointerTy() &&
2718 I.getType()->isVectorTy() &&
2719 I.onlyReadsMemory()) {
2720 // This looks like a vector load.
2721 return handleVectorLoadIntrinsic(I);
2722 }
2723
2724 if (I.doesNotAccessMemory())
2725 if (maybeHandleSimpleNomemIntrinsic(I))
2726 return true;
2727
2728 // FIXME: detect and handle SSE maskstore/maskload
2729 return false;
2730 }
2731
2732 void handleInvariantGroup(IntrinsicInst &I) {
2733 setShadow(&I, getShadow(&I, 0));
2734 setOrigin(&I, getOrigin(&I, 0));
2735 }
2736
2737 void handleLifetimeStart(IntrinsicInst &I) {
2738 if (!PoisonStack)
2739 return;
2740 AllocaInst *AI = llvm::findAllocaForValue(I.getArgOperand(1));
2741 if (!AI)
2742 InstrumentLifetimeStart = false;
2743 LifetimeStartList.push_back(std::make_pair(&I, AI));
2744 }
2745
2746 void handleBswap(IntrinsicInst &I) {
2747 IRBuilder<> IRB(&I);
2748 Value *Op = I.getArgOperand(0);
2749 Type *OpType = Op->getType();
2750 Function *BswapFunc = Intrinsic::getDeclaration(
2751 F.getParent(), Intrinsic::bswap, makeArrayRef(&OpType, 1));
2752 setShadow(&I, IRB.CreateCall(BswapFunc, getShadow(Op)));
2753 setOrigin(&I, getOrigin(Op));
2754 }
2755
2756 // Instrument vector convert intrinsic.
2757 //
2758 // This function instruments intrinsics like cvtsi2ss:
2759 // %Out = int_xxx_cvtyyy(%ConvertOp)
2760 // or
2761 // %Out = int_xxx_cvtyyy(%CopyOp, %ConvertOp)
2762 // The intrinsic converts \p NumUsedElements elements of \p ConvertOp to the
2763 // same number of \p Out elements, and (if it has 2 arguments) copies the
2764 // rest of the elements from \p CopyOp.
2765 // In most cases the conversion involves a floating-point value, which may
2766 // trigger a hardware exception when not fully initialized. For this reason
2767 // we require \p ConvertOp[0:NumUsedElements] to be fully initialized and trap otherwise.
2768 // We copy the shadow of \p CopyOp[NumUsedElements:] to \p
2769 // Out[NumUsedElements:]. This means that intrinsics without \p CopyOp always
2770 // return a fully initialized value.
2771 void handleVectorConvertIntrinsic(IntrinsicInst &I, int NumUsedElements,
2772 bool HasRoundingMode = false) {
2773 IRBuilder<> IRB(&I);
2774 Value *CopyOp, *ConvertOp;
2775
2776 assert((!HasRoundingMode ||
2777 isa<ConstantInt>(I.getArgOperand(I.arg_size() - 1))) &&
2778 "Invalid rounding mode");
2779
2780 switch (I.arg_size() - HasRoundingMode) {
2781 case 2:
2782 CopyOp = I.getArgOperand(0);
2783 ConvertOp = I.getArgOperand(1);
2784 break;
2785 case 1:
2786 ConvertOp = I.getArgOperand(0);
2787 CopyOp = nullptr;
2788 break;
2789 default:
2790 llvm_unreachable("Cvt intrinsic with unsupported number of arguments.");
2791 }
2792
2793 // The first *NumUsedElements* elements of ConvertOp are converted to the
2794 // same number of output elements. The rest of the output is copied from
2795 // CopyOp, or (if not available) filled with zeroes.
2796 // Combine shadow for elements of ConvertOp that are used in this operation,
2797 // and insert a check.
2798 // FIXME: consider propagating shadow of ConvertOp, at least in the case of
2799 // int->any conversion.
2800 Value *ConvertShadow = getShadow(ConvertOp); 2801 Value *AggShadow = nullptr; 2802 if (ConvertOp->getType()->isVectorTy()) { 2803 AggShadow = IRB.CreateExtractElement( 2804 ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), 0)); 2805 for (int i = 1; i < NumUsedElements; ++i) { 2806 Value *MoreShadow = IRB.CreateExtractElement( 2807 ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), i)); 2808 AggShadow = IRB.CreateOr(AggShadow, MoreShadow); 2809 } 2810 } else { 2811 AggShadow = ConvertShadow; 2812 } 2813 assert(AggShadow->getType()->isIntegerTy()); 2814 insertShadowCheck(AggShadow, getOrigin(ConvertOp), &I); 2815 2816 // Build result shadow by zero-filling parts of CopyOp shadow that come from 2817 // ConvertOp. 2818 if (CopyOp) { 2819 assert(CopyOp->getType() == I.getType()); 2820 assert(CopyOp->getType()->isVectorTy()); 2821 Value *ResultShadow = getShadow(CopyOp); 2822 Type *EltTy = cast<VectorType>(ResultShadow->getType())->getElementType(); 2823 for (int i = 0; i < NumUsedElements; ++i) { 2824 ResultShadow = IRB.CreateInsertElement( 2825 ResultShadow, ConstantInt::getNullValue(EltTy), 2826 ConstantInt::get(IRB.getInt32Ty(), i)); 2827 } 2828 setShadow(&I, ResultShadow); 2829 setOrigin(&I, getOrigin(CopyOp)); 2830 } else { 2831 setShadow(&I, getCleanShadow(&I)); 2832 setOrigin(&I, getCleanOrigin()); 2833 } 2834 } 2835 2836 // Given a scalar or vector, extract lower 64 bits (or less), and return all 2837 // zeroes if it is zero, and all ones otherwise. 2838 Value *Lower64ShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) { 2839 if (S->getType()->isVectorTy()) 2840 S = CreateShadowCast(IRB, S, IRB.getInt64Ty(), /* Signed */ true); 2841 assert(S->getType()->getPrimitiveSizeInBits() <= 64); 2842 Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S)); 2843 return CreateShadowCast(IRB, S2, T, /* Signed */ true); 2844 } 2845 2846 // Given a vector, extract its first element, and return all 2847 // zeroes if it is zero, and all ones otherwise. 2848 Value *LowerElementShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) { 2849 Value *S1 = IRB.CreateExtractElement(S, (uint64_t)0); 2850 Value *S2 = IRB.CreateICmpNE(S1, getCleanShadow(S1)); 2851 return CreateShadowCast(IRB, S2, T, /* Signed */ true); 2852 } 2853 2854 Value *VariableShadowExtend(IRBuilder<> &IRB, Value *S) { 2855 Type *T = S->getType(); 2856 assert(T->isVectorTy()); 2857 Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S)); 2858 return IRB.CreateSExt(S2, T); 2859 } 2860 2861 // Instrument vector shift intrinsic. 2862 // 2863 // This function instruments intrinsics like int_x86_avx2_psll_w. 2864 // Intrinsic shifts %In by %ShiftSize bits. 2865 // %ShiftSize may be a vector. In that case the lower 64 bits determine shift 2866 // size, and the rest is ignored. Behavior is defined even if shift size is 2867 // greater than register (or field) width. 2868 void handleVectorShiftIntrinsic(IntrinsicInst &I, bool Variable) { 2869 assert(I.arg_size() == 2); 2870 IRBuilder<> IRB(&I); 2871 // If any of the S2 bits are poisoned, the whole thing is poisoned. 2872 // Otherwise perform the same shift on S1. 2873 Value *S1 = getShadow(&I, 0); 2874 Value *S2 = getShadow(&I, 1); 2875 Value *S2Conv = Variable ? 
VariableShadowExtend(IRB, S2) 2876 : Lower64ShadowExtend(IRB, S2, getShadowTy(&I)); 2877 Value *V1 = I.getOperand(0); 2878 Value *V2 = I.getOperand(1); 2879 Value *Shift = IRB.CreateCall(I.getFunctionType(), I.getCalledOperand(), 2880 {IRB.CreateBitCast(S1, V1->getType()), V2}); 2881 Shift = IRB.CreateBitCast(Shift, getShadowTy(&I)); 2882 setShadow(&I, IRB.CreateOr(Shift, S2Conv)); 2883 setOriginForNaryOp(I); 2884 } 2885 2886 // Get an X86_MMX-sized vector type. 2887 Type *getMMXVectorTy(unsigned EltSizeInBits) { 2888 const unsigned X86_MMXSizeInBits = 64; 2889 assert(EltSizeInBits != 0 && (X86_MMXSizeInBits % EltSizeInBits) == 0 && 2890 "Illegal MMX vector element size"); 2891 return FixedVectorType::get(IntegerType::get(*MS.C, EltSizeInBits), 2892 X86_MMXSizeInBits / EltSizeInBits); 2893 } 2894 2895 // Returns a signed counterpart for an (un)signed-saturate-and-pack 2896 // intrinsic. 2897 Intrinsic::ID getSignedPackIntrinsic(Intrinsic::ID id) { 2898 switch (id) { 2899 case Intrinsic::x86_sse2_packsswb_128: 2900 case Intrinsic::x86_sse2_packuswb_128: 2901 return Intrinsic::x86_sse2_packsswb_128; 2902 2903 case Intrinsic::x86_sse2_packssdw_128: 2904 case Intrinsic::x86_sse41_packusdw: 2905 return Intrinsic::x86_sse2_packssdw_128; 2906 2907 case Intrinsic::x86_avx2_packsswb: 2908 case Intrinsic::x86_avx2_packuswb: 2909 return Intrinsic::x86_avx2_packsswb; 2910 2911 case Intrinsic::x86_avx2_packssdw: 2912 case Intrinsic::x86_avx2_packusdw: 2913 return Intrinsic::x86_avx2_packssdw; 2914 2915 case Intrinsic::x86_mmx_packsswb: 2916 case Intrinsic::x86_mmx_packuswb: 2917 return Intrinsic::x86_mmx_packsswb; 2918 2919 case Intrinsic::x86_mmx_packssdw: 2920 return Intrinsic::x86_mmx_packssdw; 2921 default: 2922 llvm_unreachable("unexpected intrinsic id"); 2923 } 2924 } 2925 2926 // Instrument vector pack intrinsic. 2927 // 2928 // This function instruments intrinsics like x86_mmx_packsswb, that 2929 // packs elements of 2 input vectors into half as many bits with saturation. 2930 // Shadow is propagated with the signed variant of the same intrinsic applied 2931 // to sext(Sa != zeroinitializer), sext(Sb != zeroinitializer). 2932 // EltSizeInBits is used only for x86mmx arguments. 2933 void handleVectorPackIntrinsic(IntrinsicInst &I, unsigned EltSizeInBits = 0) { 2934 assert(I.arg_size() == 2); 2935 bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy(); 2936 IRBuilder<> IRB(&I); 2937 Value *S1 = getShadow(&I, 0); 2938 Value *S2 = getShadow(&I, 1); 2939 assert(isX86_MMX || S1->getType()->isVectorTy()); 2940 2941 // SExt and ICmpNE below must apply to individual elements of input vectors. 2942 // In case of x86mmx arguments, cast them to appropriate vector types and 2943 // back. 2944 Type *T = isX86_MMX ? 
getMMXVectorTy(EltSizeInBits) : S1->getType(); 2945 if (isX86_MMX) { 2946 S1 = IRB.CreateBitCast(S1, T); 2947 S2 = IRB.CreateBitCast(S2, T); 2948 } 2949 Value *S1_ext = IRB.CreateSExt( 2950 IRB.CreateICmpNE(S1, Constant::getNullValue(T)), T); 2951 Value *S2_ext = IRB.CreateSExt( 2952 IRB.CreateICmpNE(S2, Constant::getNullValue(T)), T); 2953 if (isX86_MMX) { 2954 Type *X86_MMXTy = Type::getX86_MMXTy(*MS.C); 2955 S1_ext = IRB.CreateBitCast(S1_ext, X86_MMXTy); 2956 S2_ext = IRB.CreateBitCast(S2_ext, X86_MMXTy); 2957 } 2958 2959 Function *ShadowFn = Intrinsic::getDeclaration( 2960 F.getParent(), getSignedPackIntrinsic(I.getIntrinsicID())); 2961 2962 Value *S = 2963 IRB.CreateCall(ShadowFn, {S1_ext, S2_ext}, "_msprop_vector_pack"); 2964 if (isX86_MMX) S = IRB.CreateBitCast(S, getShadowTy(&I)); 2965 setShadow(&I, S); 2966 setOriginForNaryOp(I); 2967 } 2968 2969 // Instrument sum-of-absolute-differences intrinsic. 2970 void handleVectorSadIntrinsic(IntrinsicInst &I) { 2971 const unsigned SignificantBitsPerResultElement = 16; 2972 bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy(); 2973 Type *ResTy = isX86_MMX ? IntegerType::get(*MS.C, 64) : I.getType(); 2974 unsigned ZeroBitsPerResultElement = 2975 ResTy->getScalarSizeInBits() - SignificantBitsPerResultElement; 2976 2977 IRBuilder<> IRB(&I); 2978 Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1)); 2979 S = IRB.CreateBitCast(S, ResTy); 2980 S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)), 2981 ResTy); 2982 S = IRB.CreateLShr(S, ZeroBitsPerResultElement); 2983 S = IRB.CreateBitCast(S, getShadowTy(&I)); 2984 setShadow(&I, S); 2985 setOriginForNaryOp(I); 2986 } 2987 2988 // Instrument multiply-add intrinsic. 2989 void handleVectorPmaddIntrinsic(IntrinsicInst &I, 2990 unsigned EltSizeInBits = 0) { 2991 bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy(); 2992 Type *ResTy = isX86_MMX ? getMMXVectorTy(EltSizeInBits * 2) : I.getType(); 2993 IRBuilder<> IRB(&I); 2994 Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1)); 2995 S = IRB.CreateBitCast(S, ResTy); 2996 S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)), 2997 ResTy); 2998 S = IRB.CreateBitCast(S, getShadowTy(&I)); 2999 setShadow(&I, S); 3000 setOriginForNaryOp(I); 3001 } 3002 3003 // Instrument compare-packed intrinsic. 3004 // Basically, an or followed by sext(icmp ne 0) to end up with all-zeros or 3005 // all-ones shadow. 3006 void handleVectorComparePackedIntrinsic(IntrinsicInst &I) { 3007 IRBuilder<> IRB(&I); 3008 Type *ResTy = getShadowTy(&I); 3009 Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1)); 3010 Value *S = IRB.CreateSExt( 3011 IRB.CreateICmpNE(S0, Constant::getNullValue(ResTy)), ResTy); 3012 setShadow(&I, S); 3013 setOriginForNaryOp(I); 3014 } 3015 3016 // Instrument compare-scalar intrinsic. 3017 // This handles both cmp* intrinsics which return the result in the first 3018 // element of a vector, and comi* which return the result as i32. 3019 void handleVectorCompareScalarIntrinsic(IntrinsicInst &I) { 3020 IRBuilder<> IRB(&I); 3021 Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1)); 3022 Value *S = LowerElementShadowExtend(IRB, S0, getShadowTy(&I)); 3023 setShadow(&I, S); 3024 setOriginForNaryOp(I); 3025 } 3026 3027 // Instrument generic vector reduction intrinsics 3028 // by ORing together all their fields. 
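// E.g. (illustrative) for an integer add reduction of <4 x i32> %v, the
// result shadow is the OR of all four lanes of the shadow of %v.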
3029 void handleVectorReduceIntrinsic(IntrinsicInst &I) {
3030 IRBuilder<> IRB(&I);
3031 Value *S = IRB.CreateOrReduce(getShadow(&I, 0));
3032 setShadow(&I, S);
3033 setOrigin(&I, getOrigin(&I, 0));
3034 }
3035
3036 // Instrument vector.reduce.or intrinsic.
3037 // Valid (non-poisoned) set bits in the operand pull low the
3038 // corresponding shadow bits.
3039 void handleVectorReduceOrIntrinsic(IntrinsicInst &I) {
3040 IRBuilder<> IRB(&I);
3041 Value *OperandShadow = getShadow(&I, 0);
3042 Value *OperandUnsetBits = IRB.CreateNot(I.getOperand(0));
3043 Value *OperandUnsetOrPoison = IRB.CreateOr(OperandUnsetBits, OperandShadow);
3044 // Bit N is clean if any field's bit N is 1 and unpoisoned.
3045 Value *OutShadowMask = IRB.CreateAndReduce(OperandUnsetOrPoison);
3046 // Otherwise, it is clean if every field's bit N is unpoisoned.
3047 Value *OrShadow = IRB.CreateOrReduce(OperandShadow);
3048 Value *S = IRB.CreateAnd(OutShadowMask, OrShadow);
3049
3050 setShadow(&I, S);
3051 setOrigin(&I, getOrigin(&I, 0));
3052 }
3053
3054 // Instrument vector.reduce.and intrinsic.
3055 // Valid (non-poisoned) unset bits in the operand pull down the
3056 // corresponding shadow bits.
3057 void handleVectorReduceAndIntrinsic(IntrinsicInst &I) {
3058 IRBuilder<> IRB(&I);
3059 Value *OperandShadow = getShadow(&I, 0);
3060 Value *OperandSetOrPoison = IRB.CreateOr(I.getOperand(0), OperandShadow);
3061 // Bit N is clean if any field's bit N is 0 and unpoisoned.
3062 Value *OutShadowMask = IRB.CreateAndReduce(OperandSetOrPoison);
3063 // Otherwise, it is clean if every field's bit N is unpoisoned.
3064 Value *OrShadow = IRB.CreateOrReduce(OperandShadow);
3065 Value *S = IRB.CreateAnd(OutShadowMask, OrShadow);
3066
3067 setShadow(&I, S);
3068 setOrigin(&I, getOrigin(&I, 0));
3069 }
3070
3071 void handleStmxcsr(IntrinsicInst &I) {
3072 IRBuilder<> IRB(&I);
3073 Value* Addr = I.getArgOperand(0);
3074 Type *Ty = IRB.getInt32Ty();
3075 Value *ShadowPtr =
3076 getShadowOriginPtr(Addr, IRB, Ty, Align(1), /*isStore*/ true).first;
3077
3078 IRB.CreateStore(getCleanShadow(Ty),
3079 IRB.CreatePointerCast(ShadowPtr, Ty->getPointerTo()));
3080
3081 if (ClCheckAccessAddress)
3082 insertShadowCheck(Addr, &I);
3083 }
3084
3085 void handleLdmxcsr(IntrinsicInst &I) {
3086 if (!InsertChecks) return;
3087
3088 IRBuilder<> IRB(&I);
3089 Value *Addr = I.getArgOperand(0);
3090 Type *Ty = IRB.getInt32Ty();
3091 const Align Alignment = Align(1);
3092 Value *ShadowPtr, *OriginPtr;
3093 std::tie(ShadowPtr, OriginPtr) =
3094 getShadowOriginPtr(Addr, IRB, Ty, Alignment, /*isStore*/ false);
3095
3096 if (ClCheckAccessAddress)
3097 insertShadowCheck(Addr, &I);
3098
3099 Value *Shadow = IRB.CreateAlignedLoad(Ty, ShadowPtr, Alignment, "_ldmxcsr");
3100 Value *Origin = MS.TrackOrigins ? IRB.CreateLoad(MS.OriginTy, OriginPtr)
3101 : getCleanOrigin();
3102 insertShadowCheck(Shadow, Origin, &I);
3103 }
3104
3105 void handleMaskedStore(IntrinsicInst &I) {
3106 IRBuilder<> IRB(&I);
3107 Value *V = I.getArgOperand(0);
3108 Value *Addr = I.getArgOperand(1);
3109 const Align Alignment(
3110 cast<ConstantInt>(I.getArgOperand(2))->getZExtValue());
3111 Value *Mask = I.getArgOperand(3);
3112 Value *Shadow = getShadow(V);
3113
3114 Value *ShadowPtr;
3115 Value *OriginPtr;
3116 std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
3117 Addr, IRB, Shadow->getType(), Alignment, /*isStore*/ true);
3118
3119 if (ClCheckAccessAddress) {
3120 insertShadowCheck(Addr, &I);
3121 // An uninitialized mask is kind of like an uninitialized address, but
3122 // not as scary.
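// If the mask is poisoned we don't know which lanes were actually written,
// so any shadow we store for the destination is suspect; flagging the mask
// itself here seems like the right compromise.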
3123 insertShadowCheck(Mask, &I); 3124 } 3125 3126 IRB.CreateMaskedStore(Shadow, ShadowPtr, Alignment, Mask); 3127 3128 if (MS.TrackOrigins) { 3129 auto &DL = F.getParent()->getDataLayout(); 3130 paintOrigin(IRB, getOrigin(V), OriginPtr, 3131 DL.getTypeStoreSize(Shadow->getType()), 3132 std::max(Alignment, kMinOriginAlignment)); 3133 } 3134 } 3135 3136 bool handleMaskedLoad(IntrinsicInst &I) { 3137 IRBuilder<> IRB(&I); 3138 Value *Addr = I.getArgOperand(0); 3139 const Align Alignment( 3140 cast<ConstantInt>(I.getArgOperand(1))->getZExtValue()); 3141 Value *Mask = I.getArgOperand(2); 3142 Value *PassThru = I.getArgOperand(3); 3143 3144 Type *ShadowTy = getShadowTy(&I); 3145 Value *ShadowPtr, *OriginPtr; 3146 if (PropagateShadow) { 3147 std::tie(ShadowPtr, OriginPtr) = 3148 getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false); 3149 setShadow(&I, IRB.CreateMaskedLoad(ShadowTy, ShadowPtr, Alignment, Mask, 3150 getShadow(PassThru), "_msmaskedld")); 3151 } else { 3152 setShadow(&I, getCleanShadow(&I)); 3153 } 3154 3155 if (ClCheckAccessAddress) { 3156 insertShadowCheck(Addr, &I); 3157 insertShadowCheck(Mask, &I); 3158 } 3159 3160 if (MS.TrackOrigins) { 3161 if (PropagateShadow) { 3162 // Choose between PassThru's and the loaded value's origins. 3163 Value *MaskedPassThruShadow = IRB.CreateAnd( 3164 getShadow(PassThru), IRB.CreateSExt(IRB.CreateNeg(Mask), ShadowTy)); 3165 3166 Value *Acc = IRB.CreateExtractElement( 3167 MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), 0)); 3168 for (int i = 1, N = cast<FixedVectorType>(PassThru->getType()) 3169 ->getNumElements(); 3170 i < N; ++i) { 3171 Value *More = IRB.CreateExtractElement( 3172 MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), i)); 3173 Acc = IRB.CreateOr(Acc, More); 3174 } 3175 3176 Value *Origin = IRB.CreateSelect( 3177 IRB.CreateICmpNE(Acc, Constant::getNullValue(Acc->getType())), 3178 getOrigin(PassThru), IRB.CreateLoad(MS.OriginTy, OriginPtr)); 3179 3180 setOrigin(&I, Origin); 3181 } else { 3182 setOrigin(&I, getCleanOrigin()); 3183 } 3184 } 3185 return true; 3186 } 3187 3188 // Instrument BMI / BMI2 intrinsics. 3189 // All of these intrinsics are Z = I(X, Y) 3190 // where the types of all operands and the result match, and are either i32 or i64. 3191 // The following instrumentation happens to work for all of them: 3192 // Sz = I(Sx, Y) | (sext (Sy != 0)) 3193 void handleBmiIntrinsic(IntrinsicInst &I) { 3194 IRBuilder<> IRB(&I); 3195 Type *ShadowTy = getShadowTy(&I); 3196 3197 // If any bit of the mask operand is poisoned, then the whole thing is. 3198 Value *SMask = getShadow(&I, 1); 3199 SMask = IRB.CreateSExt(IRB.CreateICmpNE(SMask, getCleanShadow(ShadowTy)), 3200 ShadowTy); 3201 // Apply the same intrinsic to the shadow of the first operand. 3202 Value *S = IRB.CreateCall(I.getCalledFunction(), 3203 {getShadow(&I, 0), I.getOperand(1)}); 3204 S = IRB.CreateOr(SMask, S); 3205 setShadow(&I, S); 3206 setOriginForNaryOp(I); 3207 } 3208 3209 SmallVector<int, 8> getPclmulMask(unsigned Width, bool OddElements) { 3210 SmallVector<int, 8> Mask; 3211 for (unsigned X = OddElements ? 1 : 0; X < Width; X += 2) { 3212 Mask.append(2, X); 3213 } 3214 return Mask; 3215 } 3216 3217 // Instrument pclmul intrinsics. 3218 // These intrinsics operate either on odd or on even elements of the input 3219 // vectors, depending on the constant in the 3rd argument, ignoring the rest. 
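// (For pclmulqdq, bit 0 of the immediate picks the low or high 64-bit half
// of the first operand, and bit 4 does the same for the second one; that is
// why the code below tests Imm & 0x01 and Imm & 0x10.)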
3220 // Replace the unused elements with copies of the used ones, ex: 3221 // (0, 1, 2, 3) -> (0, 0, 2, 2) (even case) 3222 // or 3223 // (0, 1, 2, 3) -> (1, 1, 3, 3) (odd case) 3224 // and then apply the usual shadow combining logic. 3225 void handlePclmulIntrinsic(IntrinsicInst &I) { 3226 IRBuilder<> IRB(&I); 3227 unsigned Width = 3228 cast<FixedVectorType>(I.getArgOperand(0)->getType())->getNumElements(); 3229 assert(isa<ConstantInt>(I.getArgOperand(2)) && 3230 "pclmul 3rd operand must be a constant"); 3231 unsigned Imm = cast<ConstantInt>(I.getArgOperand(2))->getZExtValue(); 3232 Value *Shuf0 = IRB.CreateShuffleVector(getShadow(&I, 0), 3233 getPclmulMask(Width, Imm & 0x01)); 3234 Value *Shuf1 = IRB.CreateShuffleVector(getShadow(&I, 1), 3235 getPclmulMask(Width, Imm & 0x10)); 3236 ShadowAndOriginCombiner SOC(this, IRB); 3237 SOC.Add(Shuf0, getOrigin(&I, 0)); 3238 SOC.Add(Shuf1, getOrigin(&I, 1)); 3239 SOC.Done(&I); 3240 } 3241 3242 // Instrument _mm_*_sd intrinsics 3243 void handleUnarySdIntrinsic(IntrinsicInst &I) { 3244 IRBuilder<> IRB(&I); 3245 Value *First = getShadow(&I, 0); 3246 Value *Second = getShadow(&I, 1); 3247 // High word of first operand, low word of second 3248 Value *Shadow = 3249 IRB.CreateShuffleVector(First, Second, llvm::makeArrayRef<int>({2, 1})); 3250 3251 setShadow(&I, Shadow); 3252 setOriginForNaryOp(I); 3253 } 3254 3255 void handleBinarySdIntrinsic(IntrinsicInst &I) { 3256 IRBuilder<> IRB(&I); 3257 Value *First = getShadow(&I, 0); 3258 Value *Second = getShadow(&I, 1); 3259 Value *OrShadow = IRB.CreateOr(First, Second); 3260 // High word of first operand, low word of both OR'd together 3261 Value *Shadow = IRB.CreateShuffleVector(First, OrShadow, 3262 llvm::makeArrayRef<int>({2, 1})); 3263 3264 setShadow(&I, Shadow); 3265 setOriginForNaryOp(I); 3266 } 3267 3268 // Instrument abs intrinsic. 3269 // handleUnknownIntrinsic can't handle it because of the last 3270 // is_int_min_poison argument which does not match the result type. 3271 void handleAbsIntrinsic(IntrinsicInst &I) { 3272 assert(I.getType()->isIntOrIntVectorTy()); 3273 assert(I.getArgOperand(0)->getType() == I.getType()); 3274 3275 // FIXME: Handle is_int_min_poison. 
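// Ignoring that flag, |x| is fully initialized exactly when x is, so
// forwarding the argument shadow and origin unchanged is precise here.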
3276 IRBuilder<> IRB(&I); 3277 setShadow(&I, getShadow(&I, 0)); 3278 setOrigin(&I, getOrigin(&I, 0)); 3279 } 3280 3281 void visitIntrinsicInst(IntrinsicInst &I) { 3282 switch (I.getIntrinsicID()) { 3283 case Intrinsic::abs: 3284 handleAbsIntrinsic(I); 3285 break; 3286 case Intrinsic::lifetime_start: 3287 handleLifetimeStart(I); 3288 break; 3289 case Intrinsic::launder_invariant_group: 3290 case Intrinsic::strip_invariant_group: 3291 handleInvariantGroup(I); 3292 break; 3293 case Intrinsic::bswap: 3294 handleBswap(I); 3295 break; 3296 case Intrinsic::masked_store: 3297 handleMaskedStore(I); 3298 break; 3299 case Intrinsic::masked_load: 3300 handleMaskedLoad(I); 3301 break; 3302 case Intrinsic::vector_reduce_and: 3303 handleVectorReduceAndIntrinsic(I); 3304 break; 3305 case Intrinsic::vector_reduce_or: 3306 handleVectorReduceOrIntrinsic(I); 3307 break; 3308 case Intrinsic::vector_reduce_add: 3309 case Intrinsic::vector_reduce_xor: 3310 case Intrinsic::vector_reduce_mul: 3311 handleVectorReduceIntrinsic(I); 3312 break; 3313 case Intrinsic::x86_sse_stmxcsr: 3314 handleStmxcsr(I); 3315 break; 3316 case Intrinsic::x86_sse_ldmxcsr: 3317 handleLdmxcsr(I); 3318 break; 3319 case Intrinsic::x86_avx512_vcvtsd2usi64: 3320 case Intrinsic::x86_avx512_vcvtsd2usi32: 3321 case Intrinsic::x86_avx512_vcvtss2usi64: 3322 case Intrinsic::x86_avx512_vcvtss2usi32: 3323 case Intrinsic::x86_avx512_cvttss2usi64: 3324 case Intrinsic::x86_avx512_cvttss2usi: 3325 case Intrinsic::x86_avx512_cvttsd2usi64: 3326 case Intrinsic::x86_avx512_cvttsd2usi: 3327 case Intrinsic::x86_avx512_cvtusi2ss: 3328 case Intrinsic::x86_avx512_cvtusi642sd: 3329 case Intrinsic::x86_avx512_cvtusi642ss: 3330 handleVectorConvertIntrinsic(I, 1, true); 3331 break; 3332 case Intrinsic::x86_sse2_cvtsd2si64: 3333 case Intrinsic::x86_sse2_cvtsd2si: 3334 case Intrinsic::x86_sse2_cvtsd2ss: 3335 case Intrinsic::x86_sse2_cvttsd2si64: 3336 case Intrinsic::x86_sse2_cvttsd2si: 3337 case Intrinsic::x86_sse_cvtss2si64: 3338 case Intrinsic::x86_sse_cvtss2si: 3339 case Intrinsic::x86_sse_cvttss2si64: 3340 case Intrinsic::x86_sse_cvttss2si: 3341 handleVectorConvertIntrinsic(I, 1); 3342 break; 3343 case Intrinsic::x86_sse_cvtps2pi: 3344 case Intrinsic::x86_sse_cvttps2pi: 3345 handleVectorConvertIntrinsic(I, 2); 3346 break; 3347 3348 case Intrinsic::x86_avx512_psll_w_512: 3349 case Intrinsic::x86_avx512_psll_d_512: 3350 case Intrinsic::x86_avx512_psll_q_512: 3351 case Intrinsic::x86_avx512_pslli_w_512: 3352 case Intrinsic::x86_avx512_pslli_d_512: 3353 case Intrinsic::x86_avx512_pslli_q_512: 3354 case Intrinsic::x86_avx512_psrl_w_512: 3355 case Intrinsic::x86_avx512_psrl_d_512: 3356 case Intrinsic::x86_avx512_psrl_q_512: 3357 case Intrinsic::x86_avx512_psra_w_512: 3358 case Intrinsic::x86_avx512_psra_d_512: 3359 case Intrinsic::x86_avx512_psra_q_512: 3360 case Intrinsic::x86_avx512_psrli_w_512: 3361 case Intrinsic::x86_avx512_psrli_d_512: 3362 case Intrinsic::x86_avx512_psrli_q_512: 3363 case Intrinsic::x86_avx512_psrai_w_512: 3364 case Intrinsic::x86_avx512_psrai_d_512: 3365 case Intrinsic::x86_avx512_psrai_q_512: 3366 case Intrinsic::x86_avx512_psra_q_256: 3367 case Intrinsic::x86_avx512_psra_q_128: 3368 case Intrinsic::x86_avx512_psrai_q_256: 3369 case Intrinsic::x86_avx512_psrai_q_128: 3370 case Intrinsic::x86_avx2_psll_w: 3371 case Intrinsic::x86_avx2_psll_d: 3372 case Intrinsic::x86_avx2_psll_q: 3373 case Intrinsic::x86_avx2_pslli_w: 3374 case Intrinsic::x86_avx2_pslli_d: 3375 case Intrinsic::x86_avx2_pslli_q: 3376 case Intrinsic::x86_avx2_psrl_w: 3377 case 
Intrinsic::x86_avx2_psrl_d: 3378 case Intrinsic::x86_avx2_psrl_q: 3379 case Intrinsic::x86_avx2_psra_w: 3380 case Intrinsic::x86_avx2_psra_d: 3381 case Intrinsic::x86_avx2_psrli_w: 3382 case Intrinsic::x86_avx2_psrli_d: 3383 case Intrinsic::x86_avx2_psrli_q: 3384 case Intrinsic::x86_avx2_psrai_w: 3385 case Intrinsic::x86_avx2_psrai_d: 3386 case Intrinsic::x86_sse2_psll_w: 3387 case Intrinsic::x86_sse2_psll_d: 3388 case Intrinsic::x86_sse2_psll_q: 3389 case Intrinsic::x86_sse2_pslli_w: 3390 case Intrinsic::x86_sse2_pslli_d: 3391 case Intrinsic::x86_sse2_pslli_q: 3392 case Intrinsic::x86_sse2_psrl_w: 3393 case Intrinsic::x86_sse2_psrl_d: 3394 case Intrinsic::x86_sse2_psrl_q: 3395 case Intrinsic::x86_sse2_psra_w: 3396 case Intrinsic::x86_sse2_psra_d: 3397 case Intrinsic::x86_sse2_psrli_w: 3398 case Intrinsic::x86_sse2_psrli_d: 3399 case Intrinsic::x86_sse2_psrli_q: 3400 case Intrinsic::x86_sse2_psrai_w: 3401 case Intrinsic::x86_sse2_psrai_d: 3402 case Intrinsic::x86_mmx_psll_w: 3403 case Intrinsic::x86_mmx_psll_d: 3404 case Intrinsic::x86_mmx_psll_q: 3405 case Intrinsic::x86_mmx_pslli_w: 3406 case Intrinsic::x86_mmx_pslli_d: 3407 case Intrinsic::x86_mmx_pslli_q: 3408 case Intrinsic::x86_mmx_psrl_w: 3409 case Intrinsic::x86_mmx_psrl_d: 3410 case Intrinsic::x86_mmx_psrl_q: 3411 case Intrinsic::x86_mmx_psra_w: 3412 case Intrinsic::x86_mmx_psra_d: 3413 case Intrinsic::x86_mmx_psrli_w: 3414 case Intrinsic::x86_mmx_psrli_d: 3415 case Intrinsic::x86_mmx_psrli_q: 3416 case Intrinsic::x86_mmx_psrai_w: 3417 case Intrinsic::x86_mmx_psrai_d: 3418 handleVectorShiftIntrinsic(I, /* Variable */ false); 3419 break; 3420 case Intrinsic::x86_avx2_psllv_d: 3421 case Intrinsic::x86_avx2_psllv_d_256: 3422 case Intrinsic::x86_avx512_psllv_d_512: 3423 case Intrinsic::x86_avx2_psllv_q: 3424 case Intrinsic::x86_avx2_psllv_q_256: 3425 case Intrinsic::x86_avx512_psllv_q_512: 3426 case Intrinsic::x86_avx2_psrlv_d: 3427 case Intrinsic::x86_avx2_psrlv_d_256: 3428 case Intrinsic::x86_avx512_psrlv_d_512: 3429 case Intrinsic::x86_avx2_psrlv_q: 3430 case Intrinsic::x86_avx2_psrlv_q_256: 3431 case Intrinsic::x86_avx512_psrlv_q_512: 3432 case Intrinsic::x86_avx2_psrav_d: 3433 case Intrinsic::x86_avx2_psrav_d_256: 3434 case Intrinsic::x86_avx512_psrav_d_512: 3435 case Intrinsic::x86_avx512_psrav_q_128: 3436 case Intrinsic::x86_avx512_psrav_q_256: 3437 case Intrinsic::x86_avx512_psrav_q_512: 3438 handleVectorShiftIntrinsic(I, /* Variable */ true); 3439 break; 3440 3441 case Intrinsic::x86_sse2_packsswb_128: 3442 case Intrinsic::x86_sse2_packssdw_128: 3443 case Intrinsic::x86_sse2_packuswb_128: 3444 case Intrinsic::x86_sse41_packusdw: 3445 case Intrinsic::x86_avx2_packsswb: 3446 case Intrinsic::x86_avx2_packssdw: 3447 case Intrinsic::x86_avx2_packuswb: 3448 case Intrinsic::x86_avx2_packusdw: 3449 handleVectorPackIntrinsic(I); 3450 break; 3451 3452 case Intrinsic::x86_mmx_packsswb: 3453 case Intrinsic::x86_mmx_packuswb: 3454 handleVectorPackIntrinsic(I, 16); 3455 break; 3456 3457 case Intrinsic::x86_mmx_packssdw: 3458 handleVectorPackIntrinsic(I, 32); 3459 break; 3460 3461 case Intrinsic::x86_mmx_psad_bw: 3462 case Intrinsic::x86_sse2_psad_bw: 3463 case Intrinsic::x86_avx2_psad_bw: 3464 handleVectorSadIntrinsic(I); 3465 break; 3466 3467 case Intrinsic::x86_sse2_pmadd_wd: 3468 case Intrinsic::x86_avx2_pmadd_wd: 3469 case Intrinsic::x86_ssse3_pmadd_ub_sw_128: 3470 case Intrinsic::x86_avx2_pmadd_ub_sw: 3471 handleVectorPmaddIntrinsic(I); 3472 break; 3473 3474 case Intrinsic::x86_ssse3_pmadd_ub_sw: 3475 handleVectorPmaddIntrinsic(I, 8); 
3476 break; 3477 3478 case Intrinsic::x86_mmx_pmadd_wd: 3479 handleVectorPmaddIntrinsic(I, 16); 3480 break; 3481 3482 case Intrinsic::x86_sse_cmp_ss: 3483 case Intrinsic::x86_sse2_cmp_sd: 3484 case Intrinsic::x86_sse_comieq_ss: 3485 case Intrinsic::x86_sse_comilt_ss: 3486 case Intrinsic::x86_sse_comile_ss: 3487 case Intrinsic::x86_sse_comigt_ss: 3488 case Intrinsic::x86_sse_comige_ss: 3489 case Intrinsic::x86_sse_comineq_ss: 3490 case Intrinsic::x86_sse_ucomieq_ss: 3491 case Intrinsic::x86_sse_ucomilt_ss: 3492 case Intrinsic::x86_sse_ucomile_ss: 3493 case Intrinsic::x86_sse_ucomigt_ss: 3494 case Intrinsic::x86_sse_ucomige_ss: 3495 case Intrinsic::x86_sse_ucomineq_ss: 3496 case Intrinsic::x86_sse2_comieq_sd: 3497 case Intrinsic::x86_sse2_comilt_sd: 3498 case Intrinsic::x86_sse2_comile_sd: 3499 case Intrinsic::x86_sse2_comigt_sd: 3500 case Intrinsic::x86_sse2_comige_sd: 3501 case Intrinsic::x86_sse2_comineq_sd: 3502 case Intrinsic::x86_sse2_ucomieq_sd: 3503 case Intrinsic::x86_sse2_ucomilt_sd: 3504 case Intrinsic::x86_sse2_ucomile_sd: 3505 case Intrinsic::x86_sse2_ucomigt_sd: 3506 case Intrinsic::x86_sse2_ucomige_sd: 3507 case Intrinsic::x86_sse2_ucomineq_sd: 3508 handleVectorCompareScalarIntrinsic(I); 3509 break; 3510 3511 case Intrinsic::x86_sse_cmp_ps: 3512 case Intrinsic::x86_sse2_cmp_pd: 3513 // FIXME: For x86_avx_cmp_pd_256 and x86_avx_cmp_ps_256 this function 3514 // generates reasonably looking IR that fails in the backend with "Do not 3515 // know how to split the result of this operator!". 3516 handleVectorComparePackedIntrinsic(I); 3517 break; 3518 3519 case Intrinsic::x86_bmi_bextr_32: 3520 case Intrinsic::x86_bmi_bextr_64: 3521 case Intrinsic::x86_bmi_bzhi_32: 3522 case Intrinsic::x86_bmi_bzhi_64: 3523 case Intrinsic::x86_bmi_pdep_32: 3524 case Intrinsic::x86_bmi_pdep_64: 3525 case Intrinsic::x86_bmi_pext_32: 3526 case Intrinsic::x86_bmi_pext_64: 3527 handleBmiIntrinsic(I); 3528 break; 3529 3530 case Intrinsic::x86_pclmulqdq: 3531 case Intrinsic::x86_pclmulqdq_256: 3532 case Intrinsic::x86_pclmulqdq_512: 3533 handlePclmulIntrinsic(I); 3534 break; 3535 3536 case Intrinsic::x86_sse41_round_sd: 3537 handleUnarySdIntrinsic(I); 3538 break; 3539 case Intrinsic::x86_sse2_max_sd: 3540 case Intrinsic::x86_sse2_min_sd: 3541 handleBinarySdIntrinsic(I); 3542 break; 3543 3544 case Intrinsic::fshl: 3545 case Intrinsic::fshr: 3546 handleFunnelShift(I); 3547 break; 3548 3549 case Intrinsic::is_constant: 3550 // The result of llvm.is.constant() is always defined. 3551 setShadow(&I, getCleanShadow(&I)); 3552 setOrigin(&I, getCleanOrigin()); 3553 break; 3554 3555 default: 3556 if (!handleUnknownIntrinsic(I)) 3557 visitInstruction(I); 3558 break; 3559 } 3560 } 3561 3562 void visitLibAtomicLoad(CallBase &CB) { 3563 // Since we use getNextNode here, we can't have CB terminate the BB. 3564 assert(isa<CallInst>(CB)); 3565 3566 IRBuilder<> IRB(&CB); 3567 Value *Size = CB.getArgOperand(0); 3568 Value *SrcPtr = CB.getArgOperand(1); 3569 Value *DstPtr = CB.getArgOperand(2); 3570 Value *Ordering = CB.getArgOperand(3); 3571 // Convert the call to have at least Acquire ordering to make sure 3572 // the shadow operations aren't reordered before it. 
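// The ordering is only known at run time, so instead of branching we index
// a small constant table with it. Conceptually (a sketch, assuming the
// libatomic memory-order encoding): NewOrdering = max(Ordering, acquire),
// i.e. relaxed/consume are upgraded and stronger orderings are kept as-is.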
3573 Value *NewOrdering =
3574 IRB.CreateExtractElement(makeAddAcquireOrderingTable(IRB), Ordering);
3575 CB.setArgOperand(3, NewOrdering);
3576
3577 IRBuilder<> NextIRB(CB.getNextNode());
3578 NextIRB.SetCurrentDebugLocation(CB.getDebugLoc());
3579
3580 Value *SrcShadowPtr, *SrcOriginPtr;
3581 std::tie(SrcShadowPtr, SrcOriginPtr) =
3582 getShadowOriginPtr(SrcPtr, NextIRB, NextIRB.getInt8Ty(), Align(1),
3583 /*isStore*/ false);
3584 Value *DstShadowPtr =
3585 getShadowOriginPtr(DstPtr, NextIRB, NextIRB.getInt8Ty(), Align(1),
3586 /*isStore*/ true)
3587 .first;
3588
3589 NextIRB.CreateMemCpy(DstShadowPtr, Align(1), SrcShadowPtr, Align(1), Size);
3590 if (MS.TrackOrigins) {
3591 Value *SrcOrigin = NextIRB.CreateAlignedLoad(MS.OriginTy, SrcOriginPtr,
3592 kMinOriginAlignment);
3593 Value *NewOrigin = updateOrigin(SrcOrigin, NextIRB);
3594 NextIRB.CreateCall(MS.MsanSetOriginFn, {DstPtr, Size, NewOrigin});
3595 }
3596 }
3597
3598 void visitLibAtomicStore(CallBase &CB) {
3599 IRBuilder<> IRB(&CB);
3600 Value *Size = CB.getArgOperand(0);
3601 Value *DstPtr = CB.getArgOperand(2);
3602 Value *Ordering = CB.getArgOperand(3);
3603 // Convert the call to have at least Release ordering to make sure
3604 // the shadow operations aren't reordered after it.
3605 Value *NewOrdering =
3606 IRB.CreateExtractElement(makeAddReleaseOrderingTable(IRB), Ordering);
3607 CB.setArgOperand(3, NewOrdering);
3608
3609 Value *DstShadowPtr =
3610 getShadowOriginPtr(DstPtr, IRB, IRB.getInt8Ty(), Align(1),
3611 /*isStore*/ true)
3612 .first;
3613
3614 // Atomic store always paints clean shadow/origin. See file header.
3615 IRB.CreateMemSet(DstShadowPtr, getCleanShadow(IRB.getInt8Ty()), Size,
3616 Align(1));
3617 }
3618
3619 void visitCallBase(CallBase &CB) {
3620 assert(!CB.getMetadata("nosanitize"));
3621 if (CB.isInlineAsm()) {
3622 // For inline asm (either a call to an asm function, or a callbr
3623 // instruction), do the usual thing: check argument shadow and mark all
3624 // outputs as clean. Note that any side effects of the inline asm that
3625 // are not immediately visible in its constraints are not handled.
3626 if (ClHandleAsmConservative && MS.CompileKernel)
3627 visitAsmInstruction(CB);
3628 else
3629 visitInstruction(CB);
3630 return;
3631 }
3632 LibFunc LF;
3633 if (TLI->getLibFunc(CB, LF)) {
3634 // libatomic.a functions need to have special handling because there isn't
3635 // a good way to intercept them or compile the library with
3636 // instrumentation.
3637 switch (LF) {
3638 case LibFunc_atomic_load:
3639 if (!isa<CallInst>(CB)) {
3640 llvm::errs() << "MSAN -- cannot instrument invoke of libatomic load. "
3641 "Ignoring!\n";
3642 break;
3643 }
3644 visitLibAtomicLoad(CB);
3645 return;
3646 case LibFunc_atomic_store:
3647 visitLibAtomicStore(CB);
3648 return;
3649 default:
3650 break;
3651 }
3652 }
3653
3654 if (auto *Call = dyn_cast<CallInst>(&CB)) {
3655 assert(!isa<IntrinsicInst>(Call) && "intrinsics are handled elsewhere");
3656
3657 // We are going to insert code that relies on the fact that the callee
3658 // will become a non-readonly function after it is instrumented by us. To
3659 // prevent this code from being optimized out, mark that function
3660 // non-readonly in advance.
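// For example, if the callee kept ReadNone/ReadOnly, the optimizer could
// legally sink or delete the __msan_param_tls stores we emit around the
// call; stripping the attributes below keeps those stores alive.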
3661 AttrBuilder B;
3662 B.addAttribute(Attribute::ReadOnly)
3663 .addAttribute(Attribute::ReadNone)
3664 .addAttribute(Attribute::WriteOnly)
3665 .addAttribute(Attribute::ArgMemOnly)
3666 .addAttribute(Attribute::Speculatable);
3667
3668 Call->removeFnAttrs(B);
3669 if (Function *Func = Call->getCalledFunction()) {
3670 Func->removeFnAttrs(B);
3671 }
3672
3673 maybeMarkSanitizerLibraryCallNoBuiltin(Call, TLI);
3674 }
3675 IRBuilder<> IRB(&CB);
3676 bool MayCheckCall = ClEagerChecks;
3677 if (Function *Func = CB.getCalledFunction()) {
3678 // __sanitizer_unaligned_{load,store} functions may be called by users
3679 // and always expect shadows in the TLS. So don't check them.
3680 MayCheckCall &= !Func->getName().startswith("__sanitizer_unaligned_");
3681 }
3682
3683 unsigned ArgOffset = 0;
3684 LLVM_DEBUG(dbgs() << " CallSite: " << CB << "\n");
3685 for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End;
3686 ++ArgIt) {
3687 Value *A = *ArgIt;
3688 unsigned i = ArgIt - CB.arg_begin();
3689 if (!A->getType()->isSized()) {
3690 LLVM_DEBUG(dbgs() << "Arg " << i << " is not sized: " << CB << "\n");
3691 continue;
3692 }
3693 unsigned Size = 0;
3694 Value *Store = nullptr;
3695 // Compute the shadow for the arg even if it is ByVal, because
3696 // in that case getShadow() will copy the actual arg shadow to
3697 // __msan_param_tls.
3698 Value *ArgShadow = getShadow(A);
3699 Value *ArgShadowBase = getShadowPtrForArgument(A, IRB, ArgOffset);
3700 LLVM_DEBUG(dbgs() << " Arg#" << i << ": " << *A
3701 << " Shadow: " << *ArgShadow << "\n");
3702 bool ArgIsInitialized = false;
3703 const DataLayout &DL = F.getParent()->getDataLayout();
3704
3705 bool ByVal = CB.paramHasAttr(i, Attribute::ByVal);
3706 bool NoUndef = CB.paramHasAttr(i, Attribute::NoUndef);
3707 bool EagerCheck = MayCheckCall && !ByVal && NoUndef;
3708
3709 if (EagerCheck) {
3710 insertShadowCheck(A, &CB);
3711 Size = DL.getTypeAllocSize(A->getType());
3712 } else {
3713 if (ByVal) {
3714 // ByVal requires some special handling, as it's too big for a single
3715 // load.
3716 assert(A->getType()->isPointerTy() &&
3717 "ByVal argument is not a pointer!");
3718 Size = DL.getTypeAllocSize(CB.getParamByValType(i));
3719 if (ArgOffset + Size > kParamTLSSize)
3720 break;
3721 const MaybeAlign ParamAlignment(CB.getParamAlign(i));
3722 MaybeAlign Alignment = llvm::None;
3723 if (ParamAlignment)
3724 Alignment = std::min(*ParamAlignment, kShadowTLSAlignment);
3725 Value *AShadowPtr =
3726 getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), Alignment,
3727 /*isStore*/ false)
3728 .first;
3729
3730 Store = IRB.CreateMemCpy(ArgShadowBase, Alignment, AShadowPtr,
3731 Alignment, Size);
3732 // TODO(glider): need to copy origins.
3733 } else { 3734 // Any other parameters mean we need bit-grained tracking of uninit 3735 // data 3736 Size = DL.getTypeAllocSize(A->getType()); 3737 if (ArgOffset + Size > kParamTLSSize) 3738 break; 3739 Store = IRB.CreateAlignedStore(ArgShadow, ArgShadowBase, 3740 kShadowTLSAlignment); 3741 Constant *Cst = dyn_cast<Constant>(ArgShadow); 3742 if (Cst && Cst->isNullValue()) 3743 ArgIsInitialized = true; 3744 } 3745 if (MS.TrackOrigins && !ArgIsInitialized) 3746 IRB.CreateStore(getOrigin(A), 3747 getOriginPtrForArgument(A, IRB, ArgOffset)); 3748 (void)Store; 3749 assert(Store != nullptr); 3750 LLVM_DEBUG(dbgs() << " Param:" << *Store << "\n"); 3751 } 3752 assert(Size != 0); 3753 ArgOffset += alignTo(Size, kShadowTLSAlignment); 3754 } 3755 LLVM_DEBUG(dbgs() << " done with call args\n"); 3756 3757 FunctionType *FT = CB.getFunctionType(); 3758 if (FT->isVarArg()) { 3759 VAHelper->visitCallBase(CB, IRB); 3760 } 3761 3762 // Now, get the shadow for the RetVal. 3763 if (!CB.getType()->isSized()) 3764 return; 3765 // Don't emit the epilogue for musttail call returns. 3766 if (isa<CallInst>(CB) && cast<CallInst>(CB).isMustTailCall()) 3767 return; 3768 3769 if (MayCheckCall && CB.hasRetAttr(Attribute::NoUndef)) { 3770 setShadow(&CB, getCleanShadow(&CB)); 3771 setOrigin(&CB, getCleanOrigin()); 3772 return; 3773 } 3774 3775 IRBuilder<> IRBBefore(&CB); 3776 // Until we have full dynamic coverage, make sure the retval shadow is 0. 3777 Value *Base = getShadowPtrForRetval(&CB, IRBBefore); 3778 IRBBefore.CreateAlignedStore(getCleanShadow(&CB), Base, 3779 kShadowTLSAlignment); 3780 BasicBlock::iterator NextInsn; 3781 if (isa<CallInst>(CB)) { 3782 NextInsn = ++CB.getIterator(); 3783 assert(NextInsn != CB.getParent()->end()); 3784 } else { 3785 BasicBlock *NormalDest = cast<InvokeInst>(CB).getNormalDest(); 3786 if (!NormalDest->getSinglePredecessor()) { 3787 // FIXME: this case is tricky, so we are just conservative here. 3788 // Perhaps we need to split the edge between this BB and NormalDest, 3789 // but a naive attempt to use SplitEdge leads to a crash. 3790 setShadow(&CB, getCleanShadow(&CB)); 3791 setOrigin(&CB, getCleanOrigin()); 3792 return; 3793 } 3794 // FIXME: NextInsn is likely in a basic block that has not been visited yet. 3795 // Anything inserted there will be instrumented by MSan later! 3796 NextInsn = NormalDest->getFirstInsertionPt(); 3797 assert(NextInsn != NormalDest->end() && 3798 "Could not find insertion point for retval shadow load"); 3799 } 3800 IRBuilder<> IRBAfter(&*NextInsn); 3801 Value *RetvalShadow = IRBAfter.CreateAlignedLoad( 3802 getShadowTy(&CB), getShadowPtrForRetval(&CB, IRBAfter), 3803 kShadowTLSAlignment, "_msret"); 3804 setShadow(&CB, RetvalShadow); 3805 if (MS.TrackOrigins) 3806 setOrigin(&CB, IRBAfter.CreateLoad(MS.OriginTy, 3807 getOriginPtrForRetval(IRBAfter))); 3808 } 3809 3810 bool isAMustTailRetVal(Value *RetVal) { 3811 if (auto *I = dyn_cast<BitCastInst>(RetVal)) { 3812 RetVal = I->getOperand(0); 3813 } 3814 if (auto *I = dyn_cast<CallInst>(RetVal)) { 3815 return I->isMustTailCall(); 3816 } 3817 return false; 3818 } 3819 3820 void visitReturnInst(ReturnInst &I) { 3821 IRBuilder<> IRB(&I); 3822 Value *RetVal = I.getReturnValue(); 3823 if (!RetVal) return; 3824 // Don't emit the epilogue for musttail call returns. 
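// (The IR verifier requires a musttail call to be followed only by an
// optional bitcast and the ret, so there is no legal point at which to
// insert the retval shadow store; the callee's TLS slots are used as-is.)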
3825 if (isAMustTailRetVal(RetVal)) return;
3826 Value *ShadowPtr = getShadowPtrForRetval(RetVal, IRB);
3827 bool HasNoUndef =
3828 F.hasRetAttribute(Attribute::NoUndef);
3829 bool StoreShadow = !(ClEagerChecks && HasNoUndef);
3830 // FIXME: Consider using SpecialCaseList to specify a list of functions that
3831 // must always return fully initialized values. For now, we hardcode "main".
3832 bool EagerCheck = (ClEagerChecks && HasNoUndef) || (F.getName() == "main");
3833
3834 Value *Shadow = getShadow(RetVal);
3835 bool StoreOrigin = true;
3836 if (EagerCheck) {
3837 insertShadowCheck(RetVal, &I);
3838 Shadow = getCleanShadow(RetVal);
3839 StoreOrigin = false;
3840 }
3841
3842 // The caller may still expect information passed over TLS if we pass our
3843 // check.
3844 if (StoreShadow) {
3845 IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
3846 if (MS.TrackOrigins && StoreOrigin)
3847 IRB.CreateStore(getOrigin(RetVal), getOriginPtrForRetval(IRB));
3848 }
3849 }
3850
3851 void visitPHINode(PHINode &I) {
3852 IRBuilder<> IRB(&I);
3853 if (!PropagateShadow) {
3854 setShadow(&I, getCleanShadow(&I));
3855 setOrigin(&I, getCleanOrigin());
3856 return;
3857 }
3858
3859 ShadowPHINodes.push_back(&I);
3860 setShadow(&I, IRB.CreatePHI(getShadowTy(&I), I.getNumIncomingValues(),
3861 "_msphi_s"));
3862 if (MS.TrackOrigins)
3863 setOrigin(&I, IRB.CreatePHI(MS.OriginTy, I.getNumIncomingValues(),
3864 "_msphi_o"));
3865 }
3866
3867 Value *getLocalVarDescription(AllocaInst &I) {
3868 SmallString<2048> StackDescriptionStorage;
3869 raw_svector_ostream StackDescription(StackDescriptionStorage);
3870 // We create a string with a description of the stack allocation and
3871 // pass it into __msan_set_alloca_origin.
3872 // It will be printed by the run-time if a stack-originated UMR is found.
3873 // The first 4 bytes of the string are set to '----' and will be replaced
3874 // by the run-time with the allocation's origin id on the first call.
3875 StackDescription << "----" << I.getName() << "@" << F.getName();
3876 return createPrivateNonConstGlobalForString(*F.getParent(),
3877 StackDescription.str());
3878 }
3879
3880 void poisonAllocaUserspace(AllocaInst &I, IRBuilder<> &IRB, Value *Len) {
3881 if (PoisonStack && ClPoisonStackWithCall) {
3882 IRB.CreateCall(MS.MsanPoisonStackFn,
3883 {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
3884 } else {
3885 Value *ShadowBase, *OriginBase;
3886 std::tie(ShadowBase, OriginBase) = getShadowOriginPtr(
3887 &I, IRB, IRB.getInt8Ty(), Align(1), /*isStore*/ true);
3888
3889 Value *PoisonValue = IRB.getInt8(PoisonStack ?
ClPoisonStackPattern : 0); 3890 IRB.CreateMemSet(ShadowBase, PoisonValue, Len, 3891 MaybeAlign(I.getAlignment())); 3892 } 3893 3894 if (PoisonStack && MS.TrackOrigins) { 3895 Value *Descr = getLocalVarDescription(I); 3896 IRB.CreateCall(MS.MsanSetAllocaOrigin4Fn, 3897 {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len, 3898 IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy()), 3899 IRB.CreatePointerCast(&F, MS.IntptrTy)}); 3900 } 3901 } 3902 3903 void poisonAllocaKmsan(AllocaInst &I, IRBuilder<> &IRB, Value *Len) { 3904 Value *Descr = getLocalVarDescription(I); 3905 if (PoisonStack) { 3906 IRB.CreateCall(MS.MsanPoisonAllocaFn, 3907 {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len, 3908 IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy())}); 3909 } else { 3910 IRB.CreateCall(MS.MsanUnpoisonAllocaFn, 3911 {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len}); 3912 } 3913 } 3914 3915 void instrumentAlloca(AllocaInst &I, Instruction *InsPoint = nullptr) { 3916 if (!InsPoint) 3917 InsPoint = &I; 3918 IRBuilder<> IRB(InsPoint->getNextNode()); 3919 const DataLayout &DL = F.getParent()->getDataLayout(); 3920 uint64_t TypeSize = DL.getTypeAllocSize(I.getAllocatedType()); 3921 Value *Len = ConstantInt::get(MS.IntptrTy, TypeSize); 3922 if (I.isArrayAllocation()) 3923 Len = IRB.CreateMul(Len, I.getArraySize()); 3924 3925 if (MS.CompileKernel) 3926 poisonAllocaKmsan(I, IRB, Len); 3927 else 3928 poisonAllocaUserspace(I, IRB, Len); 3929 } 3930 3931 void visitAllocaInst(AllocaInst &I) { 3932 setShadow(&I, getCleanShadow(&I)); 3933 setOrigin(&I, getCleanOrigin()); 3934 // We'll get to this alloca later unless it's poisoned at the corresponding 3935 // llvm.lifetime.start. 3936 AllocaSet.insert(&I); 3937 } 3938 3939 void visitSelectInst(SelectInst& I) { 3940 IRBuilder<> IRB(&I); 3941 // a = select b, c, d 3942 Value *B = I.getCondition(); 3943 Value *C = I.getTrueValue(); 3944 Value *D = I.getFalseValue(); 3945 Value *Sb = getShadow(B); 3946 Value *Sc = getShadow(C); 3947 Value *Sd = getShadow(D); 3948 3949 // Result shadow if condition shadow is 0. 3950 Value *Sa0 = IRB.CreateSelect(B, Sc, Sd); 3951 Value *Sa1; 3952 if (I.getType()->isAggregateType()) { 3953 // To avoid "sign extending" i1 to an arbitrary aggregate type, we just do 3954 // an extra "select". This results in much more compact IR. 3955 // Sa = select Sb, poisoned, (select b, Sc, Sd) 3956 Sa1 = getPoisonedShadow(getShadowTy(I.getType())); 3957 } else { 3958 // Sa = select Sb, [ (c^d) | Sc | Sd ], [ b ? Sc : Sd ] 3959 // If Sb (condition is poisoned), look for bits in c and d that are equal 3960 // and both unpoisoned. 3961 // If !Sb (condition is unpoisoned), simply pick one of Sc and Sd. 3962 3963 // Cast arguments to shadow-compatible type. 3964 C = CreateAppToShadowCast(IRB, C); 3965 D = CreateAppToShadowCast(IRB, D); 3966 3967 // Result shadow if condition shadow is 1. 3968 Sa1 = IRB.CreateOr({IRB.CreateXor(C, D), Sc, Sd}); 3969 } 3970 Value *Sa = IRB.CreateSelect(Sb, Sa1, Sa0, "_msprop_select"); 3971 setShadow(&I, Sa); 3972 if (MS.TrackOrigins) { 3973 // Origins are always i32, so any vector conditions must be flattened. 3974 // FIXME: consider tracking vector origins for app vectors? 3975 if (B->getType()->isVectorTy()) { 3976 Type *FlatTy = getShadowTyNoVec(B->getType()); 3977 B = IRB.CreateICmpNE(IRB.CreateBitCast(B, FlatTy), 3978 ConstantInt::getNullValue(FlatTy)); 3979 Sb = IRB.CreateICmpNE(IRB.CreateBitCast(Sb, FlatTy), 3980 ConstantInt::getNullValue(FlatTy)); 3981 } 3982 // a = select b, c, d 3983 // Oa = Sb ? Ob : (b ? 
Oc : Od)
3984 setOrigin(
3985 &I, IRB.CreateSelect(Sb, getOrigin(I.getCondition()),
3986 IRB.CreateSelect(B, getOrigin(I.getTrueValue()),
3987 getOrigin(I.getFalseValue()))));
3988 }
3989 }
3990
3991 void visitLandingPadInst(LandingPadInst &I) {
3992 // Do nothing.
3993 // See https://github.com/google/sanitizers/issues/504
3994 setShadow(&I, getCleanShadow(&I));
3995 setOrigin(&I, getCleanOrigin());
3996 }
3997
3998 void visitCatchSwitchInst(CatchSwitchInst &I) {
3999 setShadow(&I, getCleanShadow(&I));
4000 setOrigin(&I, getCleanOrigin());
4001 }
4002
4003 void visitFuncletPadInst(FuncletPadInst &I) {
4004 setShadow(&I, getCleanShadow(&I));
4005 setOrigin(&I, getCleanOrigin());
4006 }
4007
4008 void visitGetElementPtrInst(GetElementPtrInst &I) {
4009 handleShadowOr(I);
4010 }
4011
4012 void visitExtractValueInst(ExtractValueInst &I) {
4013 IRBuilder<> IRB(&I);
4014 Value *Agg = I.getAggregateOperand();
4015 LLVM_DEBUG(dbgs() << "ExtractValue: " << I << "\n");
4016 Value *AggShadow = getShadow(Agg);
4017 LLVM_DEBUG(dbgs() << " AggShadow: " << *AggShadow << "\n");
4018 Value *ResShadow = IRB.CreateExtractValue(AggShadow, I.getIndices());
4019 LLVM_DEBUG(dbgs() << " ResShadow: " << *ResShadow << "\n");
4020 setShadow(&I, ResShadow);
4021 setOriginForNaryOp(I);
4022 }
4023
4024 void visitInsertValueInst(InsertValueInst &I) {
4025 IRBuilder<> IRB(&I);
4026 LLVM_DEBUG(dbgs() << "InsertValue: " << I << "\n");
4027 Value *AggShadow = getShadow(I.getAggregateOperand());
4028 Value *InsShadow = getShadow(I.getInsertedValueOperand());
4029 LLVM_DEBUG(dbgs() << " AggShadow: " << *AggShadow << "\n");
4030 LLVM_DEBUG(dbgs() << " InsShadow: " << *InsShadow << "\n");
4031 Value *Res = IRB.CreateInsertValue(AggShadow, InsShadow, I.getIndices());
4032 LLVM_DEBUG(dbgs() << " Res: " << *Res << "\n");
4033 setShadow(&I, Res);
4034 setOriginForNaryOp(I);
4035 }
4036
4037 void dumpInst(Instruction &I) {
4038 if (CallInst *CI = dyn_cast<CallInst>(&I)) {
4039 errs() << "ZZZ call " << CI->getCalledFunction()->getName() << "\n";
4040 } else {
4041 errs() << "ZZZ " << I.getOpcodeName() << "\n";
4042 }
4043 errs() << "QQQ " << I << "\n";
4044 }
4045
4046 void visitResumeInst(ResumeInst &I) {
4047 LLVM_DEBUG(dbgs() << "Resume: " << I << "\n");
4048 // Nothing to do here.
4049 }
4050
4051 void visitCleanupReturnInst(CleanupReturnInst &CRI) {
4052 LLVM_DEBUG(dbgs() << "CleanupReturn: " << CRI << "\n");
4053 // Nothing to do here.
4054 }
4055
4056 void visitCatchReturnInst(CatchReturnInst &CRI) {
4057 LLVM_DEBUG(dbgs() << "CatchReturn: " << CRI << "\n");
4058 // Nothing to do here.
4059 }
4060
4061 void instrumentAsmArgument(Value *Operand, Instruction &I, IRBuilder<> &IRB,
4062 const DataLayout &DL, bool isOutput) {
4063 // For each assembly argument, we check its value for being initialized.
4064 // If the argument is a pointer, we assume it points to a single element
4065 // of the corresponding type (or to an 8-byte word, if the type is unsized).
4066 // Each such pointer is instrumented with a call to the runtime library.
4067 Type *OpType = Operand->getType();
4068 // Check the operand value itself.
4069 insertShadowCheck(Operand, &I);
4070 if (!OpType->isPointerTy() || !isOutput) {
4071 assert(!isOutput);
4072 return;
4073 }
4074 Type *ElType = OpType->getPointerElementType();
4075 if (!ElType->isSized())
4076 return;
4077 int Size = DL.getTypeStoreSize(ElType);
4078 Value *Ptr = IRB.CreatePointerCast(Operand, IRB.getInt8PtrTy());
4079 Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size);
4080 IRB.CreateCall(MS.MsanInstrumentAsmStoreFn, {Ptr, SizeVal});
4081 }
4082
4083 /// Get the number of output arguments returned by pointers.
4084 int getNumOutputArgs(InlineAsm *IA, CallBase *CB) {
4085 int NumRetOutputs = 0;
4086 int NumOutputs = 0;
4087 Type *RetTy = cast<Value>(CB)->getType();
4088 if (!RetTy->isVoidTy()) {
4089 // Register outputs are returned via the CallInst return value.
4090 auto *ST = dyn_cast<StructType>(RetTy);
4091 if (ST)
4092 NumRetOutputs = ST->getNumElements();
4093 else
4094 NumRetOutputs = 1;
4095 }
4096 InlineAsm::ConstraintInfoVector Constraints = IA->ParseConstraints();
4097 for (const InlineAsm::ConstraintInfo &Info : Constraints) {
4098 switch (Info.Type) {
4099 case InlineAsm::isOutput:
4100 NumOutputs++;
4101 break;
4102 default:
4103 break;
4104 }
4105 }
4106 return NumOutputs - NumRetOutputs;
4107 }
4108
4109 void visitAsmInstruction(Instruction &I) {
4110 // Conservative inline assembly handling: check for poisoned shadow of
4111 // asm() arguments, then unpoison the result and all the memory locations
4112 // pointed to by those arguments.
4113 // An inline asm() statement in C++ contains lists of input and output
4114 // arguments used by the assembly code. These are mapped to operands of the
4115 // CallInst as follows:
4116 // - nR register outputs ("=r") are returned by value in a single structure
4117 // (SSA value of the CallInst);
4118 // - nO other outputs ("=m" and others) are returned by pointer as the first
4119 // nO operands of the CallInst;
4120 // - nI inputs ("r", "m" and others) are passed to CallInst as the
4121 // remaining nI operands.
4122 // The total number of asm() arguments in the source is nR+nO+nI, and the
4123 // corresponding CallInst has nO+nI+1 operands (the last operand is the
4124 // function to be called).
4125 const DataLayout &DL = F.getParent()->getDataLayout();
4126 CallBase *CB = cast<CallBase>(&I);
4127 IRBuilder<> IRB(&I);
4128 InlineAsm *IA = cast<InlineAsm>(CB->getCalledOperand());
4129 int OutputArgs = getNumOutputArgs(IA, CB);
4130 // The last operand of a CallInst is the function itself.
4131 int NumOperands = CB->getNumOperands() - 1;
4132
4133 // Check the input arguments. We do this before unpoisoning the output
4134 // arguments, so that we won't overwrite uninit values before checking them.
4135 for (int i = OutputArgs; i < NumOperands; i++) {
4136 Value *Operand = CB->getOperand(i);
4137 instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ false);
4138 }
4139 // Unpoison the output arguments. This must happen before the actual
4140 // InlineAsm call, so that the shadow for memory published in the asm()
4141 // statement remains valid.
4142 for (int i = 0; i < OutputArgs; i++) {
4143 Value *Operand = CB->getOperand(i);
4144 instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ true);
4145 }
4146
4147 setShadow(&I, getCleanShadow(&I));
4148 setOrigin(&I, getCleanOrigin());
4149 }
4150
4151 void visitFreezeInst(FreezeInst &I) {
4152 // Freeze always returns a fully defined value.
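// E.g. %y = freeze i32 %x yields some arbitrary but fixed value when %x is
// undef or poison, so %y can safely be treated as fully initialized and we
// give it a clean shadow and origin.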
4153 setShadow(&I, getCleanShadow(&I)); 4154 setOrigin(&I, getCleanOrigin()); 4155 } 4156 4157 void visitInstruction(Instruction &I) { 4158 // Everything else: stop propagating and check for poisoned shadow. 4159 if (ClDumpStrictInstructions) 4160 dumpInst(I); 4161 LLVM_DEBUG(dbgs() << "DEFAULT: " << I << "\n"); 4162 for (size_t i = 0, n = I.getNumOperands(); i < n; i++) { 4163 Value *Operand = I.getOperand(i); 4164 if (Operand->getType()->isSized()) 4165 insertShadowCheck(Operand, &I); 4166 } 4167 setShadow(&I, getCleanShadow(&I)); 4168 setOrigin(&I, getCleanOrigin()); 4169 } 4170 }; 4171 4172 /// AMD64-specific implementation of VarArgHelper. 4173 struct VarArgAMD64Helper : public VarArgHelper { 4174 // An unfortunate workaround for asymmetric lowering of va_arg stuff. 4175 // See a comment in visitCallBase for more details. 4176 static const unsigned AMD64GpEndOffset = 48; // AMD64 ABI Draft 0.99.6 p3.5.7 4177 static const unsigned AMD64FpEndOffsetSSE = 176; 4178 // If SSE is disabled, fp_offset in va_list is zero. 4179 static const unsigned AMD64FpEndOffsetNoSSE = AMD64GpEndOffset; 4180 4181 unsigned AMD64FpEndOffset; 4182 Function &F; 4183 MemorySanitizer &MS; 4184 MemorySanitizerVisitor &MSV; 4185 Value *VAArgTLSCopy = nullptr; 4186 Value *VAArgTLSOriginCopy = nullptr; 4187 Value *VAArgOverflowSize = nullptr; 4188 4189 SmallVector<CallInst*, 16> VAStartInstrumentationList; 4190 4191 enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory }; 4192 4193 VarArgAMD64Helper(Function &F, MemorySanitizer &MS, 4194 MemorySanitizerVisitor &MSV) 4195 : F(F), MS(MS), MSV(MSV) { 4196 AMD64FpEndOffset = AMD64FpEndOffsetSSE; 4197 for (const auto &Attr : F.getAttributes().getFnAttrs()) { 4198 if (Attr.isStringAttribute() && 4199 (Attr.getKindAsString() == "target-features")) { 4200 if (Attr.getValueAsString().contains("-sse")) 4201 AMD64FpEndOffset = AMD64FpEndOffsetNoSSE; 4202 break; 4203 } 4204 } 4205 } 4206 4207 ArgKind classifyArgument(Value* arg) { 4208 // A very rough approximation of X86_64 argument classification rules. 4209 Type *T = arg->getType(); 4210 if (T->isFPOrFPVectorTy() || T->isX86_MMXTy()) 4211 return AK_FloatingPoint; 4212 if (T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64) 4213 return AK_GeneralPurpose; 4214 if (T->isPointerTy()) 4215 return AK_GeneralPurpose; 4216 return AK_Memory; 4217 } 4218 4219 // For VarArg functions, store the argument shadow in an ABI-specific format 4220 // that corresponds to va_list layout. 4221 // We do this because Clang lowers va_arg in the frontend, and this pass 4222 // only sees the low level code that deals with va_list internals. 4223 // A much easier alternative (provided that Clang emits va_arg instructions) 4224 // would have been to associate each live instance of va_list with a copy of 4225 // MSanParamTLS, and extract shadow on va_arg() call in the argument list 4226 // order. 4227 void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override { 4228 unsigned GpOffset = 0; 4229 unsigned FpOffset = AMD64GpEndOffset; 4230 unsigned OverflowOffset = AMD64FpEndOffset; 4231 const DataLayout &DL = F.getParent()->getDataLayout(); 4232 for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End; 4233 ++ArgIt) { 4234 Value *A = *ArgIt; 4235 unsigned ArgNo = CB.getArgOperandNo(ArgIt); 4236 bool IsFixed = ArgNo < CB.getFunctionType()->getNumParams(); 4237 bool IsByVal = CB.paramHasAttr(ArgNo, Attribute::ByVal); 4238 if (IsByVal) { 4239 // ByVal arguments always go to the overflow area. 
4240 // Fixed arguments passed through the overflow area will be stepped 4241 // over by va_start, so don't count them towards the offset. 4242 if (IsFixed) 4243 continue; 4244 assert(A->getType()->isPointerTy()); 4245 Type *RealTy = CB.getParamByValType(ArgNo); 4246 uint64_t ArgSize = DL.getTypeAllocSize(RealTy); 4247 Value *ShadowBase = getShadowPtrForVAArgument( 4248 RealTy, IRB, OverflowOffset, alignTo(ArgSize, 8)); 4249 Value *OriginBase = nullptr; 4250 if (MS.TrackOrigins) 4251 OriginBase = getOriginPtrForVAArgument(RealTy, IRB, OverflowOffset); 4252 OverflowOffset += alignTo(ArgSize, 8); 4253 if (!ShadowBase) 4254 continue; 4255 Value *ShadowPtr, *OriginPtr; 4256 std::tie(ShadowPtr, OriginPtr) = 4257 MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), kShadowTLSAlignment, 4258 /*isStore*/ false); 4259 4260 IRB.CreateMemCpy(ShadowBase, kShadowTLSAlignment, ShadowPtr, 4261 kShadowTLSAlignment, ArgSize); 4262 if (MS.TrackOrigins) 4263 IRB.CreateMemCpy(OriginBase, kShadowTLSAlignment, OriginPtr, 4264 kShadowTLSAlignment, ArgSize); 4265 } else { 4266 ArgKind AK = classifyArgument(A); 4267 if (AK == AK_GeneralPurpose && GpOffset >= AMD64GpEndOffset) 4268 AK = AK_Memory; 4269 if (AK == AK_FloatingPoint && FpOffset >= AMD64FpEndOffset) 4270 AK = AK_Memory; 4271 Value *ShadowBase, *OriginBase = nullptr; 4272 switch (AK) { 4273 case AK_GeneralPurpose: 4274 ShadowBase = 4275 getShadowPtrForVAArgument(A->getType(), IRB, GpOffset, 8); 4276 if (MS.TrackOrigins) 4277 OriginBase = 4278 getOriginPtrForVAArgument(A->getType(), IRB, GpOffset); 4279 GpOffset += 8; 4280 break; 4281 case AK_FloatingPoint: 4282 ShadowBase = 4283 getShadowPtrForVAArgument(A->getType(), IRB, FpOffset, 16); 4284 if (MS.TrackOrigins) 4285 OriginBase = 4286 getOriginPtrForVAArgument(A->getType(), IRB, FpOffset); 4287 FpOffset += 16; 4288 break; 4289 case AK_Memory: 4290 if (IsFixed) 4291 continue; 4292 uint64_t ArgSize = DL.getTypeAllocSize(A->getType()); 4293 ShadowBase = 4294 getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset, 8); 4295 if (MS.TrackOrigins) 4296 OriginBase = 4297 getOriginPtrForVAArgument(A->getType(), IRB, OverflowOffset); 4298 OverflowOffset += alignTo(ArgSize, 8); 4299 } 4300 // Take fixed arguments into account for GpOffset and FpOffset, 4301 // but don't actually store shadows for them. 4302 // TODO(glider): don't call get*PtrForVAArgument() for them. 4303 if (IsFixed) 4304 continue; 4305 if (!ShadowBase) 4306 continue; 4307 Value *Shadow = MSV.getShadow(A); 4308 IRB.CreateAlignedStore(Shadow, ShadowBase, kShadowTLSAlignment); 4309 if (MS.TrackOrigins) { 4310 Value *Origin = MSV.getOrigin(A); 4311 unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType()); 4312 MSV.paintOrigin(IRB, Origin, OriginBase, StoreSize, 4313 std::max(kShadowTLSAlignment, kMinOriginAlignment)); 4314 } 4315 } 4316 } 4317 Constant *OverflowSize = 4318 ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AMD64FpEndOffset); 4319 IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS); 4320 } 4321 4322 /// Compute the shadow address for a given va_arg. 4323 Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, 4324 unsigned ArgOffset, unsigned ArgSize) { 4325 // Make sure we don't overflow __msan_va_arg_tls. 
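// If this argument's shadow would not fit, we return null and the caller
// skips the store; shadow for such tail varargs is simply not tracked.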
4326 if (ArgOffset + ArgSize > kParamTLSSize) 4327 return nullptr; 4328 Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy); 4329 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 4330 return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0), 4331 "_msarg_va_s"); 4332 } 4333 4334 /// Compute the origin address for a given va_arg. 4335 Value *getOriginPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, int ArgOffset) { 4336 Value *Base = IRB.CreatePointerCast(MS.VAArgOriginTLS, MS.IntptrTy); 4337 // getOriginPtrForVAArgument() is always called after 4338 // getShadowPtrForVAArgument(), so __msan_va_arg_origin_tls can never 4339 // overflow. 4340 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 4341 return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0), 4342 "_msarg_va_o"); 4343 } 4344 4345 void unpoisonVAListTagForInst(IntrinsicInst &I) { 4346 IRBuilder<> IRB(&I); 4347 Value *VAListTag = I.getArgOperand(0); 4348 Value *ShadowPtr, *OriginPtr; 4349 const Align Alignment = Align(8); 4350 std::tie(ShadowPtr, OriginPtr) = 4351 MSV.getShadowOriginPtr(VAListTag, IRB, IRB.getInt8Ty(), Alignment, 4352 /*isStore*/ true); 4353 4354 // Unpoison the whole __va_list_tag. 4355 // FIXME: magic ABI constants. 4356 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 4357 /* size */ 24, Alignment, false); 4358 // We shouldn't need to zero out the origins, as they're only checked for 4359 // nonzero shadow. 4360 } 4361 4362 void visitVAStartInst(VAStartInst &I) override { 4363 if (F.getCallingConv() == CallingConv::Win64) 4364 return; 4365 VAStartInstrumentationList.push_back(&I); 4366 unpoisonVAListTagForInst(I); 4367 } 4368 4369 void visitVACopyInst(VACopyInst &I) override { 4370 if (F.getCallingConv() == CallingConv::Win64) return; 4371 unpoisonVAListTagForInst(I); 4372 } 4373 4374 void finalizeInstrumentation() override { 4375 assert(!VAArgOverflowSize && !VAArgTLSCopy && 4376 "finalizeInstrumentation called twice"); 4377 if (!VAStartInstrumentationList.empty()) { 4378 // If there is a va_start in this function, make a backup copy of 4379 // va_arg_tls somewhere in the function entry block. 4380 IRBuilder<> IRB(MSV.FnPrologueEnd); 4381 VAArgOverflowSize = 4382 IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS); 4383 Value *CopySize = 4384 IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AMD64FpEndOffset), 4385 VAArgOverflowSize); 4386 VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize); 4387 IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8), CopySize); 4388 if (MS.TrackOrigins) { 4389 VAArgTLSOriginCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize); 4390 IRB.CreateMemCpy(VAArgTLSOriginCopy, Align(8), MS.VAArgOriginTLS, 4391 Align(8), CopySize); 4392 } 4393 } 4394 4395 // Instrument va_start. 4396 // Copy va_list shadow from the backup copy of the TLS contents. 
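// The magic offsets below (8 and 16) come from the System V AMD64 va_list
// layout, reproduced here as a reminder (this pass does not define it):
//   struct __va_list_tag {
//     unsigned int gp_offset;   // byte offset 0
//     unsigned int fp_offset;   // byte offset 4
//     void *overflow_arg_area;  // byte offset 8
//     void *reg_save_area;      // byte offset 16
//   };
// so we load VAListTag+16 to find the register save area and VAListTag+8
// to find the overflow area.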
4397 for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) { 4398 CallInst *OrigInst = VAStartInstrumentationList[i]; 4399 IRBuilder<> IRB(OrigInst->getNextNode()); 4400 Value *VAListTag = OrigInst->getArgOperand(0); 4401 4402 Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C); 4403 Value *RegSaveAreaPtrPtr = IRB.CreateIntToPtr( 4404 IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 4405 ConstantInt::get(MS.IntptrTy, 16)), 4406 PointerType::get(RegSaveAreaPtrTy, 0)); 4407 Value *RegSaveAreaPtr = 4408 IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr); 4409 Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr; 4410 const Align Alignment = Align(16); 4411 std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) = 4412 MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(), 4413 Alignment, /*isStore*/ true); 4414 IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment, 4415 AMD64FpEndOffset); 4416 if (MS.TrackOrigins) 4417 IRB.CreateMemCpy(RegSaveAreaOriginPtr, Alignment, VAArgTLSOriginCopy, 4418 Alignment, AMD64FpEndOffset); 4419 Type *OverflowArgAreaPtrTy = Type::getInt64PtrTy(*MS.C); 4420 Value *OverflowArgAreaPtrPtr = IRB.CreateIntToPtr( 4421 IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 4422 ConstantInt::get(MS.IntptrTy, 8)), 4423 PointerType::get(OverflowArgAreaPtrTy, 0)); 4424 Value *OverflowArgAreaPtr = 4425 IRB.CreateLoad(OverflowArgAreaPtrTy, OverflowArgAreaPtrPtr); 4426 Value *OverflowArgAreaShadowPtr, *OverflowArgAreaOriginPtr; 4427 std::tie(OverflowArgAreaShadowPtr, OverflowArgAreaOriginPtr) = 4428 MSV.getShadowOriginPtr(OverflowArgAreaPtr, IRB, IRB.getInt8Ty(), 4429 Alignment, /*isStore*/ true); 4430 Value *SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSCopy, 4431 AMD64FpEndOffset); 4432 IRB.CreateMemCpy(OverflowArgAreaShadowPtr, Alignment, SrcPtr, Alignment, 4433 VAArgOverflowSize); 4434 if (MS.TrackOrigins) { 4435 SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSOriginCopy, 4436 AMD64FpEndOffset); 4437 IRB.CreateMemCpy(OverflowArgAreaOriginPtr, Alignment, SrcPtr, Alignment, 4438 VAArgOverflowSize); 4439 } 4440 } 4441 } 4442 }; 4443 4444 /// MIPS64-specific implementation of VarArgHelper. 
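/// Unlike AMD64, this ABI is handled with a single linear region: vararg
/// shadows are stored sequentially in __msan_va_arg_tls at 8-byte-aligned
/// offsets, with no separate general-purpose/floating-point save areas.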
4445 struct VarArgMIPS64Helper : public VarArgHelper {
4446 Function &F;
4447 MemorySanitizer &MS;
4448 MemorySanitizerVisitor &MSV;
4449 Value *VAArgTLSCopy = nullptr;
4450 Value *VAArgSize = nullptr;
4451
4452 SmallVector<CallInst*, 16> VAStartInstrumentationList;
4453
4454 VarArgMIPS64Helper(Function &F, MemorySanitizer &MS,
4455 MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}
4456
4457 void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {
4458 unsigned VAArgOffset = 0;
4459 const DataLayout &DL = F.getParent()->getDataLayout();
4460 for (auto ArgIt = CB.arg_begin() + CB.getFunctionType()->getNumParams(),
4461 End = CB.arg_end();
4462 ArgIt != End; ++ArgIt) {
4463 Triple TargetTriple(F.getParent()->getTargetTriple());
4464 Value *A = *ArgIt;
4465 Value *Base;
4466 uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
4467 if (TargetTriple.getArch() == Triple::mips64) {
4468 // Adjust the shadow for arguments with size < 8 to match the placement
4469 // of bits in a big-endian system.
4470 if (ArgSize < 8)
4471 VAArgOffset += (8 - ArgSize);
4472 }
4473 Base = getShadowPtrForVAArgument(A->getType(), IRB, VAArgOffset, ArgSize);
4474 VAArgOffset += ArgSize;
4475 VAArgOffset = alignTo(VAArgOffset, 8);
4476 if (!Base)
4477 continue;
4478 IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
4479 }
4480
4481 Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(), VAArgOffset);
4482 // We reuse VAArgOverflowSizeTLS as VAArgSizeTLS here (i.e. to hold the
4483 // total size of all varargs) to avoid creating a new class member.
4484 IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
4485 }
4486
4487 /// Compute the shadow address for a given va_arg.
4488 Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
4489 unsigned ArgOffset, unsigned ArgSize) {
4490 // Make sure we don't overflow __msan_va_arg_tls.
4491 if (ArgOffset + ArgSize > kParamTLSSize) 4492 return nullptr; 4493 Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy); 4494 Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset)); 4495 return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0), 4496 "_msarg"); 4497 } 4498 4499 void visitVAStartInst(VAStartInst &I) override { 4500 IRBuilder<> IRB(&I); 4501 VAStartInstrumentationList.push_back(&I); 4502 Value *VAListTag = I.getArgOperand(0); 4503 Value *ShadowPtr, *OriginPtr; 4504 const Align Alignment = Align(8); 4505 std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr( 4506 VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true); 4507 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 4508 /* size */ 8, Alignment, false); 4509 } 4510 4511 void visitVACopyInst(VACopyInst &I) override { 4512 IRBuilder<> IRB(&I); 4513 VAStartInstrumentationList.push_back(&I); 4514 Value *VAListTag = I.getArgOperand(0); 4515 Value *ShadowPtr, *OriginPtr; 4516 const Align Alignment = Align(8); 4517 std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr( 4518 VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true); 4519 IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()), 4520 /* size */ 8, Alignment, false); 4521 } 4522 4523 void finalizeInstrumentation() override { 4524 assert(!VAArgSize && !VAArgTLSCopy && 4525 "finalizeInstrumentation called twice"); 4526 IRBuilder<> IRB(MSV.FnPrologueEnd); 4527 VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS); 4528 Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0), 4529 VAArgSize); 4530 4531 if (!VAStartInstrumentationList.empty()) { 4532 // If there is a va_start in this function, make a backup copy of 4533 // va_arg_tls somewhere in the function entry block. 4534 VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize); 4535 IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8), CopySize); 4536 } 4537 4538 // Instrument va_start. 4539 // Copy va_list shadow from the backup copy of the TLS contents. 4540 for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) { 4541 CallInst *OrigInst = VAStartInstrumentationList[i]; 4542 IRBuilder<> IRB(OrigInst->getNextNode()); 4543 Value *VAListTag = OrigInst->getArgOperand(0); 4544 Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C); 4545 Value *RegSaveAreaPtrPtr = 4546 IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy), 4547 PointerType::get(RegSaveAreaPtrTy, 0)); 4548 Value *RegSaveAreaPtr = 4549 IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr); 4550 Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr; 4551 const Align Alignment = Align(8); 4552 std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) = 4553 MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(), 4554 Alignment, /*isStore*/ true); 4555 IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment, 4556 CopySize); 4557 } 4558 } 4559 }; 4560 4561 /// AArch64-specific implementation of VarArgHelper. 4562 struct VarArgAArch64Helper : public VarArgHelper { 4563 static const unsigned kAArch64GrArgSize = 64; 4564 static const unsigned kAArch64VrArgSize = 128; 4565 4566 static const unsigned AArch64GrBegOffset = 0; 4567 static const unsigned AArch64GrEndOffset = kAArch64GrArgSize; 4568 // Make VR space aligned to 16 bytes. 
/// AArch64-specific implementation of VarArgHelper.
struct VarArgAArch64Helper : public VarArgHelper {
  static const unsigned kAArch64GrArgSize = 64;
  static const unsigned kAArch64VrArgSize = 128;

  static const unsigned AArch64GrBegOffset = 0;
  static const unsigned AArch64GrEndOffset = kAArch64GrArgSize;
  // Make VR space aligned to 16 bytes.
  static const unsigned AArch64VrBegOffset = AArch64GrEndOffset;
  static const unsigned AArch64VrEndOffset =
      AArch64VrBegOffset + kAArch64VrArgSize;
  static const unsigned AArch64VAEndOffset = AArch64VrEndOffset;

  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };

  VarArgAArch64Helper(Function &F, MemorySanitizer &MS,
                      MemorySanitizerVisitor &MSV)
      : F(F), MS(MS), MSV(MSV) {}

  ArgKind classifyArgument(Value* arg) {
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy())
      return AK_FloatingPoint;
    if ((T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
        || (T->isPointerTy()))
      return AK_GeneralPurpose;
    return AK_Memory;
  }

  // The instrumentation stores the argument shadow in a non-ABI-specific
  // format because it does not know which arguments are named (as in the
  // x86_64 case, Clang lowers va_arg in the frontend, and this pass only
  // sees the low-level code that deals with va_list internals).
  // The first eight GR registers (x0-x7) are saved in the first 64 bytes of
  // the va_arg TLS array, followed by the first eight FP/SIMD registers
  // (16 bytes each), and then the remaining arguments.
  // Using constant offsets within the va_arg TLS array allows fast copying
  // in finalizeInstrumentation().
  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {
    unsigned GrOffset = AArch64GrBegOffset;
    unsigned VrOffset = AArch64VrBegOffset;
    unsigned OverflowOffset = AArch64VAEndOffset;

    const DataLayout &DL = F.getParent()->getDataLayout();
    for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End;
         ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CB.getArgOperandNo(ArgIt);
      bool IsFixed = ArgNo < CB.getFunctionType()->getNumParams();
      ArgKind AK = classifyArgument(A);
      if (AK == AK_GeneralPurpose && GrOffset >= AArch64GrEndOffset)
        AK = AK_Memory;
      if (AK == AK_FloatingPoint && VrOffset >= AArch64VrEndOffset)
        AK = AK_Memory;
      Value *Base;
      switch (AK) {
      case AK_GeneralPurpose:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, GrOffset, 8);
        GrOffset += 8;
        break;
      case AK_FloatingPoint:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, VrOffset, 8);
        VrOffset += 16;
        break;
      case AK_Memory:
        // Don't count fixed arguments in the overflow area - va_start will
        // skip right over them.
        if (IsFixed)
          continue;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        Base = getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset,
                                         alignTo(ArgSize, 8));
        OverflowOffset += alignTo(ArgSize, 8);
        break;
      }
      // Fixed arguments have already advanced the Gp/Vr offsets above, but
      // don't bother to actually store their shadow.
      if (IsFixed)
        continue;
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }
    Constant *OverflowSize =
        ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AArch64VAEndOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }
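  // For reference, the fixed layout of the va_arg TLS array produced by
  // visitCallBase() above (a sketch derived from the constants at the top
  // of this struct):
  //
  //   [0, 64)    shadow of GR varargs passed in x0-x7
  //   [64, 192)  shadow of VR varargs passed in q0-q7 (16 bytes each)
  //   [192, ...) shadow of the stack-passed (overflow) varargs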
  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    // Unpoison the whole __va_list_tag (32 bytes on AArch64).
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    // Unpoison the whole __va_list_tag (32 bytes on AArch64).
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }

  // Retrieve a va_list field of 'void*' size.
  Value* getVAField64(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtrPtr =
        IRB.CreateIntToPtr(
            IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                          ConstantInt::get(MS.IntptrTy, offset)),
            Type::getInt64PtrTy(*MS.C));
    return IRB.CreateLoad(Type::getInt64Ty(*MS.C), SaveAreaPtrPtr);
  }

  // Retrieve a va_list field of 'int' size.
  Value* getVAField32(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtr =
        IRB.CreateIntToPtr(
            IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                          ConstantInt::get(MS.IntptrTy, offset)),
            Type::getInt32PtrTy(*MS.C));
    Value *SaveArea32 = IRB.CreateLoad(IRB.getInt32Ty(), SaveAreaPtr);
    // The field holds a signed 32-bit offset (__gr_offs/__vr_offs can be
    // negative), hence the sign extension.
    return IRB.CreateSExt(SaveArea32, MS.IntptrTy);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.FnPrologueEnd);
      VAArgOverflowSize =
          IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AArch64VAEndOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8),
                       CopySize);
    }

    Value *GrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64GrArgSize);
    Value *VrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64VrArgSize);
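    // The getVAField64/getVAField32 calls below assume the AAPCS64 va_list
    // layout (a reference sketch, not code emitted by this pass):
    //
    //   struct va_list {
    //     void *__stack;   // offset 0:  next stack-passed argument
    //     void *__gr_top;  // offset 8:  end of the GR register save area
    //     void *__vr_top;  // offset 16: end of the VR register save area
    //     int __gr_offs;   // offset 24: negative offset to the next GR arg
    //     int __vr_offs;   // offset 28: negative offset to the next VR arg
    //   };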
    // Instrument va_start, copy va_list shadow from the backup copy of
    // the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());

      Value *VAListTag = OrigInst->getArgOperand(0);

      // The AArch64 variadic ABI creates two register save areas for the
      // incoming arguments (one for the 64-bit general-purpose registers
      // x<n>..x7 and another for the 128-bit FP/SIMD registers v<n>..v7,
      // where n is the number of named arguments of each class).
      // We then need to propagate the shadow of the variadic arguments to
      // both regions, 'va::__gr_top + va::__gr_offs' and
      // 'va::__vr_top + va::__vr_offs'. The remaining arguments get their
      // shadow propagated at 'va::__stack'.
      // One caveat: only the shadow of the unnamed (variadic) arguments
      // needs to be propagated, but the call-site instrumentation saved the
      // shadow of 'all' the arguments. So, to copy the shadow values from
      // the va_arg TLS array, we need to adjust the offsets for both the GR
      // and VR areas based on the __{gr,vr}_offs values (which encode how
      // many named arguments were passed in registers).

      // Read the stack pointer from the va_list.
      Value *StackSaveAreaPtr = getVAField64(IRB, VAListTag, 0);

      // Read both __gr_top and __gr_offs and add them up.
      Value *GrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 8);
      Value *GrOffSaveArea = getVAField32(IRB, VAListTag, 24);

      Value *GrRegSaveAreaPtr = IRB.CreateAdd(GrTopSaveAreaPtr, GrOffSaveArea);

      // Read both __vr_top and __vr_offs and add them up.
      Value *VrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 16);
      Value *VrOffSaveArea = getVAField32(IRB, VAListTag, 28);

      Value *VrRegSaveAreaPtr = IRB.CreateAdd(VrTopSaveAreaPtr, VrOffSaveArea);

      // We do not know how many named arguments the callee has, yet at the
      // call site the shadow of all the arguments was saved. Since __gr_offs
      // is defined as '0 - ((8 - named_gr) * 8)', the idea is to propagate
      // only the variadic arguments' shadow by skipping the bytes of shadow
      // that correspond to the named arguments.
      Value *GrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(GrArgSize, GrOffSaveArea);

      Value *GrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(GrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Align(8), /*isStore*/ true)
              .first;

      Value *GrSrcPtr = IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                              GrRegSaveAreaShadowPtrOff);
      Value *GrCopySize = IRB.CreateSub(GrArgSize, GrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(GrRegSaveAreaShadowPtr, Align(8), GrSrcPtr, Align(8),
                       GrCopySize);

      // Again, but for FP/SIMD values.
      Value *VrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(VrArgSize, VrOffSaveArea);

      Value *VrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(VrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Align(8), /*isStore*/ true)
              .first;

      Value *VrSrcPtr = IRB.CreateInBoundsGEP(
          IRB.getInt8Ty(),
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VrBegOffset)),
          VrRegSaveAreaShadowPtrOff);
      Value *VrCopySize = IRB.CreateSub(VrArgSize, VrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(VrRegSaveAreaShadowPtr, Align(8), VrSrcPtr, Align(8),
                       VrCopySize);
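      // To make the offset arithmetic above concrete, a worked example
      // (hypothetical callee with two named GR arguments and one named VR
      // argument; the numbers follow from the definitions above and are not
      // produced by this pass):
      //   __gr_offs = 0 - ((8 - 2) * 8) = -48
      //   GrRegSaveAreaShadowPtrOff = 64 + (-48) = 16
      //     -> skip the 16 bytes of TLS shadow belonging to x0/x1 and copy
      //        GrCopySize = 64 - 16 = 48 bytes of shadow for x2..x7.
      //   __vr_offs = 0 - ((8 - 1) * 16) = -112
      //   VrRegSaveAreaShadowPtrOff = 128 + (-112) = 16
      //     -> skip the 16 bytes belonging to v0 and copy
      //        VrCopySize = 128 - 16 = 112 bytes of shadow for v1..v7.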
      // And finally for the remaining (stack-passed) arguments.
      Value *StackSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(StackSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Align(16), /*isStore*/ true)
              .first;

      Value *StackSrcPtr =
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VAEndOffset));

      IRB.CreateMemCpy(StackSaveAreaShadowPtr, Align(16), StackSrcPtr,
                       Align(16), VAArgOverflowSize);
    }
  }
};

/// PowerPC64-specific implementation of VarArgHelper.
struct VarArgPowerPC64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst*, 16> VAStartInstrumentationList;

  VarArgPowerPC64Helper(Function &F, MemorySanitizer &MS,
                        MemorySanitizerVisitor &MSV)
      : F(F), MS(MS), MSV(MSV) {}

  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {
    // For PowerPC, we need to deal with the alignment of stack arguments -
    // they are mostly aligned to 8 bytes, but vectors and i128 arrays are
    // aligned to 16 bytes, and byvals can be aligned to 8 or 16 bytes. For
    // that reason, we compute the current offset from the stack pointer
    // (which is always properly aligned) and the offset of the first vararg,
    // then subtract them.
    unsigned VAArgBase;
    Triple TargetTriple(F.getParent()->getTargetTriple());
    // The parameter save area starts 48 bytes from the frame pointer under
    // ABIv1 and 32 bytes under ABIv2. The ABI version is usually determined
    // by the target endianness, but in theory it could be overridden by a
    // function attribute.
    if (TargetTriple.getArch() == Triple::ppc64)
      VAArgBase = 48;
    else
      VAArgBase = 32;
    unsigned VAArgOffset = VAArgBase;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End;
         ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CB.getArgOperandNo(ArgIt);
      bool IsFixed = ArgNo < CB.getFunctionType()->getNumParams();
      bool IsByVal = CB.paramHasAttr(ArgNo, Attribute::ByVal);
      if (IsByVal) {
        assert(A->getType()->isPointerTy());
        Type *RealTy = CB.getParamByValType(ArgNo);
        uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
        MaybeAlign ArgAlign = CB.getParamAlign(ArgNo);
        if (!ArgAlign || *ArgAlign < Align(8))
          ArgAlign = Align(8);
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (!IsFixed) {
          Value *Base = getShadowPtrForVAArgument(
              RealTy, IRB, VAArgOffset - VAArgBase, ArgSize);
          if (Base) {
            Value *AShadowPtr, *AOriginPtr;
            std::tie(AShadowPtr, AOriginPtr) =
                MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(),
                                       kShadowTLSAlignment, /*isStore*/ false);

            IRB.CreateMemCpy(Base, kShadowTLSAlignment, AShadowPtr,
                             kShadowTLSAlignment, ArgSize);
          }
        }
        VAArgOffset += alignTo(ArgSize, 8);
      } else {
        Value *Base;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        uint64_t ArgAlign = 8;
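        // For illustration, the alignments computed by the chain below
        // (hypothetical argument types on ppc64le):
        //   i32             -> ArgAlign 8  (scalars keep the default)
        //   <4 x i32>       -> ArgAlign 16 (vectors are naturally aligned)
        //   [4 x i32]       -> ArgAlign 8  (element size 4, raised to the
        //                                   8-byte minimum below)
        //   [2 x ppc_fp128] -> ArgAlign 8  (long double arrays stay 8-aligned)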
        if (A->getType()->isArrayTy()) {
          // Arrays are aligned to element size, except for long double
          // arrays, which are aligned to 8 bytes.
          Type *ElementTy = A->getType()->getArrayElementType();
          if (!ElementTy->isPPC_FP128Ty())
            ArgAlign = DL.getTypeAllocSize(ElementTy);
        } else if (A->getType()->isVectorTy()) {
          // Vectors are naturally aligned.
          ArgAlign = DL.getTypeAllocSize(A->getType());
        }
        if (ArgAlign < 8)
          ArgAlign = 8;
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (DL.isBigEndian()) {
          // Adjust the shadow for arguments smaller than 8 bytes to match
          // the placement of the bits in a big-endian system.
          if (ArgSize < 8)
            VAArgOffset += (8 - ArgSize);
        }
        if (!IsFixed) {
          Base = getShadowPtrForVAArgument(A->getType(), IRB,
                                           VAArgOffset - VAArgBase, ArgSize);
          if (Base)
            IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
        }
        VAArgOffset += ArgSize;
        VAArgOffset = alignTo(VAArgOffset, 8);
      }
      if (IsFixed)
        VAArgBase = VAArgOffset;
    }

    Constant *TotalVAArgSize =
        ConstantInt::get(IRB.getInt64Ty(), VAArgOffset - VAArgBase);
    // We reuse VAArgOverflowSizeTLS to hold the total size of all varargs;
    // this avoids adding a separate VAArgSizeTLS member to MemorySanitizer.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }

  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    // Unpoison the whole __va_list_tag.
    // FIXME: magic ABI constants.
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.FnPrologueEnd);
    VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
    Value *CopySize =
        IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0), VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8),
                       CopySize);
    }
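    // On PPC64 ELF, va_list is a simple pointer ('char *') into the
    // parameter save area, so the loop below loads a single pointer at
    // offset 0 of the va_list tag and copies the backed-up vararg shadow
    // over the shadow of that area (a sketch of the assumption, not code
    // emitted by this pass):
    //
    //   typedef char *va_list;  // points at the next variadic argument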
    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      const Align Alignment = Align(8);
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy,
                       Alignment, CopySize);
    }
  }
};

/// SystemZ-specific implementation of VarArgHelper.
struct VarArgSystemZHelper : public VarArgHelper {
  static const unsigned SystemZGpOffset = 16;
  static const unsigned SystemZGpEndOffset = 56;
  static const unsigned SystemZFpOffset = 128;
  static const unsigned SystemZFpEndOffset = 160;
  static const unsigned SystemZMaxVrArgs = 8;
  static const unsigned SystemZRegSaveAreaSize = 160;
  static const unsigned SystemZOverflowOffset = 160;
  static const unsigned SystemZVAListTagSize = 32;
  static const unsigned SystemZOverflowArgAreaPtrOffset = 16;
  static const unsigned SystemZRegSaveAreaPtrOffset = 24;

  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgTLSOriginCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  enum class ArgKind {
    GeneralPurpose,
    FloatingPoint,
    Vector,
    Memory,
    Indirect,
  };

  enum class ShadowExtension { None, Zero, Sign };

  VarArgSystemZHelper(Function &F, MemorySanitizer &MS,
                      MemorySanitizerVisitor &MSV)
      : F(F), MS(MS), MSV(MSV) {}

  ArgKind classifyArgument(Type *T, bool IsSoftFloatABI) {
    // T is a SystemZABIInfo::classifyArgumentType() output, and there are
    // only a few possibilities of what it can be. In particular, enums,
    // single-element structs and large types have already been taken care of.

    // Some i128 and fp128 arguments are converted to pointers only in the
    // back end.
    if (T->isIntegerTy(128) || T->isFP128Ty())
      return ArgKind::Indirect;
    if (T->isFloatingPointTy())
      return IsSoftFloatABI ? ArgKind::GeneralPurpose : ArgKind::FloatingPoint;
    if (T->isIntegerTy() || T->isPointerTy())
      return ArgKind::GeneralPurpose;
    if (T->isVectorTy())
      return ArgKind::Vector;
    return ArgKind::Memory;
  }
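  // For illustration, some classifications produced by classifyArgument()
  // above (assuming a hard-float ABI):
  //   i64       -> ArgKind::GeneralPurpose
  //   double    -> ArgKind::FloatingPoint (GeneralPurpose under soft-float)
  //   i128      -> ArgKind::Indirect (the back end passes it as a pointer)
  //   fp128     -> ArgKind::Indirect
  //   <2 x i64> -> ArgKind::Vector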
  ShadowExtension getShadowExtension(const CallBase &CB, unsigned ArgNo) {
    // ABI says: "One of the simple integer types no more than 64 bits wide.
    // ... If such an argument is shorter than 64 bits, replace it by a full
    // 64-bit integer representing the same number, using sign or zero
    // extension". Shadow for an integer argument has the same type as the
    // argument itself, so it can be sign or zero extended as well.
    bool ZExt = CB.paramHasAttr(ArgNo, Attribute::ZExt);
    bool SExt = CB.paramHasAttr(ArgNo, Attribute::SExt);
    if (ZExt) {
      assert(!SExt);
      return ShadowExtension::Zero;
    }
    if (SExt) {
      assert(!ZExt);
      return ShadowExtension::Sign;
    }
    return ShadowExtension::None;
  }

  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {
    bool IsSoftFloatABI = CB.getCalledFunction()
                              ->getFnAttribute("use-soft-float")
                              .getValueAsBool();
    unsigned GpOffset = SystemZGpOffset;
    unsigned FpOffset = SystemZFpOffset;
    unsigned VrIndex = 0;
    unsigned OverflowOffset = SystemZOverflowOffset;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (auto ArgIt = CB.arg_begin(), End = CB.arg_end(); ArgIt != End;
         ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CB.getArgOperandNo(ArgIt);
      bool IsFixed = ArgNo < CB.getFunctionType()->getNumParams();
      // SystemZABIInfo does not produce ByVal parameters.
      assert(!CB.paramHasAttr(ArgNo, Attribute::ByVal));
      Type *T = A->getType();
      ArgKind AK = classifyArgument(T, IsSoftFloatABI);
      if (AK == ArgKind::Indirect) {
        T = PointerType::get(T, 0);
        AK = ArgKind::GeneralPurpose;
      }
      if (AK == ArgKind::GeneralPurpose && GpOffset >= SystemZGpEndOffset)
        AK = ArgKind::Memory;
      if (AK == ArgKind::FloatingPoint && FpOffset >= SystemZFpEndOffset)
        AK = ArgKind::Memory;
      if (AK == ArgKind::Vector && (VrIndex >= SystemZMaxVrArgs || !IsFixed))
        AK = ArgKind::Memory;
      Value *ShadowBase = nullptr;
      Value *OriginBase = nullptr;
      ShadowExtension SE = ShadowExtension::None;
      switch (AK) {
      case ArgKind::GeneralPurpose: {
        // Always keep track of GpOffset, but store shadow only for varargs.
        uint64_t ArgSize = 8;
        if (GpOffset + ArgSize <= kParamTLSSize) {
          if (!IsFixed) {
            SE = getShadowExtension(CB, ArgNo);
            uint64_t GapSize = 0;
            if (SE == ShadowExtension::None) {
              uint64_t ArgAllocSize = DL.getTypeAllocSize(T);
              assert(ArgAllocSize <= ArgSize);
              GapSize = ArgSize - ArgAllocSize;
            }
            ShadowBase = getShadowAddrForVAArgument(IRB, GpOffset + GapSize);
            if (MS.TrackOrigins)
              OriginBase = getOriginPtrForVAArgument(IRB, GpOffset + GapSize);
          }
          GpOffset += ArgSize;
        } else {
          GpOffset = kParamTLSSize;
        }
        break;
      }
      case ArgKind::FloatingPoint: {
        // Always keep track of FpOffset, but store shadow only for varargs.
        uint64_t ArgSize = 8;
        if (FpOffset + ArgSize <= kParamTLSSize) {
          if (!IsFixed) {
            // The PoP (z/Architecture Principles of Operation) says: "A short
            // floating-point datum requires only the left-most 32 bit
            // positions of a floating-point register". Therefore, in contrast
            // to ArgKind::GeneralPurpose and ArgKind::Memory, don't extend
            // shadow and don't mind the gap.
            ShadowBase = getShadowAddrForVAArgument(IRB, FpOffset);
            if (MS.TrackOrigins)
              OriginBase = getOriginPtrForVAArgument(IRB, FpOffset);
          }
          FpOffset += ArgSize;
        } else {
          FpOffset = kParamTLSSize;
        }
        break;
      }
      case ArgKind::Vector: {
        // Keep track of VrIndex. No need to store shadow, since vector
        // varargs go through ArgKind::Memory.
        assert(IsFixed);
        VrIndex++;
        break;
      }
      case ArgKind::Memory: {
        // Keep track of OverflowOffset and store shadow only for varargs.
        // Ignore fixed args, since we need to copy only the vararg portion
        // of the overflow area shadow.
        if (!IsFixed) {
          uint64_t ArgAllocSize = DL.getTypeAllocSize(T);
          uint64_t ArgSize = alignTo(ArgAllocSize, 8);
          if (OverflowOffset + ArgSize <= kParamTLSSize) {
            SE = getShadowExtension(CB, ArgNo);
            uint64_t GapSize =
                SE == ShadowExtension::None ? ArgSize - ArgAllocSize : 0;
            ShadowBase =
                getShadowAddrForVAArgument(IRB, OverflowOffset + GapSize);
            if (MS.TrackOrigins)
              OriginBase =
                  getOriginPtrForVAArgument(IRB, OverflowOffset + GapSize);
            OverflowOffset += ArgSize;
          } else {
            OverflowOffset = kParamTLSSize;
          }
        }
        break;
      }
      case ArgKind::Indirect:
        llvm_unreachable("Indirect must be converted to GeneralPurpose");
      }
      if (ShadowBase == nullptr)
        continue;
      Value *Shadow = MSV.getShadow(A);
      if (SE != ShadowExtension::None)
        Shadow = MSV.CreateShadowCast(IRB, Shadow, IRB.getInt64Ty(),
                                      /*Signed*/ SE == ShadowExtension::Sign);
      ShadowBase = IRB.CreateIntToPtr(
          ShadowBase, PointerType::get(Shadow->getType(), 0), "_msarg_va_s");
      IRB.CreateStore(Shadow, ShadowBase);
      if (MS.TrackOrigins) {
        Value *Origin = MSV.getOrigin(A);
        unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
        MSV.paintOrigin(IRB, Origin, OriginBase, StoreSize,
                        kMinOriginAlignment);
      }
    }
    Constant *OverflowSize = ConstantInt::get(
        IRB.getInt64Ty(), OverflowOffset - SystemZOverflowOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }

  Value *getShadowAddrForVAArgument(IRBuilder<> &IRB, unsigned ArgOffset) {
    // Unlike the other helpers, the overflow check against kParamTLSSize is
    // done by the callers in visitCallBase() above.
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    return IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
  }

  Value *getOriginPtrForVAArgument(IRBuilder<> &IRB, int ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.VAArgOriginTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                              "_msarg_va_o");
  }

  void unpoisonVAListTagForInst(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    const Align Alignment = Align(8);
    std::tie(ShadowPtr, OriginPtr) =
        MSV.getShadowOriginPtr(VAListTag, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     SystemZVAListTagSize, Alignment, false);
  }

  void visitVAStartInst(VAStartInst &I) override {
    VAStartInstrumentationList.push_back(&I);
    unpoisonVAListTagForInst(I);
  }

  void visitVACopyInst(VACopyInst &I) override { unpoisonVAListTagForInst(I); }
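  // The helpers below assume the s390x va_list layout (a reference sketch
  // matching the SystemZ*Offset constants above; not code emitted by this
  // pass):
  //
  //   struct __va_list_tag {
  //     long __gpr;                // offset 0:  # of GPR arguments consumed
  //     long __fpr;                // offset 8:  # of FPR arguments consumed
  //     void *__overflow_arg_area; // offset 16: next stack-passed argument
  //     void *__reg_save_area;     // offset 24: the 160-byte register save
  //   };                           //            area (SystemZRegSaveAreaSize)
  //                                // 32 bytes total (SystemZVAListTagSize)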
  void copyRegSaveArea(IRBuilder<> &IRB, Value *VAListTag) {
    Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
    Value *RegSaveAreaPtrPtr = IRB.CreateIntToPtr(
        IRB.CreateAdd(
            IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
            ConstantInt::get(MS.IntptrTy, SystemZRegSaveAreaPtrOffset)),
        PointerType::get(RegSaveAreaPtrTy, 0));
    Value *RegSaveAreaPtr = IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
    Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
    const Align Alignment = Align(8);
    std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
        MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ true);
    // TODO(iii): copy only fragments filled by visitCallBase()
    IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
                     SystemZRegSaveAreaSize);
    if (MS.TrackOrigins)
      IRB.CreateMemCpy(RegSaveAreaOriginPtr, Alignment, VAArgTLSOriginCopy,
                       Alignment, SystemZRegSaveAreaSize);
  }

  void copyOverflowArea(IRBuilder<> &IRB, Value *VAListTag) {
    Type *OverflowArgAreaPtrTy = Type::getInt64PtrTy(*MS.C);
    Value *OverflowArgAreaPtrPtr = IRB.CreateIntToPtr(
        IRB.CreateAdd(
            IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
            ConstantInt::get(MS.IntptrTy, SystemZOverflowArgAreaPtrOffset)),
        PointerType::get(OverflowArgAreaPtrTy, 0));
    Value *OverflowArgAreaPtr =
        IRB.CreateLoad(OverflowArgAreaPtrTy, OverflowArgAreaPtrPtr);
    Value *OverflowArgAreaShadowPtr, *OverflowArgAreaOriginPtr;
    const Align Alignment = Align(8);
    std::tie(OverflowArgAreaShadowPtr, OverflowArgAreaOriginPtr) =
        MSV.getShadowOriginPtr(OverflowArgAreaPtr, IRB, IRB.getInt8Ty(),
                               Alignment, /*isStore*/ true);
    Value *SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSCopy,
                                           SystemZOverflowOffset);
    IRB.CreateMemCpy(OverflowArgAreaShadowPtr, Alignment, SrcPtr, Alignment,
                     VAArgOverflowSize);
    if (MS.TrackOrigins) {
      SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSOriginCopy,
                                      SystemZOverflowOffset);
      IRB.CreateMemCpy(OverflowArgAreaOriginPtr, Alignment, SrcPtr, Alignment,
                       VAArgOverflowSize);
    }
  }

  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.FnPrologueEnd);
      VAArgOverflowSize =
          IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, SystemZOverflowOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, Align(8), MS.VAArgTLS, Align(8),
                       CopySize);
      if (MS.TrackOrigins) {
        VAArgTLSOriginCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
        IRB.CreateMemCpy(VAArgTLSOriginCopy, Align(8), MS.VAArgOriginTLS,
                         Align(8), CopySize);
      }
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t VaStartNo = 0, VaStartNum = VAStartInstrumentationList.size();
         VaStartNo < VaStartNum; VaStartNo++) {
      CallInst *OrigInst = VAStartInstrumentationList[VaStartNo];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      copyRegSaveArea(IRB, VAListTag);
      copyOverflowArea(IRB, VAListTag);
    }
  }
};

/// A no-op implementation of VarArgHelper.
struct VarArgNoOpHelper : public VarArgHelper {
  VarArgNoOpHelper(Function &F, MemorySanitizer &MS,
                   MemorySanitizerVisitor &MSV) {}

  void visitCallBase(CallBase &CB, IRBuilder<> &IRB) override {}

  void visitVAStartInst(VAStartInst &I) override {}

  void visitVACopyInst(VACopyInst &I) override {}

  void finalizeInstrumentation() override {}
};

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor) {
  // VarArg handling is implemented only for the platforms checked below. On
  // any other platform the no-op helper is used, so vararg shadow is not
  // propagated and false positives are possible.
  Triple TargetTriple(Func.getParent()->getTargetTriple());
  if (TargetTriple.getArch() == Triple::x86_64)
    return new VarArgAMD64Helper(Func, Msan, Visitor);
  else if (TargetTriple.isMIPS64())
    return new VarArgMIPS64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::aarch64)
    return new VarArgAArch64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::ppc64 ||
           TargetTriple.getArch() == Triple::ppc64le)
    return new VarArgPowerPC64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::systemz)
    return new VarArgSystemZHelper(Func, Msan, Visitor);
  else
    return new VarArgNoOpHelper(Func, Msan, Visitor);
}

bool MemorySanitizer::sanitizeFunction(Function &F, TargetLibraryInfo &TLI) {
  if (!CompileKernel && F.getName() == kMsanModuleCtorName)
    return false;

  if (F.hasFnAttribute(Attribute::DisableSanitizerInstrumentation))
    return false;

  MemorySanitizerVisitor Visitor(F, *this, TLI);

  // Clear out the function's memory attributes: the instrumentation inserts
  // loads and stores of shadow and origin memory, so attributes like
  // readonly or readnone would no longer hold and could mislead later
  // optimizations.
  AttrBuilder B;
  B.addAttribute(Attribute::ReadOnly)
      .addAttribute(Attribute::ReadNone)
      .addAttribute(Attribute::WriteOnly)
      .addAttribute(Attribute::ArgMemOnly)
      .addAttribute(Attribute::Speculatable);
  F.removeFnAttrs(B);

  return Visitor.runOnFunction();
}