//===-- asan_shadow_setup.cpp ---------------------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file is a part of AddressSanitizer, an address sanity checker.
//
// Set up the shadow memory.
//===----------------------------------------------------------------------===//
#include "sanitizer_common/sanitizer_platform.h"
// asan_fuchsia.cpp has their own InitializeShadowMemory implementation.
#if !SANITIZER_FUCHSIA
# include "asan_internal.h"
# include "asan_mapping.h"
namespace __asan {
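
// Either protects the shadow gap or, when protect_shadow_gap=0, maps a shadow
// for the gap itself, since the application may be using addresses inside an
// unprotected gap.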
static void ProtectGap(uptr addr, uptr size) {
  if (!flags()->protect_shadow_gap) {
    // The shadow gap is unprotected, so there is a chance that someone is
    // actually using this memory, which means it needs a shadow.
    uptr GapShadowBeg = RoundDownTo(MEM_TO_SHADOW(addr), GetPageSizeCached());
    uptr GapShadowEnd =
        RoundUpTo(MEM_TO_SHADOW(addr + size), GetPageSizeCached()) - 1;
    if (Verbosity())
      Printf(
          "protect_shadow_gap=0:"
          " not protecting shadow gap, allocating gap's shadow\n"
          "|| `[%p, %p]` || ShadowGap's shadow ||\n",
          (void*)GapShadowBeg, (void*)GapShadowEnd);
    ReserveShadowMemoryRange(GapShadowBeg, GapShadowEnd,
                             "unprotected gap shadow");
    return;
  }
  __sanitizer::ProtectGap(addr, size, kZeroBaseShadowStart,
                          kZeroBaseMaxShadowStart);
}
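
// The ELF_ET_DYN_BASE change in Linux 4.12 moved where PIE binaries are
// loaded, which can collide with ASan's shadow layout; on affected Linux
// targets, point the user at the known issue and its workarounds.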
static void MaybeReportLinuxPIEBug() {
#if SANITIZER_LINUX && \
    (defined(__x86_64__) || defined(__aarch64__) || SANITIZER_RISCV64)
  Report("This might be related to ELF_ET_DYN_BASE change in Linux 4.12.\n");
  Report(
      "See https://github.com/google/sanitizers/issues/856 for possible "
      "workarounds.\n");
#endif
}
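
// Reserves the shadow regions (low/high, plus mid on layouts that have one),
// protects the shadow gap(s), and picks a dynamic shadow base when the
// platform does not use a fixed mapping. Dies with a report if no compatible
// layout can be found.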
void InitializeShadowMemory() {
  // Set the shadow memory address to uninitialized.
  __asan_shadow_memory_dynamic_address = kDefaultShadowSentinel;

  uptr shadow_start = kLowShadowBeg;
  // Detect whether a dynamic shadow address must be used and find an
  // available location when necessary. When a dynamic address is used, the
  // macro |kLowShadowBeg| expands to |__asan_shadow_memory_dynamic_address|,
  // which is |kDefaultShadowSentinel|.
  bool full_shadow_is_available = false;
  if (shadow_start == kDefaultShadowSentinel) {
    shadow_start = FindDynamicShadowStart();
    if (SANITIZER_LINUX) full_shadow_is_available = true;
  }
  // Update the shadow memory address (potentially) used by instrumentation.
  __asan_shadow_memory_dynamic_address = shadow_start;
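
  // Include at least one extra mmap granule to the left of the low shadow;
  // the "low shadow" reservations below map that page together with the
  // shadow itself.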
  if (kLowShadowBeg) shadow_start -= GetMmapGranularity();

  if (!full_shadow_is_available)
    full_shadow_is_available =
        MemoryRangeIsAvailable(shadow_start, kHighShadowEnd);
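
  // On Linux/x86-64 without a fixed mapping, fall back to a split layout:
  // the shadow is mapped around a mid-memory region instead of as one
  // contiguous range.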
#if SANITIZER_LINUX && defined(__x86_64__) && defined(_LP64) && \
    !ASAN_FIXED_MAPPING
  if (!full_shadow_is_available) {
    kMidMemBeg = kLowMemEnd < 0x3000000000ULL ? 0x3000000000ULL : 0;
    kMidMemEnd = kLowMemEnd < 0x3000000000ULL ? 0x4fffffffffULL : 0;
  }
#endif

  if (Verbosity()) PrintAddressSpaceLayout();
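
  // Reserve the shadow for whichever layout fits: the full low+high shadow,
  // the split layout around mid memory, or die with a report if neither is
  // possible.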
  if (full_shadow_is_available) {
    // mmap the low shadow plus at least one page at the left.
    if (kLowShadowBeg)
      ReserveShadowMemoryRange(shadow_start, kLowShadowEnd, "low shadow");
    // mmap the high shadow.
    ReserveShadowMemoryRange(kHighShadowBeg, kHighShadowEnd, "high shadow");
    // protect the gap.
    ProtectGap(kShadowGapBeg, kShadowGapEnd - kShadowGapBeg + 1);
    CHECK_EQ(kShadowGapEnd, kHighShadowBeg - 1);
  } else if (kMidMemBeg &&
             MemoryRangeIsAvailable(shadow_start, kMidMemBeg - 1) &&
             MemoryRangeIsAvailable(kMidMemEnd + 1, kHighShadowEnd)) {
    CHECK(kLowShadowBeg != kLowShadowEnd);
    // mmap the low shadow plus at least one page at the left.
    ReserveShadowMemoryRange(shadow_start, kLowShadowEnd, "low shadow");
    // mmap the mid shadow.
    ReserveShadowMemoryRange(kMidShadowBeg, kMidShadowEnd, "mid shadow");
    // mmap the high shadow.
    ReserveShadowMemoryRange(kHighShadowBeg, kHighShadowEnd, "high shadow");
    // protect the gaps.
    ProtectGap(kShadowGapBeg, kShadowGapEnd - kShadowGapBeg + 1);
    ProtectGap(kShadowGap2Beg, kShadowGap2End - kShadowGap2Beg + 1);
    ProtectGap(kShadowGap3Beg, kShadowGap3End - kShadowGap3Beg + 1);
  } else {
    // ASan's mappings can usually shadow the entire address space, even with
    // maximum ASLR entropy. However:
    // - On 32-bit systems, the maximum ASLR entropy (currently up to 16 bits
    //   == 256MB) is a significant chunk of the address space; reclaiming it
    //   by disabling ASLR might allow large binaries to run.
    // - On 64-bit systems, some settings (e.g., for Linux, unlimited stack
    //   size plus 31+ bits of entropy) can lead to an incompatible layout.
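    // If re-executing without ASLR succeeds, the process image is replaced
    // and the code below never runs; otherwise fall through and report the
    // incompatible layout.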
    TryReExecWithoutASLR();
    Report(
        "Shadow memory range interleaves with an existing memory mapping. "
        "ASan cannot proceed correctly. ABORTING.\n");
    Report("ASan shadow was supposed to be located in the [%p-%p] range.\n",
           (void*)shadow_start, (void*)kHighShadowEnd);
    MaybeReportLinuxPIEBug();
    DumpProcessMap();
    Die();
  }
}

}  // namespace __asan
#endif // !SANITIZER_FUCHSIA