//===-- secondary.cc --------------------------------------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//

#include "secondary.h"

#include "string_utils.h"

namespace scudo {

// As with the Primary, the size passed to this function includes any desired
// alignment, so that the frontend can align the user allocation. The hint
// parameter allows us to unmap spurious memory when dealing with larger
// (greater than a page) alignments on 32-bit platforms.
// Due to the sparsity of address space available on those platforms, requesting
// an allocation from the Secondary with a large alignment would end up wasting
// VA space (even though we are not committing the whole thing), hence the need
// to trim off some of the reserved space.
// For allocations requested with an alignment greater than or equal to a page,
// the committed memory will amount to something close to Size - AlignmentHint
// (pending rounding and headers).
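// For example (illustrative, assuming 4KB pages): a request with
// Size = 1MB + 64KB and AlignmentHint = 64KB reserves roughly 1MB + 64KB of
// address space plus a header page and two guard pages, but only commits
// about 1MB plus the header page; the remainder is trimmed on 32-bit, or
// left reserved but uncommitted on 64-bit.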
void *MapAllocator::allocate(uptr Size, uptr AlignmentHint, uptr *BlockEnd) {
  DCHECK_GT(Size, AlignmentHint);
  const uptr PageSize = getPageSizeCached();
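  // Reserve enough address space for the block and its header, plus two
  // extra pages that will bracket the committed region as guard pages.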
  const uptr MapSize =
      roundUpTo(Size + LargeBlock::getHeaderSize(), PageSize) + 2 * PageSize;
  MapPlatformData Data = {};
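  // The whole reservation starts out inaccessible (MAP_NOACCESS); the usable
  // pages are committed further down. MAP_ALLOWNOMEM makes map() return
  // nullptr on failure instead of terminating the process.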
  uptr MapBase =
      reinterpret_cast<uptr>(map(nullptr, MapSize, "scudo:secondary",
                                 MAP_NOACCESS | MAP_ALLOWNOMEM, &Data));
  if (!MapBase)
    return nullptr;
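  // Skip the first page of the mapping: it remains inaccessible and acts as
  // the leading guard page.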
  uptr CommitBase = MapBase + PageSize;
  uptr MapEnd = MapBase + MapSize;
  // In the unlikely event of alignments larger than a page, adjust the amount
  // of memory we want to commit, and trim the extra memory.
  if (AlignmentHint >= PageSize) {
    // For alignments greater than or equal to a page, the user pointer (i.e.,
    // the pointer returned by the C or C++ allocation APIs) ends up on a page
    // boundary, and our headers will live in the preceding page.
    CommitBase = roundUpTo(MapBase + PageSize + 1, AlignmentHint) - PageSize;
    const uptr NewMapBase = CommitBase - PageSize;
    DCHECK_GE(NewMapBase, MapBase);
    // We only trim the extra memory on 32-bit platforms: 64-bit platforms
    // are less constrained memory-wise, and that saves us two syscalls.
    if (SCUDO_WORDSIZE == 32U && NewMapBase != MapBase) {
      unmap(reinterpret_cast<void *>(MapBase), NewMapBase - MapBase, 0, &Data);
      MapBase = NewMapBase;
    }
    const uptr NewMapEnd = CommitBase + PageSize +
                           roundUpTo((Size - AlignmentHint), PageSize) +
                           PageSize;
    DCHECK_LE(NewMapEnd, MapEnd);
    if (SCUDO_WORDSIZE == 32U && NewMapEnd != MapEnd) {
      unmap(reinterpret_cast<void *>(NewMapEnd), MapEnd - NewMapEnd, 0, &Data);
      MapEnd = NewMapEnd;
    }
  }
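  // Everything up to the last page of the reservation gets committed; that
  // last page stays inaccessible and acts as the trailing guard page.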
  const uptr CommitSize = MapEnd - PageSize - CommitBase;
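  // Commit the usable region. Note that MAP_ALLOWNOMEM is not passed here,
  // so a failure to commit is fatal rather than reported to the caller.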
  const uptr Ptr =
      reinterpret_cast<uptr>(map(reinterpret_cast<void *>(CommitBase),
                                 CommitSize, "scudo:secondary", 0, &Data));
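  // The header is placed at the very beginning of the committed region; the
  // pointer handed back to the caller points right past it.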
  LargeBlock::Header *H = reinterpret_cast<LargeBlock::Header *>(Ptr);
  H->MapBase = MapBase;
  H->MapSize = MapEnd - MapBase;
  H->BlockEnd = CommitBase + CommitSize;
  H->Data = Data;
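  // Register the block at the tail of the in-use doubly-linked list and
  // update the statistics, all under the lock.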
  {
    ScopedLock L(Mutex);
    if (!Tail) {
      Tail = H;
    } else {
      Tail->Next = H;
      H->Prev = Tail;
      Tail = H;
    }
    AllocatedBytes += CommitSize;
    if (LargestSize < CommitSize)
      LargestSize = CommitSize;
    NumberOfAllocs++;
    Stats.add(StatAllocated, CommitSize);
    Stats.add(StatMapped, H->MapSize);
  }
  if (BlockEnd)
    *BlockEnd = CommitBase + CommitSize;
  return reinterpret_cast<void *>(Ptr + LargeBlock::getHeaderSize());
}

void MapAllocator::deallocate(void *Ptr) {
  LargeBlock::Header *H = LargeBlock::getHeader(Ptr);
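  // Unlink the block from the in-use list and update the statistics under
  // the lock.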
  {
    ScopedLock L(Mutex);
    LargeBlock::Header *Prev = H->Prev;
    LargeBlock::Header *Next = H->Next;
    if (Prev) {
      CHECK_EQ(Prev->Next, H);
      Prev->Next = Next;
    }
    if (Next) {
      CHECK_EQ(Next->Prev, H);
      Next->Prev = Prev;
    }
    if (Tail == H) {
      CHECK(!Next);
      Tail = Prev;
    } else {
      CHECK(Next);
    }
    const uptr CommitSize = H->BlockEnd - reinterpret_cast<uptr>(H);
    FreedBytes += CommitSize;
    NumberOfFrees++;
    Stats.sub(StatAllocated, CommitSize);
    Stats.sub(StatMapped, H->MapSize);
  }
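  // Copy the mapping information out of the header before unmapping: the
  // header itself lives within the region that is about to be released.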
  void *Addr = reinterpret_cast<void *>(H->MapBase);
  const uptr Size = H->MapSize;
  MapPlatformData Data = H->Data;
  unmap(Addr, Size, UNMAP_ALL, &Data);
}

void MapAllocator::printStats() const {
  Printf("Stats: MapAllocator: allocated %zd times (%zdK), freed %zd times "
         "(%zdK), remains %zd (%zdK) max %zdM\n",
         NumberOfAllocs, AllocatedBytes >> 10, NumberOfFrees, FreedBytes >> 10,
         NumberOfAllocs - NumberOfFrees, (AllocatedBytes - FreedBytes) >> 10,
         LargestSize >> 20);
}

} // namespace scudo