/*
* Copyright (C) 2018-2025 Intel Corporation
*
* SPDX-License-Identifier: MIT
*
*/

#pragma once
#include "shared/source/helpers/constants.h"

#include <cstdint>
#include <mutex>
#include <vector>

namespace NEO {
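
// A contiguous range of heap space: its start address and size in bytes.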
struct HeapChunk {
    HeapChunk(uint64_t ptr, size_t size) : ptr(ptr), size(size) {}

    uint64_t ptr;
    size_t size;
};

bool operator<(const HeapChunk &hc1, const HeapChunk &hc2);
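
// Carves allocations out of a single contiguous address range without touching
// the underlying memory. Requests larger than sizeThreshold are cut from the
// bottom of the range (pLeftBound moves up), smaller requests from the top
// (pRightBound moves down), and freed ranges are recycled through the
// freedChunksBig / freedChunksSmall pools.
//
// Typical usage (a sketch; gpuBase and heapSize are placeholder values, not
// part of this header):
//
//   NEO::HeapAllocator allocator(gpuBase, heapSize);
//   size_t allocationSize = 2 * MemoryConstants::pageSize;
//   uint64_t gpuAddress = allocator.allocate(allocationSize);
//   // ... use the range [gpuAddress, gpuAddress + allocationSize) ...
//   allocator.free(gpuAddress, allocationSize);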
class HeapAllocator {
  public:
    HeapAllocator(uint64_t address, uint64_t size) : HeapAllocator(address, size, MemoryConstants::pageSize) {
    }

    HeapAllocator(uint64_t address, uint64_t size, size_t allocationAlignment) : HeapAllocator(address, size, allocationAlignment, 4 * MemoryConstants::megaByte) {
    }

    HeapAllocator(uint64_t address, uint64_t size, size_t allocationAlignment, size_t threshold) : baseAddress(address), size(size), availableSize(size), allocationAlignment(allocationAlignment), sizeThreshold(threshold) {
        pLeftBound = address;
        pRightBound = address + size;
        freedChunksBig.reserve(10);
        freedChunksSmall.reserve(50);
    }

    MOCKABLE_VIRTUAL ~HeapAllocator() = default;
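
    // sizeToAllocate is an in/out parameter: the allocator may round it up and
    // report the size that was actually reserved. Passing an alignment of 0 is
    // taken to mean "use allocationAlignment". Both behaviors are implemented
    // out of line, so treat this description as a summary, not a contract.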
    uint64_t allocate(size_t &sizeToAllocate) {
        return allocateWithCustomAlignment(sizeToAllocate, 0u);
    }

    uint64_t allocateWithCustomAlignment(size_t &sizeToAllocate, size_t alignment);

    MOCKABLE_VIRTUAL void free(uint64_t ptr, size_t size);

    uint64_t getLeftSize() const {
        return availableSize;
    }

    uint64_t getUsedSize() const {
        return size - availableSize;
    }

    double getUsage() const;

    uint64_t getBaseAddress() const {
        return this->baseAddress;
    }

  protected:
    const uint64_t baseAddress;
    const uint64_t size;
    uint64_t availableSize;
    uint64_t pLeftBound;
    uint64_t pRightBound;
    size_t allocationAlignment;
    const size_t sizeThreshold;

    std::vector<HeapChunk> freedChunksSmall;
    std::vector<HeapChunk> freedChunksBig;
    std::mutex mtx;
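
    // Tries to satisfy a request from the given freed-chunk pool. sizeOfFreedChunk
    // is an out parameter reporting how much of a pooled chunk was handed back;
    // the selection policy itself is defined out of line.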
    uint64_t getFromFreedChunks(size_t size, std::vector<HeapChunk> &freedChunks, size_t &sizeOfFreedChunk, size_t requiredAlignment);
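
    // Returns a freed range to the given pool, coalescing it with an adjacent
    // pooled chunk when one exists so the pool does not grow with fragments.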
    void storeInFreedChunks(uint64_t ptr, size_t size, std::vector<HeapChunk> &freedChunks) {
        for (auto &freedChunk : freedChunks) {
            if (freedChunk.ptr == ptr + size) {
                freedChunk.ptr = ptr;
                freedChunk.size += size;
                return;
            }
            if (freedChunk.ptr + freedChunk.size == ptr) {
                freedChunk.size += size;
                return;
            }
        }

        freedChunks.emplace_back(ptr, size);
    }
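
    // If the last chunk in the small pool starts exactly at pRightBound, hand it
    // back to the untouched region by moving the bound up instead of keeping it
    // pooled.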
    void mergeLastFreedSmall() {
        size_t maxSizeOfSmallChunks = freedChunksSmall.size();

        if (maxSizeOfSmallChunks > 0) {
            auto ptr = freedChunksSmall[maxSizeOfSmallChunks - 1].ptr;
            size_t chunkSize = freedChunksSmall[maxSizeOfSmallChunks - 1].size;
            if (ptr == pRightBound) {
                pRightBound = ptr + chunkSize;
                freedChunksSmall.pop_back();
            }
        }
    }
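
    // Mirror of mergeLastFreedSmall for the big pool: a chunk ending exactly at
    // pLeftBound is given back by moving the bound down to its start.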
    void mergeLastFreedBig() {
        size_t maxSizeOfBigChunks = freedChunksBig.size();

        if (maxSizeOfBigChunks > 0) {
            auto ptr = freedChunksBig[maxSizeOfBigChunks - 1].ptr;
            size_t chunkSize = freedChunksBig[maxSizeOfBigChunks - 1].size;
            if (ptr == pLeftBound - chunkSize) {
                pLeftBound = ptr;
                freedChunksBig.pop_back();
            }
        }
    }
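
    // Presumably coalesces neighbouring entries inside the freed-chunk pools;
    // the implementation is out of line, so this is an assumption from the name.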
    void defragment();
};
} // namespace NEO