/*
 * Copyright (C) 2024-2025 Apple Inc. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS'' AND ANY
 * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
 * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 * DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS BE LIABLE FOR ANY
 * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
 * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
 * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#include "config.h"
#include <wtf/SequesteredImmortalHeap.h>
#include <wtf/Compiler.h>
#include <wtf/NeverDestroyed.h>
#if USE(PROTECTED_JIT)
#include <bmalloc/pas_scavenger.h>
namespace WTF {
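// Shared singleton; NeverDestroyed keeps it alive for the process lifetime.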
SequesteredImmortalHeap& SequesteredImmortalHeap::instance()
{
    // FIXME: this storage is not contained within the sequestered region.
    static NeverDestroyed<SequesteredImmortalHeap> instance;
    return instance.get();
}
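
// Drains this queue's pending granules: takes an exclusive copy of the list,
// then hands each granule back to the heap so its pages can be decommitted.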
void ConcurrentDecommitQueue::decommit()
{
    auto lst = acquireExclusiveCopyOfGranuleList();
    auto* curr = lst.head();
    if (!curr)
        return;

    // FIXME: this should go to a page-provider rather than the SIH.
    auto& sih = SequesteredImmortalHeap::instance();

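    // These counters are consumed only by the verbose logging below;
    // UNUSED_VARIABLE avoids warnings in configurations where that
    // logging compiles away.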
    size_t decommitPageCount { 0 };
    size_t decommitGranuleCount { 0 };
    UNUSED_VARIABLE(decommitPageCount);
    UNUSED_VARIABLE(decommitGranuleCount);
    do {
        auto* next = curr->next();
        auto pages = sih.decommitGranule(curr);
        dataLogLnIf(verbose,
            "ConcurrentDecommitQueue: decommitted granule at (",
            RawPointer(curr), ") (", pages, " pages)");
        decommitPageCount += pages;
        decommitGranuleCount++;
        curr = next;
    } while (curr);

    dataLogLnIf(verbose, "ConcurrentDecommitQueue: decommitted ",
        decommitGranuleCount, " granules (", decommitPageCount, " pages)");
}
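
// Registers the static scavenge() hook as foreign work with the libpas
// scavenger, so queued decommits run on the scavenger thread alongside
// bmalloc's own scavenging; the RELEASE_ASSERT ensures a failed
// installation is not silently ignored.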
void SequesteredImmortalHeap::installScavenger()
{
    RELEASE_ASSERT(pas_scavenger_try_install_foreign_work_callback(scavenge, 11, nullptr));
}
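
// Scavenger callback: sweeps every live allocator slot and drains its
// decommit queue. Always returns false; the bool return presumably tells
// the scavenger whether further foreign work remains.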
bool SequesteredImmortalHeap::scavengeImpl(void* /* userdata */)
{
    dataLogLnIf(verbose, "SequesteredImmortalHeap: scavenging");
    {
        Locker listLocker { m_scavengerLock };
        auto bound = m_nextFreeIndex;
        ASSERT(bound <= m_allocatorSlots.size());
        for (size_t i = 0; i < bound; i++) {
            // FIXME: Refactor the SeqImmortalHeap <-> SeqArenaAllocator
            // relationship so that we don't have to assume data layouts here.
            auto& queue = *reinterpret_cast<ConcurrentDecommitQueue*>(&m_allocatorSlots[i]);
            queue.decommit();
        }
    }
    return false;
}
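
// Expands the arena with a freshly mapped granule of at least minSize bytes
// and points the bump-allocation window at its payload.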
GranuleHeader* SequesteredImmortalAllocator::addGranule(size_t minSize)
{
    size_t granuleSize = std::max(minSize, minGranuleSize);
    using AllocationFailureMode = SequesteredImmortalHeap::AllocationFailureMode;
    GranuleHeader* granule = SequesteredImmortalHeap::instance().mapGranule<AllocationFailureMode::Assert>(granuleSize);

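    // The bump cursor starts sizeof(GranuleHeader) bytes into the granule,
    // which must cover at least the minimum head alignment.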
    static_assert(sizeof(GranuleHeader) >= minHeadAlignment);
    m_allocHead = reinterpret_cast<uintptr_t>(granule) + sizeof(GranuleHeader);
    m_allocBound = reinterpret_cast<uintptr_t>(granule) + granuleSize;
    m_granules.push(granule);

    dataLogLnIf(verbose,
        "SequesteredImmortalAllocator at ", RawPointer(this),
        ": expanded: granule was (", RawPointer(m_granules.head()->next()),
        "), now (", RawPointer(m_granules.head()),
        "); allocHead (",
        RawPointer(reinterpret_cast<void*>(m_allocHead)),
        "), allocBound (",
        RawPointer(reinterpret_cast<void*>(m_allocBound)),
        ")");
    return granule;
}

} // namespace WTF

#endif // USE(PROTECTED_JIT)