/*
* Copyright (C) 2019 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include "hidden_api_jni.h"
#include "hidden_api.h"
#if defined(__linux__)
#include <dlfcn.h>
#include <link.h>
#include <mutex>
#include "android-base/logging.h"
#include "android-base/thread_annotations.h"
#include "unwindstack/Regs.h"
#include "unwindstack/RegsGetLocal.h"
#include "unwindstack/Memory.h"
#include "unwindstack/Unwinder.h"
#include "base/bit_utils.h"
#include "base/casts.h"
#include "base/file_utils.h"
#include "base/globals.h"
#include "base/memory_type_table.h"
#include "base/string_view_cpp20.h"
namespace art {
namespace hiddenapi {
namespace {
// The maximum number of frames to backtrace through when performing Core Platform API checks of
// native code.
static constexpr size_t kMaxFramesForHiddenApiJniCheck = 3;
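// Serializes use of the shared UnwindHelper below when unwinding caller stacks.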
static std::mutex gUnwindingMutex;
struct UnwindHelper {
explicit UnwindHelper(size_t max_depth)
: memory_(unwindstack::Memory::CreateProcessMemory(getpid())),
jit_(memory_),
dex_(memory_),
unwinder_(max_depth, &maps_, memory_) {
CHECK(maps_.Parse());
unwinder_.SetJitDebug(&jit_, unwindstack::Regs::CurrentArch());
unwinder_.SetDexFiles(&dex_, unwindstack::Regs::CurrentArch());
unwinder_.SetResolveNames(false);
unwindstack::Elf::SetCachingEnabled(false);
}
unwindstack::Unwinder* Unwinder() { return &unwinder_; }
private:
unwindstack::LocalMaps maps_;
std::shared_ptr<unwindstack::Memory> memory_;
unwindstack::JitDebug jit_;
unwindstack::DexFiles dex_;
unwindstack::Unwinder unwinder_;
};
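// Returns the process-wide unwind helper, constructed lazily on first use.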
static UnwindHelper& GetUnwindHelper() {
static UnwindHelper helper(kMaxFramesForHiddenApiJniCheck);
return helper;
}
} // namespace
std::ostream& operator<<(std::ostream& os, SharedObjectKind kind) {
switch (kind) {
case SharedObjectKind::kArtModule:
os << "ART module";
break;
case SharedObjectKind::kOther:
os << "Other";
break;
}
return os;
}
// Class holding cached ranges of loaded shared objects to facilitate checks of field and method
// resolutions within the Core Platform API for native callers.
class CodeRangeCache final {
public:
static CodeRangeCache& GetSingleton() {
static CodeRangeCache Singleton;
return Singleton;
}
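// Returns the kind of shared object whose executable range contains |pc|, or kOther if the
// address is not covered by the cache.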
SharedObjectKind GetSharedObjectKind(void* pc) {
uintptr_t address = reinterpret_cast<uintptr_t>(pc);
SharedObjectKind kind;
if (Find(address, &kind)) {
return kind;
}
return SharedObjectKind::kOther;
}
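// Walks the program headers of all loaded shared objects and caches their executable ranges.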
void BuildCache() {
std::lock_guard<std::mutex> guard(mutex_);
DCHECK_EQ(memory_type_table_.Size(), 0u);
art::MemoryTypeTable<SharedObjectKind>::Builder builder;
builder_ = &builder;
libjavacore_loaded_ = false;
libnativehelper_loaded_ = false;
libopenjdk_loaded_ = false;
// Iterate over the loaded ELF objects, populating the builder with their executable ranges.
dl_iterate_phdr(VisitElfInfo, this);
memory_type_table_ = builder_->Build();
// Check that the expected libraries were seen while iterating over the headers.
CHECK(libjavacore_loaded_);
CHECK(libnativehelper_loaded_);
CHECK(libopenjdk_loaded_);
builder_ = nullptr;
}
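// Installs (or clears) the classifier that tests use to override shared object classification.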
void SetLibraryPathClassifier(JniLibraryPathClassifier* fc_classifier) {
std::lock_guard<std::mutex> guard(mutex_);
fc_classifier_ = fc_classifier;
}
bool HasLibraryPathClassifier() const {
std::lock_guard<std::mutex> guard(mutex_);
return fc_classifier_ != nullptr;
}
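// Discards the cached ranges so the cache can be rebuilt, e.g. after a test installs a library
// path classifier.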
void DropCache() {
const std::lock_guard<std::mutex> guard(mutex_);
memory_type_table_ = {};
}
private:
CodeRangeCache() {}
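// Looks up |address| in the cached ranges. Returns true and sets |*kind| on a hit.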
bool Find(uintptr_t address, SharedObjectKind* kind) const {
std::lock_guard<std::mutex> guard(mutex_);
const art::MemoryTypeRange<SharedObjectKind>* range = memory_type_table_.Lookup(address);
if (range == nullptr) {
return false;
}
*kind = range->Type();
return true;
}
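// Callback for dl_iterate_phdr(). Records the executable PT_LOAD segments of each visited
// shared object in the table builder and notes whether the expected libraries have been seen.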
static int VisitElfInfo(struct dl_phdr_info *info, size_t size ATTRIBUTE_UNUSED, void *data)
NO_THREAD_SAFETY_ANALYSIS {
auto cache = reinterpret_cast<CodeRangeCache*>(data);
art::MemoryTypeTable<SharedObjectKind>::Builder* builder = cache->builder_;
for (size_t i = 0u; i < info->dlpi_phnum; ++i) {
const ElfW(Phdr)& phdr = info->dlpi_phdr[i];
if (phdr.p_type != PT_LOAD || ((phdr.p_flags & PF_X) != PF_X)) {
continue; // Skip anything other than code pages
}
uintptr_t start = info->dlpi_addr + phdr.p_vaddr;
const uintptr_t limit = art::RoundUp(start + phdr.p_memsz, art::kPageSize);
SharedObjectKind kind = GetKind(info->dlpi_name);
if (cache->fc_classifier_ != nullptr) {
std::optional<SharedObjectKind> maybe_kind =
cache->fc_classifier_->Classify(info->dlpi_name);
if (maybe_kind.has_value()) {
kind = maybe_kind.value();
}
}
art::MemoryTypeRange<SharedObjectKind> range{start, limit, kind};
if (!builder->Add(range)) {
LOG(WARNING) << "Overlapping/invalid range found in ELF headers: " << range;
}
}
// Update sanity check state.
std::string_view dlpi_name{info->dlpi_name};
if (!cache->libjavacore_loaded_) {
cache->libjavacore_loaded_ = art::EndsWith(dlpi_name, kLibjavacore);
}
if (!cache->libnativehelper_loaded_) {
cache->libnativehelper_loaded_ = art::EndsWith(dlpi_name, kLibnativehelper);
}
if (!cache->libopenjdk_loaded_) {
cache->libopenjdk_loaded_ = art::EndsWith(dlpi_name, kLibopenjdk);
}
return 0;
}
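// Classifies a shared object by path: objects located on the ART module are kArtModule,
// everything else is kOther.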
static SharedObjectKind GetKind(const char* so_name) {
return art::LocationIsOnArtModule(so_name) ? SharedObjectKind::kArtModule
: SharedObjectKind::kOther;
}
// Table builder, only valid during BuildCache().
art::MemoryTypeTable<SharedObjectKind>::Builder* builder_ GUARDED_BY(mutex_) = nullptr;
// Table for mapping PC addresses to their shared object files.
art::MemoryTypeTable<SharedObjectKind> memory_type_table_ GUARDED_BY(mutex_);
// Classifier used to override shared object classifications during tests.
JniLibraryPathClassifier* fc_classifier_ GUARDED_BY(mutex_) = nullptr;
// Sanity checking state.
bool libjavacore_loaded_;
bool libnativehelper_loaded_;
bool libopenjdk_loaded_;
// Mutex to protect fc_classifier_ and related state during testing. Outside of testing we
// only generate the |memory_type_table_| once.
mutable std::mutex mutex_;
static constexpr std::string_view kLibjavacore = "libjavacore.so";
static constexpr std::string_view kLibnativehelper = "libnativehelper.so";
static constexpr std::string_view kLibopenjdk = art::kIsDebugBuild ? "libopenjdkd.so"
: "libopenjdk.so";
DISALLOW_COPY_AND_ASSIGN(CodeRangeCache);
};
// Cookie for tracking approvals of Core Platform API use. The Thread class has a per-thread field
// that stores these values. This is necessary because we can't change the JNI interfaces and some
// paths call into each other, e.g. checked JNI typically calls into plain JNI.
struct CorePlatformApiCookie final {
bool approved:1; // Whether the outermost ScopedCorePlatformApiCheck instance is approved.
uint32_t depth:31; // Count of nested ScopedCorePlatformApiCheck instances.
};
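// On construction, determines whether the caller of the outermost JNI entry point is approved
// to use Core Platform APIs and records the result in the calling thread's cookie.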
ScopedCorePlatformApiCheck::ScopedCorePlatformApiCheck() {
Thread* self = Thread::Current();
CorePlatformApiCookie cookie =
bit_cast<CorePlatformApiCookie, uint32_t>(self->CorePlatformApiCookie());
bool is_core_platform_api_approved = false; // Default value for non-device testing.
const bool is_under_test = CodeRangeCache::GetSingleton().HasLibraryPathClassifier();
if (kIsTargetBuild || is_under_test) {
// On a target device (or when running tests), if policy says enforcement is disabled,
// then treat all callers as approved.
auto policy = Runtime::Current()->GetCorePlatformApiEnforcementPolicy();
if (policy == hiddenapi::EnforcementPolicy::kDisabled) {
is_core_platform_api_approved = true;
} else if (cookie.depth == 0) {
// On the target device, only check the caller at depth 0, which corresponds to the outermost
// entry into the JNI interface. When performing the check here, note that |this| is stack
// allocated at the entry points to JNI field and method resolution methods, so the address of
// |this| can be used to find the caller's frame.
DCHECK_EQ(cookie.approved, false);
void* caller_pc = nullptr;
{
std::lock_guard<std::mutex> guard(gUnwindingMutex);
unwindstack::Unwinder* unwinder = GetUnwindHelper().Unwinder();
std::unique_ptr<unwindstack::Regs> regs(unwindstack::Regs::CreateFromLocal());
RegsGetLocal(regs.get());
unwinder->SetRegs(regs.get());
unwinder->Unwind();
for (auto it = unwinder->frames().begin(); it != unwinder->frames().end(); ++it) {
// Unwind to the first frame whose stack pointer lies above |this|. Since |this| is stack
// allocated in the outermost JNI entry point, that frame belongs to the native caller.
if (it->sp > reinterpret_cast<uint64_t>(this)) {
caller_pc = reinterpret_cast<void*>(it->pc);
break;
}
}
}
if (caller_pc != nullptr) {
SharedObjectKind kind = CodeRangeCache::GetSingleton().GetSharedObjectKind(caller_pc);
is_core_platform_api_approved = (kind == SharedObjectKind::kArtModule);
}
}
}
// Update the cookie: record approval if granted and increment the nesting depth.
if (is_core_platform_api_approved) {
cookie.approved = true;
}
cookie.depth += 1;
self->SetCorePlatformApiCookie(bit_cast<uint32_t, CorePlatformApiCookie>(cookie));
}
ScopedCorePlatformApiCheck::~ScopedCorePlatformApiCheck() {
Thread* self = Thread::Current();
// Update cookie, decrementing depth and clearing approved flag if this is the outermost
// instance.
CorePlatformApiCookie cookie =
bit_cast<CorePlatformApiCookie, uint32_t>(self->CorePlatformApiCookie());
DCHECK_NE(cookie.depth, 0u);
cookie.depth -= 1u;
if (cookie.depth == 0u) {
cookie.approved = false;
}
self->SetCorePlatformApiCookie(bit_cast<uint32_t, CorePlatformApiCookie>(cookie));
}
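// Returns whether Core Platform API use was approved for the current (outermost) JNI entry on
// this thread.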
bool ScopedCorePlatformApiCheck::IsCurrentCallerApproved(Thread* self) {
CorePlatformApiCookie cookie =
bit_cast<CorePlatformApiCookie, uint32_t>(self->CorePlatformApiCookie());
DCHECK_GT(cookie.depth, 0u);
return cookie.approved;
}
void JniInitializeNativeCallerCheck(JniLibraryPathClassifier* classifier) {
// This method should be called only once and before there are multiple runtime threads.
CodeRangeCache::GetSingleton().DropCache();
CodeRangeCache::GetSingleton().SetLibraryPathClassifier(classifier);
CodeRangeCache::GetSingleton().BuildCache();
}
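// Clears any installed classifier and drops the cached code ranges.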
void JniShutdownNativeCallerCheck() {
CodeRangeCache::GetSingleton().SetLibraryPathClassifier(nullptr);
CodeRangeCache::GetSingleton().DropCache();
}
} // namespace hiddenapi
} // namespace art
#else // __linux__
namespace art {
namespace hiddenapi {
ScopedCorePlatformApiCheck::ScopedCorePlatformApiCheck() {}
ScopedCorePlatformApiCheck::~ScopedCorePlatformApiCheck() {}
bool ScopedCorePlatformApiCheck::IsCurrentCallerApproved(Thread* self ATTRIBUTE_UNUSED) {
return false;
}
void JniInitializeNativeCallerCheck(JniLibraryPathClassifier* f ATTRIBUTE_UNUSED) {}
void JniShutdownNativeCallerCheck() {}
} // namespace hiddenapi
} // namespace art
#endif // __linux__