/*
* Copyright (C) 2008-2024 Apple Inc. All rights reserved.
* Copyright (C) 2008 Cameron Zwarich <cwzwarich@uwaterloo.ca>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of Apple Inc. ("Apple") nor the names of
* its contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY APPLE AND ITS CONTRIBUTORS "AS IS" AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL APPLE OR ITS CONTRIBUTORS BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#pragma once
#include "ArrayProfile.h"
#include "BytecodeConventions.h"
#include "CallLinkInfo.h"
#include "CodeBlockHash.h"
#include "CodeOrigin.h"
#include "CodeType.h"
#include "CompilationResult.h"
#include "ConcurrentJSLock.h"
#include "DFGCodeOriginPool.h"
#include "DFGCommon.h"
#include "DirectEvalCodeCache.h"
#include "EvalExecutable.h"
#include "ExecutionCounter.h"
#include "ExpressionInfo.h"
#include "FunctionExecutable.h"
#include "HandlerInfo.h"
#include "ICStatusMap.h"
#include "Instruction.h"
#include "InstructionStream.h"
#include "JITCode.h"
#include "JITCodeMap.h"
#include "JITMathICForwards.h"
#include "JSCast.h"
#include "JumpTable.h"
#include "LazyValueProfile.h"
#include "MetadataTable.h"
#include "ModuleProgramExecutable.h"
#include "ObjectAllocationProfile.h"
#include "Options.h"
#include "Printer.h"
#include "ProfilerJettisonReason.h"
#include "ProgramExecutable.h"
#include "PutPropertySlot.h"
#include "RegisterAtOffsetList.h"
#include "ValueProfile.h"
#include "VirtualRegister.h"
#include "Watchpoint.h"
#include <wtf/ApproximateTime.h>
#include <wtf/FastMalloc.h>
#include <wtf/FixedVector.h>
#include <wtf/HashSet.h>
#include <wtf/RefPtr.h>
#include <wtf/SegmentedVector.h>
#include <wtf/Vector.h>
#include <wtf/text/WTFString.h>
WTF_ALLOW_UNSAFE_BUFFER_USAGE_BEGIN
namespace JSC {
#if ENABLE(DFG_JIT)
namespace DFG {
class JITData;
} // namespace DFG
#endif
class UnaryArithProfile;
class BinaryArithProfile;
class BytecodeLivenessAnalysis;
class CodeBlockSet;
class JSModuleEnvironment;
class LLIntOffsetsExtractor;
class LLIntPrototypeLoadAdaptiveStructureWatchpoint;
class MetadataTable;
class RegisterAtOffsetList;
class ScriptExecutable;
class StructureStubInfo;
class BaselineJITCode;
class BaselineJITData;
DECLARE_ALLOCATOR_WITH_HEAP_IDENTIFIER(CodeBlockRareData);
enum class AccessType : int8_t;
struct OpCatch;
enum ReoptimizationMode { DontCountReoptimization, CountReoptimization };
class CodeBlock : public JSCell {
typedef JSCell Base;
friend class BytecodeLivenessAnalysis;
friend class JIT;
friend class LLIntOffsetsExtractor;
public:
enum CopyParsedBlockTag { CopyParsedBlock };
static constexpr unsigned StructureFlags = Base::StructureFlags | StructureIsImmortal;
static constexpr DestructionMode needsDestruction = NeedsDestruction;
template<typename, SubspaceAccess>
static void subspaceFor(VM&)
{
RELEASE_ASSERT_NOT_REACHED();
}
// GC strongly assumes CodeBlock is not a PreciseAllocation for now.
static constexpr uint8_t numberOfLowerTierPreciseCells = 0;
DECLARE_INFO;
protected:
CodeBlock(VM&, Structure*, CopyParsedBlockTag, CodeBlock& other);
CodeBlock(VM&, Structure*, ScriptExecutable* ownerExecutable, UnlinkedCodeBlock*, JSScope*);
void finishCreation(VM&, CopyParsedBlockTag, CodeBlock& other);
bool finishCreation(VM&, ScriptExecutable* ownerExecutable, UnlinkedCodeBlock*, JSScope*);
WriteBarrier<JSGlobalObject> m_globalObject;
public:
JS_EXPORT_PRIVATE ~CodeBlock();
UnlinkedCodeBlock* unlinkedCodeBlock() const { return m_unlinkedCode.get(); }
CString inferredName() const;
CodeBlockHash hash() const;
bool hasHash() const;
bool isSafeToComputeHash() const;
CString hashAsStringIfPossible() const;
CString sourceCodeForTools() const;
CString sourceCodeOnOneLine() const; // As sourceCodeForTools(), but replaces all whitespace runs with a single space.
void dumpAssumingJITType(PrintStream&, JITType) const;
JS_EXPORT_PRIVATE void dump(PrintStream&) const;
void dumpSimpleName(PrintStream&) const;
MetadataTable* metadataTable() const { return m_metadata.get(); }
unsigned numParameters() const { return m_numParameters; }
private:
void setNumParameters(unsigned newValue, bool allocateArgumentValueProfiles);
public:
bool couldBeTainted() const { return m_couldBeTainted; }
unsigned numberOfArgumentsToSkip() const { return m_numberOfArgumentsToSkip; }
unsigned numCalleeLocals() const { return m_numCalleeLocals; }
unsigned numVars() const { return m_numVars; }
unsigned numTmps() const { return m_unlinkedCode->hasCheckpoints() * maxNumCheckpointTmps; }
static constexpr ptrdiff_t offsetOfNumParameters() { return OBJECT_OFFSETOF(CodeBlock, m_numParameters); }
static constexpr ptrdiff_t offsetOfVM() { return OBJECT_OFFSETOF(CodeBlock, m_vm); }
CodeBlock* alternative() const { return static_cast<CodeBlock*>(m_alternative.get()); }
void setAlternative(VM&, CodeBlock*);
template <typename Functor> void forEachRelatedCodeBlock(Functor&& functor)
{
Functor f(std::forward<Functor>(functor));
Vector<CodeBlock*, 4> codeBlocks;
codeBlocks.append(this);
while (!codeBlocks.isEmpty()) {
CodeBlock* currentCodeBlock = codeBlocks.takeLast();
f(currentCodeBlock);
if (CodeBlock* alternative = currentCodeBlock->alternative())
codeBlocks.append(alternative);
if (CodeBlock* osrEntryBlock = currentCodeBlock->specialOSREntryBlockOrNull())
codeBlocks.append(osrEntryBlock);
}
}
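// Example usage (a minimal sketch; the counting helper is hypothetical and only
// illustrates that the functor sees this block, its alternatives, and any OSR entry blocks):
//     unsigned countRelatedCodeBlocks(CodeBlock* codeBlock)
//     {
//         unsigned count = 0;
//         codeBlock->forEachRelatedCodeBlock([&](CodeBlock*) { ++count; });
//         return count;
//     }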
CodeSpecializationKind specializationKind() const
{
return specializationFromIsConstruct(isConstructor());
}
CodeBlock* alternativeForJettison();
JS_EXPORT_PRIVATE CodeBlock* baselineAlternative();
// FIXME: Get rid of this.
// https://bugs.webkit.org/show_bug.cgi?id=123677
CodeBlock* baselineVersion();
DECLARE_VISIT_CHILDREN;
static size_t estimatedSize(JSCell*, VM&);
static void destroy(JSCell*);
void finalizeUnconditionally(VM&, CollectionScope);
void notifyLexicalBindingUpdate();
void dumpSource();
void dumpSource(PrintStream&);
void dumpBytecode();
void dumpBytecode(PrintStream&);
void dumpBytecode(PrintStream& out, const JSInstructionStream::Ref& it, const ICStatusMap& = ICStatusMap());
void dumpBytecode(PrintStream& out, unsigned bytecodeOffset, const ICStatusMap& = ICStatusMap());
void dumpExceptionHandlers(PrintStream&);
void printStructures(PrintStream&, const JSInstruction*);
void printStructure(PrintStream&, const char* name, const JSInstruction*, int operand);
void dumpMathICStats();
bool isConstructor() const { return m_unlinkedCode->isConstructor(); }
CodeType codeType() const { return m_unlinkedCode->codeType(); }
JSParserScriptMode scriptMode() const { return m_unlinkedCode->scriptMode(); }
bool hasInstalledVMTrapsBreakpoints() const;
bool canInstallVMTrapBreakpoints() const;
bool installVMTrapBreakpoints();
ALWAYS_INLINE bool isTemporaryRegister(VirtualRegister reg)
{
return reg.offset() >= static_cast<int>(m_numVars);
}
HandlerInfo* handlerForBytecodeIndex(BytecodeIndex, RequiredHandler = RequiredHandler::AnyHandler);
HandlerInfo* handlerForIndex(unsigned, RequiredHandler = RequiredHandler::AnyHandler);
void removeExceptionHandlerForCallSite(DisposableCallSiteIndex);
LineColumn lineColumnForBytecodeIndex(BytecodeIndex) const;
ExpressionInfo::Entry expressionInfoForBytecodeIndex(BytecodeIndex) const;
std::optional<BytecodeIndex> bytecodeIndexFromCallSiteIndex(CallSiteIndex);
// Because we might throw out baseline JIT code and all its baseline JIT data (m_jitData),
// you need to be careful about the lifetime of when you use the return value of this function.
// The return value may have raw pointers into this data structure that gets thrown away.
// Specifically, you need to ensure that no GC can be finalized (typically that means no
// allocations) between calling this and the last use of it.
void getICStatusMap(const ConcurrentJSLocker&, ICStatusMap& result);
void getICStatusMap(ICStatusMap& result);
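// Sketch of the intended usage pattern: populate the map and consume it right away,
// without allocating (and therefore without letting a GC finalize) in between.
//     ICStatusMap statusMap;
//     codeBlock->getICStatusMap(statusMap);
//     // ... read statusMap immediately; do not allocate before the last use ...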
#if ENABLE(JIT)
void setupWithUnlinkedBaselineCode(Ref<BaselineJITCode>);
static constexpr ptrdiff_t offsetOfJITData() { return OBJECT_OFFSETOF(CodeBlock, m_jitData); }
// O(n) operation. Use getICStatusMap() unless you really only intend to get one stub info.
StructureStubInfo* findStubInfo(CodeOrigin);
const JITCodeMap& jitCodeMap();
std::optional<CodeOrigin> findPC(void* pc);
#endif // ENABLE(JIT)
void unlinkOrUpgradeIncomingCalls(VM&, CodeBlock*);
void linkIncomingCall(JSCell* caller, CallLinkInfoBase*);
const JSInstruction* outOfLineJumpTarget(const JSInstruction* pc);
int outOfLineJumpOffset(JSInstructionStream::Offset offset)
{
return m_unlinkedCode->outOfLineJumpOffset(offset);
}
int outOfLineJumpOffset(const JSInstruction* pc);
int outOfLineJumpOffset(const JSInstructionStream::Ref& instruction)
{
return outOfLineJumpOffset(instruction.ptr());
}
inline unsigned bytecodeOffset(const JSInstruction* returnAddress)
{
const auto* instructionsBegin = instructions().at(0).ptr();
const auto* instructionsEnd = reinterpret_cast<const JSInstruction*>(reinterpret_cast<uintptr_t>(instructionsBegin) + instructions().size());
RELEASE_ASSERT(returnAddress >= instructionsBegin && returnAddress < instructionsEnd);
return returnAddress - instructionsBegin;
}
inline BytecodeIndex bytecodeIndex(const JSInstruction* returnAddress)
{
return BytecodeIndex(bytecodeOffset(returnAddress));
}
const JSInstructionStream& instructions() const { return m_unlinkedCode->instructions(); }
const JSInstruction* instructionAt(BytecodeIndex index) const { return instructions().at(index).ptr(); }
size_t predictedMachineCodeSize();
unsigned instructionsSize() const { return instructions().size(); }
unsigned bytecodeCost() const;
// Exactly equivalent to codeBlock->ownerExecutable()->newReplacementCodeBlockFor(codeBlock->specializationKind())
CodeBlock* newReplacement();
void setJITCode(Ref<JSC::JITCode>&& code)
{
if (!code->isShared())
heap()->reportExtraMemoryAllocated(this, code->size());
ConcurrentJSLocker locker(m_lock);
WTF::storeStoreFence(); // This is probably not needed because the lock will also do something similar, but it's good to be paranoid.
m_jitCode = WTFMove(code);
}
RefPtr<JSC::JITCode> jitCode() { return m_jitCode; }
static constexpr ptrdiff_t jitCodeOffset() { return OBJECT_OFFSETOF(CodeBlock, m_jitCode); }
JITType jitType() const
{
auto* jitCode = m_jitCode.get();
JITType result = JSC::JITCode::jitTypeFor(jitCode);
return result;
}
bool hasBaselineJITProfiling() const
{
return jitType() == JITType::BaselineJIT;
}
bool useDataIC() const;
CodePtr<JSEntryPtrTag> addressForCallConcurrently(const ConcurrentJSLocker&, ArityCheckMode) const;
#if ENABLE(JIT)
CodeBlock* replacement();
DFG::CapabilityLevel computeCapabilityLevel();
DFG::CapabilityLevel capabilityLevel();
DFG::CapabilityLevel capabilityLevelState() { return static_cast<DFG::CapabilityLevel>(m_capabilityLevelState); }
CodeBlock* optimizedReplacement(JITType typeToReplace);
CodeBlock* optimizedReplacement(); // the typeToReplace is my JITType
bool hasOptimizedReplacement(JITType typeToReplace);
bool hasOptimizedReplacement(); // the typeToReplace is my JITType
#endif
void jettison(Profiler::JettisonReason, ReoptimizationMode = DontCountReoptimization, const FireDetail* = nullptr);
ScriptExecutable* ownerExecutable() const { return m_ownerExecutable.get(); }
VM& vm() const { return *m_vm; }
VirtualRegister thisRegister() const { return m_unlinkedCode->thisRegister(); }
void setScopeRegister(VirtualRegister scopeRegister)
{
ASSERT(scopeRegister.isLocal() || !scopeRegister.isValid());
m_scopeRegister = scopeRegister;
}
VirtualRegister scopeRegister() const
{
return m_scopeRegister;
}
PutPropertySlot::Context putByIdContext() const
{
if (codeType() == EvalCode)
return PutPropertySlot::PutByIdEval;
return PutPropertySlot::PutById;
}
const SourceCode& source() const { return m_ownerExecutable->source(); }
unsigned sourceOffset() const { return m_ownerExecutable->source().startOffset(); }
unsigned firstLineColumnOffset() const { return m_ownerExecutable->startColumn(); }
size_t numberOfJumpTargets() const { return m_unlinkedCode->numberOfJumpTargets(); }
unsigned jumpTarget(int index) const { return m_unlinkedCode->jumpTarget(index); }
String nameForRegister(VirtualRegister);
static constexpr ptrdiff_t offsetOfArgumentValueProfiles() { return OBJECT_OFFSETOF(CodeBlock, m_argumentValueProfiles); }
unsigned numberOfArgumentValueProfiles()
{
ASSERT(m_argumentValueProfiles.size() == static_cast<unsigned>(m_numParameters) || !Options::useJIT() || !JITCode::isBaselineCode(jitType()));
return m_argumentValueProfiles.size();
}
ArgumentValueProfile& valueProfileForArgument(unsigned argumentIndex)
{
ASSERT(Options::useJIT()); // This is only called from the various JIT compilers or places that first check numberOfArgumentValueProfiles before calling this.
ASSERT(JITCode::isBaselineCode(jitType()));
return m_argumentValueProfiles[argumentIndex];
}
FixedVector<ArgumentValueProfile>& argumentValueProfiles() { return m_argumentValueProfiles; }
ValueProfile& valueProfileForOffset(unsigned profileOffset) { return m_metadata->valueProfileForOffset(profileOffset); }
ValueProfile* tryGetValueProfileForBytecodeIndex(BytecodeIndex);
ValueProfile& valueProfileForBytecodeIndex(BytecodeIndex);
SpeculatedType valueProfilePredictionForBytecodeIndex(const ConcurrentJSLocker&, BytecodeIndex, JSValue* specFailValue = nullptr);
template<typename Functor> void forEachValueProfile(const Functor&);
template<typename Functor> void forEachArrayAllocationProfile(const Functor&);
template<typename Functor> void forEachObjectAllocationProfile(const Functor&);
template<typename Functor> void forEachLLIntOrBaselineCallLinkInfo(const Functor&);
BinaryArithProfile* binaryArithProfileForBytecodeIndex(BytecodeIndex);
UnaryArithProfile* unaryArithProfileForBytecodeIndex(BytecodeIndex);
BinaryArithProfile* binaryArithProfileForPC(const JSInstruction*);
UnaryArithProfile* unaryArithProfileForPC(const JSInstruction*);
bool couldTakeSpecialArithFastCase(BytecodeIndex bytecodeOffset);
ArrayProfile* getArrayProfile(const ConcurrentJSLocker&, BytecodeIndex);
// Exception handling support
size_t numberOfExceptionHandlers() const { return m_rareData ? m_rareData->m_exceptionHandlers.size() : 0; }
HandlerInfo& exceptionHandler(int index) { RELEASE_ASSERT(m_rareData); return m_rareData->m_exceptionHandlers[index]; }
bool hasExpressionInfo() { return m_unlinkedCode->hasExpressionInfo(); }
#if ENABLE(DFG_JIT)
DFG::CodeOriginPool& codeOrigins();
// Having code origins implies that there has been some inlining.
bool hasCodeOrigins()
{
return JSC::JITCode::isOptimizingJIT(jitType());
}
bool canGetCodeOrigin(CallSiteIndex index)
{
if (!hasCodeOrigins())
return false;
return index.bits() < codeOrigins().size();
}
CodeOrigin codeOrigin(CallSiteIndex index)
{
return codeOrigins().get(index.bits());
}
CompressedLazyValueProfileHolder& lazyValueProfiles()
{
return m_lazyValueProfiles;
}
#endif // ENABLE(DFG_JIT)
// Constant Pool
#if ENABLE(DFG_JIT)
size_t numberOfIdentifiers() const { return m_unlinkedCode->numberOfIdentifiers() + numberOfDFGIdentifiers(); }
size_t numberOfDFGIdentifiers() const;
const Identifier& identifier(int index) const;
#else
size_t numberOfIdentifiers() const { return m_unlinkedCode->numberOfIdentifiers(); }
const Identifier& identifier(int index) const { return m_unlinkedCode->identifier(index); }
#endif
#if ASSERT_ENABLED
bool hasIdentifier(UniquedStringImpl*);
bool wasDestructed();
#endif
Vector<WriteBarrier<Unknown>>& constants() { return m_constantRegisters; }
unsigned addConstant(const ConcurrentJSLocker&, JSValue v)
{
unsigned result = m_constantRegisters.size();
m_constantRegisters.append(WriteBarrier<Unknown>());
m_constantRegisters.last().set(*m_vm, this, v);
return result;
}
unsigned addConstantLazily(const ConcurrentJSLocker&)
{
unsigned result = m_constantRegisters.size();
m_constantRegisters.append(WriteBarrier<Unknown>());
return result;
}
const Vector<WriteBarrier<Unknown>>& constantRegisters() { return m_constantRegisters; }
WriteBarrier<Unknown>& constantRegister(VirtualRegister reg) { return m_constantRegisters[reg.toConstantIndex()]; }
ALWAYS_INLINE JSValue getConstant(VirtualRegister reg) const { return m_constantRegisters[reg.toConstantIndex()].get(); }
bool isConstantOwnedByUnlinkedCodeBlock(VirtualRegister) const;
ALWAYS_INLINE SourceCodeRepresentation constantSourceCodeRepresentation(VirtualRegister reg) const { return m_unlinkedCode->constantSourceCodeRepresentation(reg); }
ALWAYS_INLINE SourceCodeRepresentation constantSourceCodeRepresentation(unsigned index) const { return m_unlinkedCode->constantSourceCodeRepresentation(index); }
static constexpr ptrdiff_t offsetOfConstantsVectorBuffer() { return OBJECT_OFFSETOF(CodeBlock, m_constantRegisters) + decltype(m_constantRegisters)::dataMemoryOffset(); }
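// Sketch of a round trip through the constant pool (the locker comes from this block's
// m_lock; FirstConstantRegisterIndex is defined in BytecodeConventions.h):
//     ConcurrentJSLocker locker(codeBlock->m_lock);
//     unsigned index = codeBlock->addConstant(locker, jsNumber(42));
//     JSValue value = codeBlock->getConstant(VirtualRegister(FirstConstantRegisterIndex + index));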
FunctionExecutable* functionDecl(int index) { return m_functionDecls[index].get(); }
int numberOfFunctionDecls() { return m_functionDecls.size(); }
FunctionExecutable* functionExpr(int index) { return m_functionExprs[index].get(); }
const BitVector& bitVector(size_t i) { return m_unlinkedCode->bitVector(i); }
JSC::Heap* heap() const { return &m_vm->heap; }
JSGlobalObject* globalObject() { return m_globalObject.get(); }
static constexpr ptrdiff_t offsetOfGlobalObject() { return OBJECT_OFFSETOF(CodeBlock, m_globalObject); }
JSGlobalObject* globalObjectFor(CodeOrigin);
BytecodeLivenessAnalysis& livenessAnalysis()
{
return m_unlinkedCode->livenessAnalysis(this);
}
void validate();
// Jump Tables
#if ENABLE(JIT)
SimpleJumpTable& baselineSwitchJumpTable(int tableIndex);
StringJumpTable& baselineStringSwitchJumpTable(int tableIndex);
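// m_jitData is a single untyped slot: it holds a BaselineJITData* while this block is
// baseline-compiled and a DFG::JITData* once it is DFG/FTL-compiled. jitType() acts as
// the discriminator for the accessors below.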
void setBaselineJITData(std::unique_ptr<BaselineJITData>&& jitData)
{
ASSERT(!m_jitData);
WTF::storeStoreFence(); // m_jitData is accessed from concurrent GC threads.
m_jitData = jitData.release();
}
BaselineJITData* baselineJITData()
{
if (!JSC::JITCode::isOptimizingJIT(jitType()))
return std::bit_cast<BaselineJITData*>(m_jitData);
return nullptr;
}
#if ENABLE(DFG_JIT)
void setDFGJITData(std::unique_ptr<DFG::JITData>&& jitData)
{
ASSERT(!m_jitData);
WTF::storeStoreFence(); // m_jitData is accessed from concurrent GC threads.
m_jitData = jitData.release();
}
DFG::JITData* dfgJITData()
{
if (JSC::JITCode::isOptimizingJIT(jitType()))
return std::bit_cast<DFG::JITData*>(m_jitData);
return nullptr;
}
#endif
#endif
size_t numberOfUnlinkedSwitchJumpTables() const { return m_unlinkedCode->numberOfUnlinkedSwitchJumpTables(); }
const UnlinkedSimpleJumpTable& unlinkedSwitchJumpTable(int tableIndex) { return m_unlinkedCode->unlinkedSwitchJumpTable(tableIndex); }
#if ENABLE(DFG_JIT)
StringJumpTable& dfgStringSwitchJumpTable(int tableIndex);
SimpleJumpTable& dfgSwitchJumpTable(int tableIndex);
#endif
size_t numberOfUnlinkedStringSwitchJumpTables() const { return m_unlinkedCode->numberOfUnlinkedStringSwitchJumpTables(); }
const UnlinkedStringJumpTable& unlinkedStringSwitchJumpTable(int tableIndex) { return m_unlinkedCode->unlinkedStringSwitchJumpTable(tableIndex); }
DirectEvalCodeCache& directEvalCodeCache() { createRareDataIfNecessary(); return m_rareData->m_directEvalCodeCache; }
enum class ShrinkMode {
// Shrink prior to generating machine code that may point directly into vectors.
EarlyShrink,
// Shrink after generating machine code, and after possibly creating new vectors
// and appending to others. At this time it is not safe to shrink certain vectors
// because we would have generated machine code that references them directly.
LateShrink,
};
void shrinkToFit(const ConcurrentJSLocker&, ShrinkMode);
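// Sketch: an early shrink performed before any machine code that points into these
// vectors has been generated (the locker is taken on this block's m_lock):
//     {
//         ConcurrentJSLocker locker(codeBlock->m_lock);
//         codeBlock->shrinkToFit(locker, CodeBlock::ShrinkMode::EarlyShrink);
//     }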
// Functions for controlling when JITting kicks in, in a mixed mode
// execution world.
bool checkIfJITThresholdReached()
{
return m_unlinkedCode->llintExecuteCounter().checkIfThresholdCrossedAndSet(this);
}
void dontJITAnytimeSoon()
{
m_unlinkedCode->llintExecuteCounter().deferIndefinitely();
}
void jitSoon();
void jitNextInvocation();
const BaselineExecutionCounter& llintExecuteCounter() const
{
return m_unlinkedCode->llintExecuteCounter();
}
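// Sketch of the tier-up check an LLInt slow path might perform (the surrounding
// dispatch and the actual hand-off to the baseline JIT are elided):
//     if (codeBlock->checkIfJITThresholdReached()) {
//         // threshold crossed: schedule/perform baseline compilation
//     }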
typedef UncheckedKeyHashMap<std::tuple<StructureID, BytecodeIndex>, FixedVector<LLIntPrototypeLoadAdaptiveStructureWatchpoint>> StructureWatchpointMap;
StructureWatchpointMap& llintGetByIdWatchpointMap() { return m_llintGetByIdWatchpointMap; }
// Functions for controlling when tiered compilation kicks in. This
// controls both when the optimizing compiler is invoked and when OSR
// entry happens. Two triggers exist: the loop trigger and the return
// trigger. In either case, when an addition to m_jitExecuteCounter
// causes it to become non-negative, the optimizing compiler is
// invoked. This includes a fast check to see if this CodeBlock has
// already been optimized (i.e. replacement() returns a CodeBlock
// that was optimized with a higher tier JIT than this one). In the
// case of the loop trigger, if the optimized compilation succeeds
// (or has already succeeded in the past) then OSR is attempted to
// redirect program flow into the optimized code.
// These functions are called from within the optimization triggers,
// and are used as a single point at which we define the heuristics
// for how much warm-up is mandated before the next optimization
// trigger fires. All CodeBlocks start out with optimizeAfterWarmUp(),
// as this is called from the CodeBlock constructor.
// When we observe a lot of speculation failures, we trigger a
// reoptimization. But each time, we increase the optimization trigger
// to avoid thrashing.
JS_EXPORT_PRIVATE unsigned reoptimizationRetryCounter() const;
void countReoptimization();
#if !ENABLE(C_LOOP)
static unsigned numberOfLLIntBaselineCalleeSaveRegisters() { return RegisterSetBuilder::llintBaselineCalleeSaveRegisters().numberOfSetRegisters(); }
static size_t llintBaselineCalleeSaveSpaceAsVirtualRegisters();
static size_t calleeSaveSpaceAsVirtualRegisters(const RegisterAtOffsetList&);
#else
static unsigned numberOfLLIntBaselineCalleeSaveRegisters() { return 0; }
static size_t llintBaselineCalleeSaveSpaceAsVirtualRegisters() { return 1; };
static size_t calleeSaveSpaceAsVirtualRegisters(const RegisterAtOffsetList&) { return 0; }
#endif
#if ENABLE(JIT)
unsigned numberOfDFGCompiles();
int32_t codeTypeThresholdMultiplier() const;
int32_t adjustedCounterValue(int32_t desiredThreshold);
const BaselineExecutionCounter& baselineExecuteCounter();
unsigned optimizationDelayCounter() const { return m_optimizationDelayCounter; }
// Check if the optimization threshold has been reached, and if not,
// adjust the heuristics accordingly. Returns true if the threshold has
// been reached.
bool checkIfOptimizationThresholdReached();
// Call this to force the next optimization trigger to fire. This is
// rarely wise, since optimization triggers are typically more
// expensive than executing baseline code.
void optimizeNextInvocation();
// Call this to prevent optimization from happening again. Note that
// optimization will still happen after roughly 2^29 invocations,
// so this is really meant to delay that as much as possible. This
// is called if optimization failed, and we expect it to fail in
// the future as well.
void dontOptimizeAnytimeSoon();
// Call this to reinitialize the counter to its starting state,
// forcing a warm-up to happen before the next optimization trigger
// fires. This is called in the CodeBlock constructor. It also
// makes sense to call this if an OSR exit occurred. Note that
// OSR exit code is code generated, so the value of the execute
// counter that this corresponds to is also available directly.
void optimizeAfterWarmUp();
// Call this to force an optimization trigger to fire only after
// a lot of warm-up.
void optimizeAfterLongWarmUp();
// Call this to cause an optimization trigger to fire soon, but
// not necessarily the next one. This makes sense if optimization
// succeeds. Successful optimization means that all calls are
// relinked to the optimized code, so this only affects call
// frames that are still executing this CodeBlock. The value here
// is tuned to strike a balance between the cost of OSR entry
// (which is too high to warrant making every loop back edge to
// trigger OSR immediately) and the cost of executing baseline
// code (which is high enough that we don't necessarily want to
// have a full warm-up). The intuition for calling this instead of
// optimizeNextInvocation() is for the case of recursive functions
// with loops. Consider that there may be N call frames of some
// recursive function, for a reasonably large value of N. The top
// one triggers optimization, and then returns, and then all of
// the others return. We don't want optimization to be triggered on
// each return, as that would be superfluous. It only makes sense
// to trigger optimization if one of those functions becomes hot
// in the baseline code.
void optimizeSoon();
void forceOptimizationSlowPathConcurrently();
void setOptimizationThresholdBasedOnCompilationResult(CompilationResult);
BytecodeIndex bytecodeIndexForExit(BytecodeIndex) const;
uint32_t osrExitCounter() const { return m_osrExitCounter; }
void countOSRExit() { m_osrExitCounter++; }
static constexpr ptrdiff_t offsetOfOSRExitCounter() { return OBJECT_OFFSETOF(CodeBlock, m_osrExitCounter); }
uint32_t adjustedExitCountThreshold(uint32_t desiredThreshold);
uint32_t exitCountThresholdForReoptimization();
uint32_t exitCountThresholdForReoptimizationFromLoop();
bool shouldReoptimizeNow();
bool shouldReoptimizeFromLoopNow();
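// Sketch of how an OSR-exit handler might drive reoptimization (the jettison reason is
// left abstract here; see ProfilerJettisonReason.h for the concrete values):
//     codeBlock->countOSRExit();
//     if (codeBlock->shouldReoptimizeNow()) {
//         Profiler::JettisonReason reason = /* an appropriate reason */;
//         codeBlock->jettison(reason, CountReoptimization);
//     }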
#else // No JIT
void optimizeAfterWarmUp() { }
unsigned numberOfDFGCompiles() { return 0; }
#endif
bool shouldOptimizeNowFromBaseline();
void updateAllNonLazyValueProfilePredictions(const ConcurrentJSLocker&);
void updateAllLazyValueProfilePredictions(const ConcurrentJSLocker&);
void updateAllArrayProfilePredictions();
void updateAllArrayAllocationProfilePredictions();
void updateAllPredictions();
unsigned frameRegisterCount();
int stackPointerOffset();
bool hasOpDebugForLineAndColumn(unsigned line, std::optional<unsigned> column);
bool hasDebuggerRequests() const { return m_debuggerRequests; }
static constexpr ptrdiff_t offsetOfDebuggerRequests() { return OBJECT_OFFSETOF(CodeBlock, m_debuggerRequests); }
void addBreakpoint(unsigned numBreakpoints);
void removeBreakpoint(unsigned numBreakpoints)
{
ASSERT(m_numBreakpoints >= numBreakpoints);
m_numBreakpoints -= numBreakpoints;
}
bool isJettisoned() const { return m_isJettisoned; }
enum SteppingMode {
SteppingModeDisabled,
SteppingModeEnabled
};
void setSteppingMode(SteppingMode);
void clearDebuggerRequests()
{
m_steppingMode = SteppingModeDisabled;
m_numBreakpoints = 0;
}
bool wasCompiledWithDebuggingOpcodes() const { return m_unlinkedCode->wasCompiledWithDebuggingOpcodes(); }
// This is intentionally public; it's the responsibility of anyone doing any
// of the following to hold the lock:
//
// - Modifying any inline cache in this code block.
//
// - Querying any inline cache in this code block, from a thread other than
// the main thread.
//
// Additionally, it's only legal to modify the inline cache on the main
// thread. This means that the main thread can query the inline cache without
// locking. This is crucial since executing the inline cache is effectively
// "querying" it.
//
// Another exception to the rules is that the GC can do whatever it wants
// without holding any locks, because the GC is guaranteed to wait until any
// concurrent compilation threads finish what they're doing.
mutable ConcurrentJSLock m_lock;
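// Sketch: a compiler thread (i.e. not the main thread) querying inline cache state
// must hold this lock, per the rules above:
//     {
//         ConcurrentJSLocker locker(codeBlock->m_lock);
//         ICStatusMap statuses;
//         codeBlock->getICStatusMap(locker, statuses);
//     }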
bool m_shouldAlwaysBeInlined { true }; // Not a bitfield because the JIT wants to store to it.
#if USE(JSVALUE64)
// A 64-bit environment does not need a lock for ValueProfile operations.
NoLockingNecessaryTag valueProfileLock() { return NoLockingNecessary; }
#else
ConcurrentJSLock& valueProfileLock() { return m_lock; }
#endif
static constexpr ptrdiff_t offsetOfShouldAlwaysBeInlined() { return OBJECT_OFFSETOF(CodeBlock, m_shouldAlwaysBeInlined); }
#if ENABLE(JIT)
unsigned m_capabilityLevelState : 2; // DFG::CapabilityLevel
#endif
bool m_didFailJITCompilation : 1;
bool m_didFailFTLCompilation : 1;
bool m_hasBeenCompiledWithFTL : 1;
bool m_isJettisoned : 1;
bool m_visitChildrenSkippedDueToOldAge { false };
// These are internal methods for use by validation code. They would be private if it
// weren't for the fact that we use them from anonymous namespaces.
void beginValidationDidFail();
NO_RETURN_DUE_TO_CRASH void endValidationDidFail();
struct RareData {
WTF_MAKE_STRUCT_FAST_ALLOCATED_WITH_HEAP_IDENTIFIER(CodeBlockRareData);
public:
Vector<HandlerInfo> m_exceptionHandlers;
DirectEvalCodeCache m_directEvalCodeCache;
};
void clearExceptionHandlers()
{
if (m_rareData)
m_rareData->m_exceptionHandlers.clear();
}
void appendExceptionHandler(const HandlerInfo& handler)
{
createRareDataIfNecessary(); // We may be handling the exception of an inlined call frame.
m_rareData->m_exceptionHandlers.append(handler);
}
DisposableCallSiteIndex newExceptionHandlingCallSiteIndex(CallSiteIndex originalCallSite);
void ensureCatchLivenessIsComputedForBytecodeIndex(BytecodeIndex);
bool hasTailCalls() const { return m_unlinkedCode->hasTailCalls(); }
template<typename Metadata>
Metadata& metadata(OpcodeID opcodeID, unsigned metadataID)
{
ASSERT(m_metadata);
ASSERT_UNUSED(opcodeID, opcodeID == Metadata::opcodeID);
return m_metadata->get<Metadata>()[metadataID];
}
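// Sketch (OpAdd stands in for any generated bytecode op; its Metadata struct and
// opcode constant come from the generated BytecodeStructs.h):
//     auto& addMetadata = codeBlock->metadata<OpAdd::Metadata>(op_add, metadataID);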
template<typename Metadata>
ptrdiff_t offsetInMetadataTable(Metadata* metadata)
{
return std::bit_cast<uint8_t*>(metadata) - std::bit_cast<uint8_t*>(metadataTable());
}
size_t metadataSizeInBytes()
{
return m_unlinkedCode->metadataSizeInBytes();
}
MetadataTable* metadataTable() { return m_metadata.get(); }
const void* instructionsRawPointer() { return m_instructionsRawPointer; }
static constexpr ptrdiff_t offsetOfMetadataTable() { return OBJECT_OFFSETOF(CodeBlock, m_metadata); }
static constexpr ptrdiff_t offsetOfInstructionsRawPointer() { return OBJECT_OFFSETOF(CodeBlock, m_instructionsRawPointer); }
bool loopHintsAreEligibleForFuzzingEarlyReturn() { return m_unlinkedCode->loopHintsAreEligibleForFuzzingEarlyReturn(); }
double optimizationThresholdScalingFactor() const;
protected:
void finalizeLLIntInlineCaches();
#if ENABLE(JIT)
void finalizeJITInlineCaches();
#endif
#if ENABLE(DFG_JIT)
void tallyFrequentExitSites();
#else
void tallyFrequentExitSites() { }
#endif
private:
friend class CodeBlockSet;
friend class FunctionExecutable;
friend class ScriptExecutable;
template<typename Visitor> ALWAYS_INLINE void visitChildren(Visitor&);
BytecodeLivenessAnalysis& livenessAnalysisSlow();
CodeBlock* specialOSREntryBlockOrNull();
void noticeIncomingCall(JSCell* caller);
void updateAllNonLazyValueProfilePredictionsAndCountLiveness(const ConcurrentJSLocker&, unsigned& numberOfLiveNonArgumentValueProfiles, unsigned& numberOfSamplesInProfiles);
Vector<unsigned> setConstantRegisters(const FixedVector<WriteBarrier<Unknown>>& constants, const FixedVector<SourceCodeRepresentation>& constantsSourceCodeRepresentation);
void initializeTemplateObjects(ScriptExecutable* topLevelExecutable, const Vector<unsigned>& templateObjectIndices);
void replaceConstant(VirtualRegister reg, JSValue value)
{
ASSERT(reg.isConstant() && static_cast<size_t>(reg.toConstantIndex()) < m_constantRegisters.size());
m_constantRegisters[reg.toConstantIndex()].set(*m_vm, this, value);
}
template<typename Visitor> bool shouldVisitStrongly(const ConcurrentJSLocker&, Visitor&);
bool shouldJettisonDueToWeakReference(VM&);
template<typename Visitor> bool shouldJettisonDueToOldAge(const ConcurrentJSLocker&, Visitor&);
template<typename Visitor> void propagateTransitions(const ConcurrentJSLocker&, Visitor&);
template<typename Visitor> void determineLiveness(const ConcurrentJSLocker&, Visitor&);
template<typename Visitor> void stronglyVisitStrongReferences(const ConcurrentJSLocker&, Visitor&);
template<typename Visitor> void stronglyVisitWeakReferences(const ConcurrentJSLocker&, Visitor&);
template<typename Visitor> void visitOSRExitTargets(const ConcurrentJSLocker&, Visitor&);
unsigned numberOfNonArgumentValueProfiles() { return totalNumberOfValueProfiles() - numberOfArgumentValueProfiles(); }
unsigned totalNumberOfValueProfiles() { return m_unlinkedCode->numberOfValueProfiles(); }
Seconds timeSinceCreation()
{
return ApproximateTime::now() - m_creationTime;
}
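// The store-store fence below publishes the fully constructed RareData before the
// m_rareData pointer becomes visible to threads that read it without taking m_lock
// (e.g. concurrent GC threads).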
void createRareDataIfNecessary()
{
if (!m_rareData) {
auto rareData = makeUnique<RareData>();
WTF::storeStoreFence();
m_rareData = WTFMove(rareData);
}
}
void insertBasicBlockBoundariesForControlFlowProfiler();
void ensureCatchLivenessIsComputedForBytecodeIndexSlow(const OpCatch&, BytecodeIndex);
template<typename Func>
void forEachStructureStubInfo(Func);
const unsigned m_numCalleeLocals;
const unsigned m_numVars;
unsigned m_numParameters;
unsigned m_numberOfArgumentsToSkip : 31 { 0 };
unsigned m_couldBeTainted : 1 { 0 };
uint32_t m_osrExitCounter { 0 };
union {
unsigned m_debuggerRequests;
struct {
unsigned m_hasDebuggerStatement : 1;
unsigned m_steppingMode : 1;
unsigned m_numBreakpoints : 30;
};
};
unsigned m_bytecodeCost { 0 };
VirtualRegister m_scopeRegister;
mutable CodeBlockHash m_hash;
WriteBarrier<UnlinkedCodeBlock> m_unlinkedCode;
WriteBarrier<ScriptExecutable> m_ownerExecutable;
// m_vm must be a pointer (instead of a reference) because the JSCLLIntOffsetsExtractor
// cannot handle it being a reference.
VM* const m_vm;
const void* const m_instructionsRawPointer { nullptr };
SentinelLinkedList<CallLinkInfoBase, BasicRawSentinelNode<CallLinkInfoBase>> m_incomingCalls;
uint16_t m_optimizationDelayCounter { 0 };
uint16_t m_reoptimizationRetryCounter { 0 };
float m_previousCounter { 0 };
StructureWatchpointMap m_llintGetByIdWatchpointMap;
RefPtr<JSC::JITCode> m_jitCode;
#if ENABLE(JIT)
public:
void* m_jitData { nullptr };
private:
#endif
RefPtr<MetadataTable> m_metadata;
#if ENABLE(DFG_JIT)
// This is relevant to non-DFG code blocks that serve as the profiled code block
// for DFG code blocks.
CompressedLazyValueProfileHolder m_lazyValueProfiles;
#endif
FixedVector<ArgumentValueProfile> m_argumentValueProfiles;
// Constant Pool
static_assert(sizeof(Register) == sizeof(WriteBarrier<Unknown>), "Register must be same size as WriteBarrier Unknown");
// FIXME: This could just be a pointer to m_unlinkedCode's data, but the DFG mutates
// it, so we're stuck with it for now.
Vector<WriteBarrier<Unknown>> m_constantRegisters;
FixedVector<WriteBarrier<FunctionExecutable>> m_functionDecls;
FixedVector<WriteBarrier<FunctionExecutable>> m_functionExprs;
WriteBarrier<CodeBlock> m_alternative;
ApproximateTime m_creationTime;
std::unique_ptr<RareData> m_rareData;
#if ASSERT_ENABLED
Lock m_cachedIdentifierUidsLock;
UncheckedKeyHashSet<UniquedStringImpl*> m_cachedIdentifierUids;
uint32_t m_magic;
#endif
};
/* This check is for normal Release builds; ASSERT_ENABLED changes the size. */
#if !ASSERT_ENABLED
static_assert(sizeof(CodeBlock) <= 224, "Keep it small for memory saving");
#endif
template <typename ExecutableType>
void ScriptExecutable::prepareForExecution(VM& vm, JSFunction* function, JSScope* scope, CodeSpecializationKind kind, CodeBlock*& resultCodeBlock)
{
if (hasJITCodeFor(kind)) {
if constexpr (std::is_same<ExecutableType, EvalExecutable>::value)
resultCodeBlock = jsCast<CodeBlock*>(jsCast<ExecutableType*>(this)->codeBlock());
else if constexpr (std::is_same<ExecutableType, ProgramExecutable>::value)
resultCodeBlock = jsCast<CodeBlock*>(jsCast<ExecutableType*>(this)->codeBlock());
else if constexpr (std::is_same<ExecutableType, ModuleProgramExecutable>::value)
resultCodeBlock = jsCast<CodeBlock*>(jsCast<ExecutableType*>(this)->codeBlock());
else {
static_assert(std::is_same<ExecutableType, FunctionExecutable>::value);
resultCodeBlock = jsCast<CodeBlock*>(jsCast<ExecutableType*>(this)->codeBlockFor(kind));
}
return;
}
prepareForExecutionImpl(vm, function, scope, kind, resultCodeBlock);
}
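// Sketch of a call site (the executable, function, scope, and specialization kind come
// from the caller; none of these names are fixed by this header):
//     CodeBlock* codeBlock = nullptr;
//     executable->prepareForExecution<FunctionExecutable>(vm, function, scope, kind, codeBlock);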
#define CODEBLOCK_LOG_EVENT(codeBlock, summary, details) \
do { \
if (codeBlock) \
(codeBlock->vm().logEvent(codeBlock, summary, [&] () { return toCString details; })); \
} while (0)
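// Example usage (the event name and detail strings are illustrative only):
//     CODEBLOCK_LOG_EVENT(codeBlock, "jettison", ("reason = ", reason));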
void setPrinter(Printer::PrintRecord&, CodeBlock*);
} // namespace JSC
namespace WTF {
JS_EXPORT_PRIVATE void printInternal(PrintStream&, JSC::CodeBlock*);
} // namespace WTF
WTF_ALLOW_UNSAFE_BUFFER_USAGE_END