
Change temporary entrypoints to be lazily allocated (#101580)

* WorkingOnIt

* It basically works for a single example.

Baseline
Loader Heap:
----------------------------------------
System Domain:        7ffab916ec00
LoaderAllocator:      7ffab916ec00
LowFrequencyHeap:     Size: 0xf0000 (983040) bytes total.
HighFrequencyHeap:    Size: 0x16a000 (1482752) bytes total, 0x3000 (12288) bytes wasted.
StubHeap:             Size: 0x1000 (4096) bytes total.
FixupPrecodeHeap:     Size: 0x168000 (1474560) bytes total.
NewStubPrecodeHeap:   Size: 0x18000 (98304) bytes total.
IndirectionCellHeap:  Size: 0x1000 (4096) bytes total.
CacheEntryHeap:       Size: 0x1000 (4096) bytes total.
Total size:           Size: 0x3dd000 (4050944) bytes total, 0x3000 (12288) bytes wasted.

Compare
Loader Heap:
----------------------------------------
System Domain:        7ff9eb49dc00
LoaderAllocator:      7ff9eb49dc00
LowFrequencyHeap:     Size: 0xef000 (978944) bytes total.
HighFrequencyHeap:    Size: 0x1b2000 (1777664) bytes total, 0x3000 (12288) bytes wasted.
StubHeap:             Size: 0x1000 (4096) bytes total.
FixupPrecodeHeap:     Size: 0x70000 (458752) bytes total.
NewStubPrecodeHeap:   Size: 0x10000 (65536) bytes total.
IndirectionCellHeap:  Size: 0x1000 (4096) bytes total.
CacheEntryHeap:       Size: 0x1000 (4096) bytes total.
Total size:           Size: 0x324000 (3293184) bytes total, 0x3000 (12288) bytes wasted.

LowFrequencyHeap is 4KB smaller
HighFrequencyHeap is 288KB bigger
FixupPrecodeHeap is 992KB smaller
NewStubPrecodeHeap is 32KB smaller
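
(Editorial note: the deltas implied by the two dumps above, Compare minus Baseline, as a quick arithmetic check:
LowFrequencyHeap:    978944 - 983040  =    -4096  (4KB smaller)
HighFrequencyHeap:  1777664 - 1482752 =  +294912  (288KB bigger)
FixupPrecodeHeap:    458752 - 1474560 = -1015808  (992KB smaller)
NewStubPrecodeHeap:   65536 - 98304   =   -32768  (32KB smaller)
Total size:         3293184 - 4050944 =  -757760  (roughly 740KB smaller overall))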

* If there isn't a parent MethodTable and the slot matches... then by definition the method is defining the slot

* Fix a couple more issues found when running a subset of the coreclr tests

* Get X86 building again

* Attempt to use a consistent api to force slots to be set

* Put cache around RequiresStableEntryPoint

* Fix typo

* Fix interop-identified issue where we sometimes set a non-Precode into an interface

* Move ARM and X86 to disable compact entry points

* Attempt to fix build breaks

* fix typo

* Fix another Musl validation issue

* More tweaks around NULL handling

* Hopefully the last NULL issue

* Fix more NULL issues

* Fixup obvious issues

* Fix allocation behavior so we don't free the data too early or too late

* Fix musl validation issue

* Fix tiered compilation

* Remove Compact Entrypoint logic

* Add new ISOSDacInterface15 api

* Fix some naming of NoAlloc to the clearer IfExists suffix

* Remove the way in which GetTemporaryEntryPoint behaves differently for DAC builds, and then remove GetTemporaryEntryPoint usage from DAC entirely in favor of GetTemporaryEntryPointIfExists

* Attempt to reduce most of the use of EnsureSlotFilled. Untested, but it's late.

* Fix the build before sending to github

* Fix unix build break, and invalid assert

* Improve assertion checks to validate that we don't allocate temporary entrypoints that will be orphaned if the type doesn't actually end up published.

* Remove unused parameters and add contracts

* Update method-descriptor.md

* Fix musl validation issue

* Adjust SOS api to be an enumerator

* Fix assertion issues noted
Fix ISOSDacInterface15 to actually work

* Remove GetRestoredSlotIfExists
- It's the same as GetSlot, so just replace it with that function.

* Update src/coreclr/debug/daccess/daccess.cpp

Co-authored-by: Jan Kotas <jkotas@microsoft.com>

* Update docs/design/coreclr/botr/method-descriptor.md

Co-authored-by: Jan Kotas <jkotas@microsoft.com>

* Update src/coreclr/vm/methodtable.inl

Co-authored-by: Jan Kotas <jkotas@microsoft.com>

* Update src/coreclr/vm/methodtable.h

Co-authored-by: Jan Kotas <jkotas@microsoft.com>

* Fix GetMethodDescForSlot_NoThrow
Try removing EnsureSlotFilled
Implement IsEligibleForTieredCompilation in terms of IsEligibleForTieredCompilation_NoCheckMethodDescChunk

* Fix missing change intended in last commit

* Fix some more IsPublished memory use issues

* Call the right GetSlot method

* Move another scenario to NoThrow, I think this should clear up our tests...

* Add additional IsPublished check

* Fix MUSL validation build error and Windows x86 build error

* Address code review feedback

* Fix classcompat build

* Update src/coreclr/vm/method.cpp

Co-authored-by: Aaron Robinson <arobins@microsoft.com>

* Remove assert that is invalid because TryGetMultiCallableAddrOfCode can return NULL ... and then another thread could produce a stable entrypoint and the assert could lose the race

* Final (hopefully) code review tweaks.

* It's possible for GetOrCreatePrecode to be called in cases where it isn't required; we need to handle that case.

---------

Co-authored-by: Jan Kotas <jkotas@microsoft.com>
Co-authored-by: Aaron Robinson <arobins@microsoft.com>
David Wrighton 2024-07-14 12:20:21 -07:00, committed by GitHub
commit dacf9dbdd4 (parent 00bddf9e13)
Signed by: github (GPG key ID: B5690EEEBB952194)
42 changed files with 1186 additions and 1091 deletions


@@ -85,7 +85,9 @@ DWORD MethodDesc::GetAttrs()
Method Slots
------------
-Each MethodDesc has a slot, which contains the entry point of the method. The slot and entry point must exist for all methods, even the ones that never run like abstract methods. There are multiple places in the runtime that depend on the 1:1 mapping between entry points and MethodDescs, making this relationship an invariant.
Each MethodDesc has a slot, which contains the current entry point of the method. The slot must exist for all methods, even the ones that never run like abstract methods. There are multiple places in the runtime that depend on mapping between entry points and MethodDescs.
Each MethodDesc logically has an entry point, but we do not allocate these eagerly at MethodDesc creation time. The invariant is that once the method is identified as a method to run, or is used in virtual overriding, we will allocate the entrypoint.
The slot is either in MethodTable or in MethodDesc itself. The location of the slot is determined by `mdcHasNonVtableSlot` bit on MethodDesc.
@@ -185,8 +187,6 @@ The target of the temporary entry point is a PreStub, which is a special kind of
The **stable entry point** is either the native code or the precode. The **native code** is either jitted code or code saved in NGen image. It is common to talk about jitted code when we actually mean native code.
-Temporary entry points are never saved into NGen images. All entry points in NGen images are stable entry points that are never changed. It is an important optimization that reduced private working set.
![Figure 2](images/methoddesc-fig2.png)
Figure 2 Entry Point State Diagram
@@ -208,6 +208,7 @@ The methods to get callable entry points from MethodDesc are:
- `MethodDesc::GetSingleCallableAddrOfCode`
- `MethodDesc::GetMultiCallableAddrOfCode`
- `MethodDesc::TryGetMultiCallableAddrOfCode`
- `MethodDesc::GetSingleCallableAddrOfVirtualizedCode`
- `MethodDesc::GetMultiCallableAddrOfVirtualizedCode`
@@ -220,7 +221,7 @@ The type of precode has to be cheaply computable from the instruction sequence.
**StubPrecode**
-StubPrecode is the basic precode type. It loads MethodDesc into a scratch register and then jumps. It must be implemented for precodes to work. It is used as fallback when no other specialized precode type is available.
StubPrecode is the basic precode type. It loads MethodDesc into a scratch register<sup>2</sup> and then jumps. It must be implemented for precodes to work. It is used as fallback when no other specialized precode type is available.
All other precodes types are optional optimizations that the platform specific files turn on via HAS\_XXX\_PRECODE defines.
@@ -236,7 +237,7 @@ StubPrecode looks like this on x86:
FixupPrecode is used when the final target does not require MethodDesc in scratch register<sup>2</sup>. The FixupPrecode saves a few cycles by avoiding loading MethodDesc into the scratch register.
-The most common usage of FixupPrecode is for method fixups in NGen images.
Most stubs used are the more efficient form, we currently can use this form for everything but interop methods when a specialized form of Precode is not required.
The initial state of the FixupPrecode on x86:
@@ -254,67 +255,6 @@ Once it has been patched to point to final target:
<sup>2</sup> Passing MethodDesc in scratch register is sometimes referred to as **MethodDesc Calling Convention**.
-**FixupPrecode chunks**
-FixupPrecode chunk is a space efficient representation of multiple FixupPrecodes. It mirrors the idea of MethodDescChunk by hoisting the similar MethodDesc pointers from multiple FixupPrecodes to a shared area.
-The FixupPrecode chunk saves space and improves code density of the precodes. The code density improvement from FixupPrecode chunks resulted in 1% - 2% gain in big server scenarios on x64.
-The FixupPrecode chunks looks like this on x86:
-	jmp Target2
-	pop edi // dummy instruction that marks the type of the precode
-	db MethodDescChunkIndex
-	db 2 (PrecodeChunkIndex)
-	jmp Target1
-	pop edi
-	db MethodDescChunkIndex
-	db 1 (PrecodeChunkIndex)
-	jmp Target0
-	pop edi
-	db MethodDescChunkIndex
-	db 0 (PrecodeChunkIndex)
-	dw pMethodDescBase
-One FixupPrecode chunk corresponds to one MethodDescChunk. There is no 1:1 mapping between the FixupPrecodes in the chunk and MethodDescs in MethodDescChunk though. Each FixupPrecode has index of the method it belongs to. It allows allocating the FixupPrecode in the chunk only for methods that need it.
-**Compact entry points**
-Compact entry point is a space efficient implementation of temporary entry points.
-Temporary entry points implemented using StubPrecode or FixupPrecode can be patched to point to the actual code. Jitted code can call temporary entry point directly. The temporary entry point can be multicallable entry points in this case.
-Compact entry points cannot be patched to point to the actual code. Jitted code cannot call them directly. They are trading off speed for size. Calls to these entry points are indirected via slots in a table (FuncPtrStubs) that are patched to point to the actual entry point eventually. A request for a multicallable entry point allocates a StubPrecode or FixupPrecode on demand in this case.
-The raw speed difference is the cost of an indirect call for a compact entry point vs. the cost of one direct call and one direct jump on the given platform. The later used to be faster by a few percent in large server scenario since it can be predicted by the hardware better (2005). It is not always the case on current (2015) hardware.
-The compact entry points have been historically implemented on x86 only. Their additional complexity, space vs. speed trade-off and hardware advancements made them unjustified on other platforms.
-The compact entry point on x86 looks like this:
-	entrypoint0:
-	 mov al,0
-	 jmp short Dispatch
-	entrypoint1:
-	 mov al,1
-	 jmp short Dispatch
-	entrypoint2:
-	 mov al,2
-	 jmp short Dispatch
-	Dispatch:
-	 movzx eax,al
-	 shl eax, 3
-	 add eax, pBaseMD
-	 jmp PreStub
-The allocation of temporary entry points always tries to pick the smallest temporary entry point from the available choices. For example, a single compact entry point is bigger than a single StubPrecode on x86. The StubPrecode will be preferred over the compact entry point in this case. The allocation of the precode for a stable entry point will try to reuse an allocated temporary entry point precode if one exists of the matching type.
**ThisPtrRetBufPrecode**
ThisPtrRetBufPrecode is used to switch a return buffer and the this pointer for open instance delegates returning valuetypes. It is used to convert the calling convention of MyValueType Bar(Foo x) to the calling convention of MyValueType Foo::Bar().


@@ -3239,6 +3239,10 @@ ClrDataAccess::QueryInterface(THIS_
    {
        ifaceRet = static_cast<ISOSDacInterface14*>(this);
    }
    else if (IsEqualIID(interfaceId, __uuidof(ISOSDacInterface15)))
    {
        ifaceRet = static_cast<ISOSDacInterface15*>(this);
    }
    else
    {
        *iface = NULL;
@@ -8341,6 +8345,44 @@ HRESULT DacMemoryEnumerator::Next(unsigned int count, SOSMemoryRegion regions[],
    return i < count ? S_FALSE : S_OK;
}
HRESULT DacMethodTableSlotEnumerator::Skip(unsigned int count)
{
mIteratorIndex += count;
return S_OK;
}
HRESULT DacMethodTableSlotEnumerator::Reset()
{
mIteratorIndex = 0;
return S_OK;
}
HRESULT DacMethodTableSlotEnumerator::GetCount(unsigned int* pCount)
{
if (!pCount)
return E_POINTER;
*pCount = mMethods.GetCount();
return S_OK;
}
HRESULT DacMethodTableSlotEnumerator::Next(unsigned int count, SOSMethodData methods[], unsigned int* pFetched)
{
if (!pFetched)
return E_POINTER;
if (!methods)
return E_POINTER;
unsigned int i = 0;
while (i < count && mIteratorIndex < mMethods.GetCount())
{
methods[i++] = mMethods.Get(mIteratorIndex++);
}
*pFetched = i;
return i < count ? S_FALSE : S_OK;
}
HRESULT DacGCBookkeepingEnumerator::Init()
{


@@ -818,7 +818,8 @@ class ClrDataAccess
    public ISOSDacInterface11,
    public ISOSDacInterface12,
    public ISOSDacInterface13,
-   public ISOSDacInterface14
    public ISOSDacInterface14,
    public ISOSDacInterface15
{
public:
    ClrDataAccess(ICorDebugDataTarget * pTarget, ICLRDataTarget * pLegacyTarget=0);
@@ -1223,6 +1224,9 @@ public:
    virtual HRESULT STDMETHODCALLTYPE GetThreadStaticBaseAddress(CLRDATA_ADDRESS methodTable, CLRDATA_ADDRESS thread, CLRDATA_ADDRESS *nonGCStaticsAddress, CLRDATA_ADDRESS *GCStaticsAddress);
    virtual HRESULT STDMETHODCALLTYPE GetMethodTableInitializationFlags(CLRDATA_ADDRESS methodTable, MethodTableInitializationFlags *initializationStatus);
    // ISOSDacInterface15
    virtual HRESULT STDMETHODCALLTYPE GetMethodTableSlotEnumerator(CLRDATA_ADDRESS mt, ISOSMethodEnum **enumerator);
    //
    // ClrDataAccess.
    //
@@ -1993,6 +1997,29 @@ private:
    unsigned int mIteratorIndex;
};
class DacMethodTableSlotEnumerator : public DefaultCOMImpl<ISOSMethodEnum, IID_ISOSMethodEnum>
{
public:
DacMethodTableSlotEnumerator() : mIteratorIndex(0)
{
}
virtual ~DacMethodTableSlotEnumerator() {}
HRESULT Init(PTR_MethodTable mTable);
HRESULT STDMETHODCALLTYPE Skip(unsigned int count);
HRESULT STDMETHODCALLTYPE Reset();
HRESULT STDMETHODCALLTYPE GetCount(unsigned int *pCount);
HRESULT STDMETHODCALLTYPE Next(unsigned int count, SOSMethodData methods[], unsigned int *pFetched);
protected:
DacReferenceList<SOSMethodData> mMethods;
private:
unsigned int mIteratorIndex;
};
class DacHandleTableMemoryEnumerator : public DacMemoryEnumerator
{
public:


@@ -214,11 +214,15 @@ BOOL DacValidateMD(PTR_MethodDesc pMD)
    if (retval)
    {
-       MethodDesc *pMDCheck = MethodDesc::GetMethodDescFromStubAddr(pMD->GetTemporaryEntryPoint(), TRUE);
-       if (PTR_HOST_TO_TADDR(pMD) != PTR_HOST_TO_TADDR(pMDCheck))
-       {
-           retval = FALSE;
-       }
        PCODE tempEntryPoint = pMD->GetTemporaryEntryPointIfExists();
        if (tempEntryPoint != (PCODE)NULL)
        {
            MethodDesc *pMDCheck = MethodDesc::GetMethodDescFromStubAddr(tempEntryPoint, TRUE);
            if (PTR_HOST_TO_TADDR(pMD) != PTR_HOST_TO_TADDR(pMDCheck))
            {
                retval = FALSE;
            }
        }
    }
@@ -419,7 +423,11 @@ ClrDataAccess::GetMethodTableSlot(CLRDATA_ADDRESS mt, unsigned int slot, CLRDATA
    else if (slot < mTable->GetNumVtableSlots())
    {
        // Now get the slot:
-       *value = mTable->GetRestoredSlot(slot);
        *value = mTable->GetSlot(slot);
        if (*value == 0)
        {
            hr = S_FALSE;
        }
    }
    else
    {
@@ -430,8 +438,16 @@ ClrDataAccess::GetMethodTableSlot(CLRDATA_ADDRESS mt, unsigned int slot, CLRDATA
            MethodDesc * pMD = it.GetMethodDesc();
            if (pMD->GetSlot() == slot)
            {
-               *value = pMD->GetMethodEntryPoint();
-               hr = S_OK;
                *value = pMD->GetMethodEntryPointIfExists();
                if (*value == 0)
                {
                    hr = S_FALSE;
                }
                else
                {
                    hr = S_OK;
                }
                break;
            }
        }
    }
@@ -440,6 +456,89 @@ ClrDataAccess::GetMethodTableSlot(CLRDATA_ADDRESS mt, unsigned int slot, CLRDATA
    return hr;
}
HRESULT
ClrDataAccess::GetMethodTableSlotEnumerator(CLRDATA_ADDRESS mt, ISOSMethodEnum **enumerator)
{
if (mt == 0 || enumerator == NULL)
return E_INVALIDARG;
SOSDacEnter();
PTR_MethodTable mTable = PTR_MethodTable(TO_TADDR(mt));
BOOL bIsFree = FALSE;
if (!DacValidateMethodTable(mTable, bIsFree))
{
hr = E_INVALIDARG;
}
else
{
DacMethodTableSlotEnumerator *methodTableSlotEnumerator = new (nothrow) DacMethodTableSlotEnumerator();
*enumerator = methodTableSlotEnumerator;
if (*enumerator == NULL)
{
hr = E_OUTOFMEMORY;
}
else
{
hr = methodTableSlotEnumerator->Init(mTable);
}
}
SOSDacLeave();
return hr;
}
HRESULT DacMethodTableSlotEnumerator::Init(PTR_MethodTable mTable)
{
unsigned int slot = 0;
WORD numVtableSlots = mTable->GetNumVtableSlots();
while (slot < numVtableSlots)
{
MethodDesc* pMD = mTable->GetMethodDescForSlot_NoThrow(slot);
SOSMethodData methodData = {0};
methodData.MethodDesc = HOST_CDADDR(pMD);
methodData.Entrypoint = mTable->GetSlot(slot);
methodData.DefininingMethodTable = PTR_CDADDR(pMD->GetMethodTable());
methodData.DefiningModule = HOST_CDADDR(pMD->GetModule());
methodData.Token = pMD->GetMemberDef();
methodData.Slot = slot++;
if (!mMethods.Add(methodData))
return E_OUTOFMEMORY;
}
MethodTable::IntroducedMethodIterator it(mTable);
for (; it.IsValid(); it.Next())
{
MethodDesc* pMD = it.GetMethodDesc();
WORD slot = pMD->GetSlot();
if (slot >= numVtableSlots)
{
SOSMethodData methodData = {0};
methodData.MethodDesc = HOST_CDADDR(pMD);
methodData.Entrypoint = pMD->GetMethodEntryPointIfExists();
methodData.DefininingMethodTable = PTR_CDADDR(pMD->GetMethodTable());
methodData.DefiningModule = HOST_CDADDR(pMD->GetModule());
methodData.Token = pMD->GetMemberDef();
if (slot == MethodTable::NO_SLOT)
{
methodData.Slot = 0xFFFFFFFF;
}
else
{
methodData.Slot = slot;
}
if (!mMethods.Add(methodData))
return E_OUTOFMEMORY;
}
}
return S_OK;
}
HRESULT
ClrDataAccess::GetCodeHeapList(CLRDATA_ADDRESS jitManager, unsigned int count, struct DacpJitCodeHeapInfo codeHeaps[], unsigned int *pNeeded)


@@ -894,7 +894,7 @@ enum CORINFO_ACCESS_FLAGS
{
    CORINFO_ACCESS_ANY = 0x0000, // Normal access
    CORINFO_ACCESS_THIS = 0x0001, // Accessed via the this reference
-   // UNUSED = 0x0002,
    CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT = 0x0002, // Prefer access to a method via slot over using the temporary entrypoint
    CORINFO_ACCESS_NONNULL = 0x0004, // Instance is guaranteed non-null


@@ -13,10 +13,6 @@
DEFINE_DACGFN(DACNotifyCompilationFinished)
DEFINE_DACGFN(ThePreStub)
-#ifdef TARGET_ARM
-DEFINE_DACGFN(ThePreStubCompactARM)
-#endif
DEFINE_DACGFN(ThePreStubPatchLabel)
#ifdef FEATURE_COMINTEROP
DEFINE_DACGFN(Unknown_AddRef)


@@ -519,3 +519,46 @@ interface ISOSDacInterface14 : IUnknown
    HRESULT GetThreadStaticBaseAddress(CLRDATA_ADDRESS methodTable, CLRDATA_ADDRESS thread, CLRDATA_ADDRESS *nonGCStaticsAddress, CLRDATA_ADDRESS *GCStaticsAddress);
    HRESULT GetMethodTableInitializationFlags(CLRDATA_ADDRESS methodTable, MethodTableInitializationFlags *initializationStatus);
}
cpp_quote("#ifndef _SOS_MethodData")
cpp_quote("#define _SOS_MethodData")
typedef struct _SOSMethodData
{
// At least one of MethodDesc, Entrypoint, or Token/DefiningMethodTable/DefiningModule is guaranteed to be set.
// Multiple of them may be set as well
CLRDATA_ADDRESS MethodDesc;
CLRDATA_ADDRESS Entrypoint;
CLRDATA_ADDRESS DefininingMethodTable; // Useful for when the method is inherited from a parent type which is instantiated
CLRDATA_ADDRESS DefiningModule;
unsigned int Token;
// Slot data, a given MethodDesc may be present in multiple slots for a single MethodTable
unsigned int Slot; // Will be set to 0xFFFFFFFF for EnC added methods
} SOSMethodData;
cpp_quote("#endif //_SOS_MethodData")
[
object,
local,
uuid(3c0fe725-c324-4a4f-8100-d399588a662e)
]
interface ISOSMethodEnum : ISOSEnum
{
HRESULT Next([in] unsigned int count,
[out, size_is(count), length_is(*pNeeded)] SOSMethodData handles[],
[out] unsigned int *pNeeded);
}
[
object,
local,
uuid(7ed81261-52a9-4a23-a358-c3313dea30a8)
]
interface ISOSDacInterface15 : IUnknown
{
HRESULT GetMethodTableSlotEnumerator(CLRDATA_ADDRESS mt, ISOSMethodEnum **enumerator);
}
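
(Editorial note: for orientation, a minimal, hypothetical consumer-side sketch of driving the new interface declared above. It assumes the generated sospriv header is included and that pSos is an existing SOS/DAC interface pointer already obtained by the tool, with mt a CLRDATA_ADDRESS of a MethodTable; the variable names and batch size are illustrative, not part of this change.

    ISOSDacInterface15* pSos15 = nullptr;
    if (SUCCEEDED(pSos->QueryInterface(__uuidof(ISOSDacInterface15), (void**)&pSos15)))
    {
        ISOSMethodEnum* pMethodEnum = nullptr;
        if (SUCCEEDED(pSos15->GetMethodTableSlotEnumerator(mt, &pMethodEnum)))
        {
            SOSMethodData methods[16];
            unsigned int fetched = 0;
            // Next returns S_FALSE (still a success code) once fewer than the requested count remain.
            while (SUCCEEDED(pMethodEnum->Next(16, methods, &fetched)) && fetched > 0)
            {
                for (unsigned int i = 0; i < fetched; i++)
                {
                    // Entrypoint may be 0 when no entry point has been allocated yet;
                    // Slot is 0xFFFFFFFF for methods without a real slot (e.g. EnC-added methods).
                    printf("slot %u md %016llx entry %016llx token %08x\n",
                           methods[i].Slot, methods[i].MethodDesc, methods[i].Entrypoint, methods[i].Token);
                }
                if (fetched < 16)
                    break;
            }
            pMethodEnum->Release();
        }
        pSos15->Release();
    }
)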


@@ -121,6 +121,12 @@ MIDL_DEFINE_GUID(IID, IID_ISOSDacInterface13,0x3176a8ed,0x597b,0x4f54,0xa7,0x1f,
MIDL_DEFINE_GUID(IID, IID_ISOSDacInterface14,0x9aa22aca,0x6dc6,0x4a0c,0xb4,0xe0,0x70,0xd2,0x41,0x6b,0x98,0x37);
MIDL_DEFINE_GUID(IID, IID_ISOSMethodEnum,0x3c0fe725,0xc324,0x4a4f,0x81,0x00,0xd3,0x99,0x58,0x8a,0x66,0x2e);
MIDL_DEFINE_GUID(IID, IID_ISOSDacInterface15,0x7ed81261,0x52a9,0x4a23,0xa3,0x58,0xc3,0x31,0x3d,0xea,0x30,0xa8);
#undef MIDL_DEFINE_GUID
#ifdef __cplusplus


@@ -3333,6 +3333,27 @@ EXTERN_C const IID IID_ISOSDacInterface13;
#define ISOSDacInterface13_TraverseLoaderHeap(This,loaderHeapAddr,kind,pCallback) \
    ( (This)->lpVtbl -> TraverseLoaderHeap(This,loaderHeapAddr,kind,pCallback) )
#define ISOSDacInterface13_GetDomainLoaderAllocator(This,domainAddress,pLoaderAllocator) \
( (This)->lpVtbl -> GetDomainLoaderAllocator(This,domainAddress,pLoaderAllocator) )
#define ISOSDacInterface13_GetLoaderAllocatorHeapNames(This,count,ppNames,pNeeded) \
( (This)->lpVtbl -> GetLoaderAllocatorHeapNames(This,count,ppNames,pNeeded) )
#define ISOSDacInterface13_GetLoaderAllocatorHeaps(This,loaderAllocator,count,pLoaderHeaps,pKinds,pNeeded) \
( (This)->lpVtbl -> GetLoaderAllocatorHeaps(This,loaderAllocator,count,pLoaderHeaps,pKinds,pNeeded) )
#define ISOSDacInterface13_GetHandleTableMemoryRegions(This,ppEnum) \
( (This)->lpVtbl -> GetHandleTableMemoryRegions(This,ppEnum) )
#define ISOSDacInterface13_GetGCBookkeepingMemoryRegions(This,ppEnum) \
( (This)->lpVtbl -> GetGCBookkeepingMemoryRegions(This,ppEnum) )
#define ISOSDacInterface13_GetGCFreeRegions(This,ppEnum) \
( (This)->lpVtbl -> GetGCFreeRegions(This,ppEnum) )
#define ISOSDacInterface13_LockedFlush(This) \
( (This)->lpVtbl -> LockedFlush(This) )
#endif /* COBJMACROS */
@@ -3456,6 +3477,214 @@ EXTERN_C const IID IID_ISOSDacInterface14;
#endif /* __ISOSDacInterface14_INTERFACE_DEFINED__ */
/* interface __MIDL_itf_sospriv_0000_0019 */
/* [local] */
#ifndef _SOS_MethodData
#define _SOS_MethodData
typedef struct _SOSMethodData
{
CLRDATA_ADDRESS MethodDesc;
CLRDATA_ADDRESS Entrypoint;
CLRDATA_ADDRESS DefininingMethodTable;
CLRDATA_ADDRESS DefiningModule;
unsigned int Token;
unsigned int Slot;
} SOSMethodData;
#endif //_SOS_MethodData
extern RPC_IF_HANDLE __MIDL_itf_sospriv_0000_0019_v0_0_c_ifspec;
extern RPC_IF_HANDLE __MIDL_itf_sospriv_0000_0019_v0_0_s_ifspec;
#ifndef __ISOSMethodEnum_INTERFACE_DEFINED__
#define __ISOSMethodEnum_INTERFACE_DEFINED__
/* interface ISOSMethodEnum */
/* [uuid][local][object] */
EXTERN_C const IID IID_ISOSMethodEnum;
#if defined(__cplusplus) && !defined(CINTERFACE)
MIDL_INTERFACE("3c0fe725-c324-4a4f-8100-d399588a662e")
ISOSMethodEnum : public ISOSEnum
{
public:
virtual HRESULT STDMETHODCALLTYPE Next(
/* [in] */ unsigned int count,
/* [length_is][size_is][out] */ SOSMethodData handles[ ],
/* [out] */ unsigned int *pNeeded) = 0;
};
#else /* C style interface */
typedef struct ISOSMethodEnumVtbl
{
BEGIN_INTERFACE
HRESULT ( STDMETHODCALLTYPE *QueryInterface )(
ISOSMethodEnum * This,
/* [in] */ REFIID riid,
/* [annotation][iid_is][out] */
_COM_Outptr_ void **ppvObject);
ULONG ( STDMETHODCALLTYPE *AddRef )(
ISOSMethodEnum * This);
ULONG ( STDMETHODCALLTYPE *Release )(
ISOSMethodEnum * This);
HRESULT ( STDMETHODCALLTYPE *Skip )(
ISOSMethodEnum * This,
/* [in] */ unsigned int count);
HRESULT ( STDMETHODCALLTYPE *Reset )(
ISOSMethodEnum * This);
HRESULT ( STDMETHODCALLTYPE *GetCount )(
ISOSMethodEnum * This,
/* [out] */ unsigned int *pCount);
HRESULT ( STDMETHODCALLTYPE *Next )(
ISOSMethodEnum * This,
/* [in] */ unsigned int count,
/* [length_is][size_is][out] */ SOSMethodData handles[ ],
/* [out] */ unsigned int *pNeeded);
END_INTERFACE
} ISOSMethodEnumVtbl;
interface ISOSMethodEnum
{
CONST_VTBL struct ISOSMethodEnumVtbl *lpVtbl;
};
#ifdef COBJMACROS
#define ISOSMethodEnum_QueryInterface(This,riid,ppvObject) \
( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) )
#define ISOSMethodEnum_AddRef(This) \
( (This)->lpVtbl -> AddRef(This) )
#define ISOSMethodEnum_Release(This) \
( (This)->lpVtbl -> Release(This) )
#define ISOSMethodEnum_Skip(This,count) \
( (This)->lpVtbl -> Skip(This,count) )
#define ISOSMethodEnum_Reset(This) \
( (This)->lpVtbl -> Reset(This) )
#define ISOSMethodEnum_GetCount(This,pCount) \
( (This)->lpVtbl -> GetCount(This,pCount) )
#define ISOSMethodEnum_Next(This,count,handles,pNeeded) \
( (This)->lpVtbl -> Next(This,count,handles,pNeeded) )
#endif /* COBJMACROS */
#endif /* C style interface */
#endif /* __ISOSMethodEnum_INTERFACE_DEFINED__ */
#ifndef __ISOSDacInterface15_INTERFACE_DEFINED__
#define __ISOSDacInterface15_INTERFACE_DEFINED__
/* interface ISOSDacInterface15 */
/* [uuid][local][object] */
EXTERN_C const IID IID_ISOSDacInterface15;
#if defined(__cplusplus) && !defined(CINTERFACE)
MIDL_INTERFACE("7ed81261-52a9-4a23-a358-c3313dea30a8")
ISOSDacInterface15 : public IUnknown
{
public:
virtual HRESULT STDMETHODCALLTYPE GetMethodTableSlotEnumerator(
CLRDATA_ADDRESS mt,
ISOSMethodEnum **enumerator) = 0;
};
#else /* C style interface */
typedef struct ISOSDacInterface15Vtbl
{
BEGIN_INTERFACE
HRESULT ( STDMETHODCALLTYPE *QueryInterface )(
ISOSDacInterface15 * This,
/* [in] */ REFIID riid,
/* [annotation][iid_is][out] */
_COM_Outptr_ void **ppvObject);
ULONG ( STDMETHODCALLTYPE *AddRef )(
ISOSDacInterface15 * This);
ULONG ( STDMETHODCALLTYPE *Release )(
ISOSDacInterface15 * This);
HRESULT ( STDMETHODCALLTYPE *GetMethodTableSlotEnumerator )(
ISOSDacInterface15 * This,
CLRDATA_ADDRESS mt,
ISOSMethodEnum **enumerator);
END_INTERFACE
} ISOSDacInterface15Vtbl;
interface ISOSDacInterface15
{
CONST_VTBL struct ISOSDacInterface15Vtbl *lpVtbl;
};
#ifdef COBJMACROS
#define ISOSDacInterface15_QueryInterface(This,riid,ppvObject) \
( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) )
#define ISOSDacInterface15_AddRef(This) \
( (This)->lpVtbl -> AddRef(This) )
#define ISOSDacInterface15_Release(This) \
( (This)->lpVtbl -> Release(This) )
#define ISOSDacInterface15_GetMethodTableSlotEnumerator(This,mt,enumerator) \
( (This)->lpVtbl -> GetMethodTableSlotEnumerator(This,mt,enumerator) )
#endif /* COBJMACROS */
#endif /* C style interface */
#endif /* __ISOSDacInterface15_INTERFACE_DEFINED__ */
/* Additional Prototypes for ALL interfaces */
/* end of Additional Prototypes */


@@ -546,7 +546,7 @@ namespace Internal.JitInterface
{
    CORINFO_ACCESS_ANY = 0x0000, // Normal access
    CORINFO_ACCESS_THIS = 0x0001, // Accessed via the this reference
-   // CORINFO_ACCESS_UNUSED = 0x0002,
    CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT = 0x0002, // Prefer access to a method via slot over using the temporary entrypoint
    CORINFO_ACCESS_NONNULL = 0x0004, // Instance is guaranteed non-null


@@ -210,24 +210,6 @@ LOCAL_LABEL(LNullThis):
NESTED_END ThePreStub, _TEXT
-// ------------------------------------------------------------------
-NESTED_ENTRY ThePreStubCompactARM, _TEXT, NoHandler
-    // r12 - address of compact entry point + PC_REG_RELATIVE_OFFSET
-    PROLOG_WITH_TRANSITION_BLOCK
-    mov r0, r12
-    bl C_FUNC(PreStubGetMethodDescForCompactEntryPoint)
-    mov r12, r0 // pMethodDesc
-    EPILOG_WITH_TRANSITION_BLOCK_TAILCALL
-    b C_FUNC(ThePreStub)
-NESTED_END ThePreStubCompactARM, _TEXT
// ------------------------------------------------------------------
// This method does nothing. It's just a fixed function for the debugger to put a breakpoint on.
LEAF_ENTRY ThePreStubPatch, _TEXT


@@ -71,8 +71,6 @@ EXTERN_C void checkStack(void);
#define JUMP_ALLOCATE_SIZE 8 // # bytes to allocate for a jump instruction
#define BACK_TO_BACK_JUMP_ALLOCATE_SIZE 8 // # bytes to allocate for a back to back jump instruction
-#define HAS_COMPACT_ENTRYPOINTS 1
#define HAS_NDIRECT_IMPORT_PRECODE 1
EXTERN_C void getFPReturn(int fpSize, INT64 *pRetVal);


@@ -1381,11 +1381,14 @@ VOID StubLinkerCPU::EmitShuffleThunk(ShuffleEntry *pShuffleEntryArray)
void StubLinkerCPU::ThumbEmitTailCallManagedMethod(MethodDesc *pMD)
{
    STANDARD_VM_CONTRACT;
    PCODE multiCallableAddr = pMD->TryGetMultiCallableAddrOfCode(CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT);
    // Use direct call if possible.
-   if (pMD->HasStableEntryPoint())
    if (multiCallableAddr != (PCODE)NULL)
    {
        // mov r12, #entry_point
-       ThumbEmitMovConstant(ThumbReg(12), (TADDR)pMD->GetStableEntryPoint());
        ThumbEmitMovConstant(ThumbReg(12), (TADDR)multiCallableAddr);
    }
    else
    {


@@ -1614,6 +1614,8 @@ VOID StubLinkerCPU::EmitComputedInstantiatingMethodStub(MethodDesc* pSharedMD, s
void StubLinkerCPU::EmitCallLabel(CodeLabel *target, BOOL fTailCall, BOOL fIndirect)
{
    STANDARD_VM_CONTRACT;
    BranchInstructionFormat::VariationCodes variationCode = BranchInstructionFormat::VariationCodes::BIF_VAR_JUMP;
    if (!fTailCall)
        variationCode = static_cast<BranchInstructionFormat::VariationCodes>(variationCode | BranchInstructionFormat::VariationCodes::BIF_VAR_CALL);
@@ -1626,10 +1628,14 @@ void StubLinkerCPU::EmitCallLabel(CodeLabel *target, BOOL fTailCall, BOOL fIndir
void StubLinkerCPU::EmitCallManagedMethod(MethodDesc *pMD, BOOL fTailCall)
{
    STANDARD_VM_CONTRACT;
    PCODE multiCallableAddr = pMD->TryGetMultiCallableAddrOfCode(CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT);
    // Use direct call if possible.
-   if (pMD->HasStableEntryPoint())
    if (multiCallableAddr != (PCODE)NULL)
    {
-       EmitCallLabel(NewExternalCodeLabel((LPVOID)pMD->GetStableEntryPoint()), fTailCall, FALSE);
        EmitCallLabel(NewExternalCodeLabel((LPVOID)multiCallableAddr), fTailCall, FALSE);
    }
    else
    {


@@ -185,7 +185,6 @@ void ArrayClass::InitArrayMethodDesc(
    PCCOR_SIGNATURE pShortSig,
    DWORD cShortSig,
    DWORD dwVtableSlot,
-   LoaderAllocator *pLoaderAllocator,
    AllocMemTracker *pamTracker)
{
    STANDARD_VM_CONTRACT;
@@ -198,7 +197,7 @@ void ArrayClass::InitArrayMethodDesc(
    pNewMD->SetStoredMethodSig(pShortSig, cShortSig);
    _ASSERTE(!pNewMD->MayHaveNativeCode());
-   pNewMD->SetTemporaryEntryPoint(pLoaderAllocator, pamTracker);
    pNewMD->SetTemporaryEntryPoint(pamTracker);
#ifdef _DEBUG
    _ASSERTE(pNewMD->GetMethodName() && GetDebugClassName());
@@ -509,7 +508,7 @@ MethodTable* Module::CreateArrayMethodTable(TypeHandle elemTypeHnd, CorElementTy
    pClass->GenerateArrayAccessorCallSig(dwFuncRank, dwFuncType, &pSig, &cSig, pAllocator, pamTracker, FALSE);
-   pClass->InitArrayMethodDesc(pNewMD, pSig, cSig, numVirtuals + dwMethodIndex, pAllocator, pamTracker);
    pClass->InitArrayMethodDesc(pNewMD, pSig, cSig, numVirtuals + dwMethodIndex, pamTracker);
    dwMethodIndex++;
}


@@ -801,7 +801,7 @@ HRESULT EEClass::AddMethodDesc(
        COMMA_INDEBUG(NULL)
    );
-   pNewMD->SetTemporaryEntryPoint(pAllocator, &dummyAmTracker);
    pNewMD->SetTemporaryEntryPoint(&dummyAmTracker);
    // [TODO] if an exception is thrown, asserts will fire in EX_CATCH_HRESULT()
    // during an EnC operation due to the debugger thread not being able to
@@ -1407,7 +1407,7 @@ void ClassLoader::ValidateMethodsWithCovariantReturnTypes(MethodTable* pMT)
    {
        // The real check is that the MethodDesc's must not match, but a simple VTable check will
        // work most of the time, and is far faster than the GetMethodDescForSlot method.
-       _ASSERTE(pMT->GetMethodDescForSlot(i) == pParentMT->GetMethodDescForSlot(i));
        _ASSERTE(pMT->GetMethodDescForSlot_NoThrow(i) == pParentMT->GetMethodDescForSlot_NoThrow(i));
        continue;
    }
    MethodDesc* pMD = pMT->GetMethodDescForSlot(i);
@@ -1525,7 +1525,7 @@ void ClassLoader::PropagateCovariantReturnMethodImplSlots(MethodTable* pMT)
    {
        // The real check is that the MethodDesc's must not match, but a simple VTable check will
        // work most of the time, and is far faster than the GetMethodDescForSlot method.
-       _ASSERTE(pMT->GetMethodDescForSlot(i) == pParentMT->GetMethodDescForSlot(i));
        _ASSERTE(pMT->GetMethodDescForSlot_NoThrow(i) == pParentMT->GetMethodDescForSlot_NoThrow(i));
        continue;
    }
@@ -1575,7 +1575,7 @@ void ClassLoader::PropagateCovariantReturnMethodImplSlots(MethodTable* pMT)
    // This is a vtable slot that needs to be updated to the new overriding method because of the
    // presence of the attribute.
    pMT->SetSlot(j, pMT->GetSlot(i));
-   _ASSERT(pMT->GetMethodDescForSlot(j) == pMD);
    _ASSERT(pMT->GetMethodDescForSlot_NoThrow(j) == pMD);
    if (!hMTData.IsNull())
        hMTData->UpdateImplMethodDesc(pMD, j);


@@ -1983,7 +1983,6 @@ public:
    PCCOR_SIGNATURE pShortSig,
    DWORD cShortSig,
    DWORD dwVtableSlot,
-   LoaderAllocator *pLoaderAllocator,
    AllocMemTracker *pamTracker);
    // Generate a short sig for an array accessor
@@ -2064,17 +2063,6 @@ inline PCODE GetPreStubEntryPoint()
    return GetEEFuncEntryPoint(ThePreStub);
}
-#if defined(HAS_COMPACT_ENTRYPOINTS) && defined(TARGET_ARM)
-EXTERN_C void STDCALL ThePreStubCompactARM();
-inline PCODE GetPreStubCompactARMEntryPoint()
-{
-    return GetEEFuncEntryPoint(ThePreStubCompactARM);
-}
-#endif // defined(HAS_COMPACT_ENTRYPOINTS) && defined(TARGET_ARM)
PCODE TheUMThunkPreStub();
PCODE TheVarargNDirectStub(BOOL hasRetBuffArg);


@@ -2775,6 +2775,16 @@ TypeHandle ClassLoader::PublishType(const TypeKey *pTypeKey, TypeHandle typeHnd)
    }
    CONTRACTL_END;
#ifdef _DEBUG
    if (!typeHnd.IsTypeDesc())
    {
        // The IsPublished flag is used by various asserts to assure that allocations of
        // MethodTable associated memory which do not use the AllocMemTracker of the MethodTableBuilder
        // aren't permitted until the MethodTable is in a state where the MethodTable object
        // cannot be freed (except by freeing an entire LoaderAllocator)
        typeHnd.AsMethodTable()->GetAuxiliaryDataForWrite()->SetIsPublished();
    }
#endif
    if (pTypeKey->IsConstructed())
    {


@@ -1503,7 +1503,7 @@ extern "C" void QCALLTYPE Interlocked_MemoryBarrierProcessWide()
static BOOL HasOverriddenMethod(MethodTable* mt, MethodTable* classMT, WORD methodSlot)
{
    CONTRACTL{
-       NOTHROW;
        THROWS;
        GC_NOTRIGGER;
        MODE_ANY;
    } CONTRACTL_END;
@@ -1811,7 +1811,7 @@ static WORD g_slotBeginWrite, g_slotEndWrite;
static bool HasOverriddenStreamMethod(MethodTable * pMT, WORD slot)
{
    CONTRACTL{
-       NOTHROW;
        THROWS;
        GC_NOTRIGGER;
        MODE_ANY;
    } CONTRACTL_END;


@@ -189,7 +189,7 @@ void DynamicMethodTable::AddMethodsToList()
    pResolver->m_DynamicMethodTable = this;
    pNewMD->m_pResolver = pResolver;
-   pNewMD->SetTemporaryEntryPoint(m_pDomain->GetLoaderAllocator(), &amt);
    pNewMD->SetTemporaryEntryPoint(&amt);
#ifdef _DEBUG
    pNewMD->m_pDebugMethodTable = m_pMethodTable;


@@ -566,7 +566,7 @@ BOOL PrestubMethodFrame::TraceFrame(Thread *thread, BOOL fromPatch,
        // native code versions, even if they aren't the one that was reported by this trace, see
        // DebuggerController::PatchTrace() under case TRACE_MANAGED. This alleviates the StubManager from having to prevent the
        // race that occurs here.
-       trace->InitForStub(GetFunction()->GetMethodEntryPoint());
        trace->InitForStub(GetFunction()->GetMethodEntryPointIfExists());
    }
    else
    {
@@ -612,7 +612,7 @@ MethodDesc* StubDispatchFrame::GetFunction()
    {
        if (m_pRepresentativeMT != NULL)
        {
-           pMD = m_pRepresentativeMT->GetMethodDescForSlot(m_representativeSlot);
            pMD = m_pRepresentativeMT->GetMethodDescForSlot_NoThrow(m_representativeSlot);
#ifndef DACCESS_COMPILE
            m_pMD = pMD;
#endif


@@ -440,7 +440,7 @@ InstantiatedMethodDesc::NewInstantiatedMethodDesc(MethodTable *pExactMT,
    // Check that whichever field holds the inst. got setup correctly
    _ASSERTE((PVOID)pNewMD->GetMethodInstantiation().GetRawArgs() == (PVOID)pInstOrPerInstInfo);
-   pNewMD->SetTemporaryEntryPoint(pAllocator, &amt);
    pNewMD->SetTemporaryEntryPoint(&amt);
    {
        // The canonical instantiation is exempt from constraint checks. It's used as the basis
@@ -905,7 +905,7 @@ MethodDesc::FindOrCreateAssociatedMethodDesc(MethodDesc* pDefMD,
    pResultMD->SetIsUnboxingStub();
    pResultMD->AsInstantiatedMethodDesc()->SetupWrapperStubWithInstantiations(pMDescInCanonMT, 0, NULL);
-   pResultMD->SetTemporaryEntryPoint(pAllocator, &amt);
    pResultMD->SetTemporaryEntryPoint(&amt);
    amt.SuppressRelease();
@@ -986,7 +986,7 @@ MethodDesc::FindOrCreateAssociatedMethodDesc(MethodDesc* pDefMD,
        pNonUnboxingStub->GetNumGenericMethodArgs(),
        (TypeHandle *)pNonUnboxingStub->GetMethodInstantiation().GetRawArgs());
-   pResultMD->SetTemporaryEntryPoint(pAllocator, &amt);
    pResultMD->SetTemporaryEntryPoint(&amt);
    amt.SuppressRelease();


@@ -51,8 +51,6 @@ EXTERN_C void SinglecastDelegateInvokeStub();
#define JUMP_ALLOCATE_SIZE 8 // # bytes to allocate for a jump instruction
#define BACK_TO_BACK_JUMP_ALLOCATE_SIZE 8 // # bytes to allocate for a back to back jump instruction
-#define HAS_COMPACT_ENTRYPOINTS 1
// Needed for PInvoke inlining in ngened images
#define HAS_NDIRECT_IMPORT_PRECODE 1


@@ -2977,9 +2977,13 @@ VOID StubLinkerCPU::EmitComputedInstantiatingMethodStub(MethodDesc* pSharedMD, s
#ifdef TARGET_AMD64
VOID StubLinkerCPU::EmitLoadMethodAddressIntoAX(MethodDesc *pMD)
{
-   if (pMD->HasStableEntryPoint())
    STANDARD_VM_CONTRACT;
    PCODE multiCallableAddr = pMD->TryGetMultiCallableAddrOfCode(CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT);
    if (multiCallableAddr != (PCODE)NULL)
    {
-       X86EmitRegLoad(kRAX, pMD->GetStableEntryPoint());// MOV RAX, DWORD
        X86EmitRegLoad(kRAX, multiCallableAddr);// MOV RAX, DWORD
    }
    else
    {
@@ -2992,14 +2996,17 @@ VOID StubLinkerCPU::EmitLoadMethodAddressIntoAX(MethodDesc *pMD)
VOID StubLinkerCPU::EmitTailJumpToMethod(MethodDesc *pMD)
{
    STANDARD_VM_CONTRACT;
#ifdef TARGET_AMD64
    EmitLoadMethodAddressIntoAX(pMD);
    Emit16(X86_INSTR_JMP_EAX);
#else
    PCODE multiCallableAddr = pMD->TryGetMultiCallableAddrOfCode(CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT);
    // Use direct call if possible
-   if (pMD->HasStableEntryPoint())
    if (multiCallableAddr != (PCODE)NULL)
    {
-       X86EmitNearJump(NewExternalCodeLabel((LPVOID) pMD->GetStableEntryPoint()));
        X86EmitNearJump(NewExternalCodeLabel((LPVOID)multiCallableAddr));
    }
    else
    {


@@ -193,7 +193,7 @@ MethodDesc* ILStubCache::CreateNewMethodDesc(LoaderHeap* pCreationHeap, MethodTa
    // the no metadata part of the method desc
    pMD->m_pszMethodName = (PTR_CUTF8)"IL_STUB";
    pMD->InitializeFlags(DynamicMethodDesc::FlagPublic | DynamicMethodDesc::FlagIsILStub);
-   pMD->SetTemporaryEntryPoint(pMT->GetLoaderAllocator(), pamTracker);
    pMD->SetTemporaryEntryPoint(pamTracker);
    //
    // convert signature to a compatible signature if needed


@@ -5313,7 +5313,7 @@ HCIMPL3(void, JIT_VTableProfile32, Object* obj, CORINFO_METHOD_HANDLE baseMethod
    WORD slot = pBaseMD->GetSlot();
    _ASSERTE(slot < pBaseMD->GetMethodTable()->GetNumVirtuals());
-   MethodDesc* pMD = pMT->GetMethodDescForSlot(slot);
    MethodDesc* pMD = pMT->GetMethodDescForSlot_NoThrow(slot);
    MethodDesc* pRecordedMD = (MethodDesc*)DEFAULT_UNKNOWN_HANDLE;
    if (!pMD->GetLoaderAllocator()->IsCollectible() && !pMD->IsDynamicMethod())
@@ -5362,7 +5362,7 @@ HCIMPL3(void, JIT_VTableProfile64, Object* obj, CORINFO_METHOD_HANDLE baseMethod
    WORD slot = pBaseMD->GetSlot();
    _ASSERTE(slot < pBaseMD->GetMethodTable()->GetNumVirtuals());
-   MethodDesc* pMD = pMT->GetMethodDescForSlot(slot);
    MethodDesc* pMD = pMT->GetMethodDescForSlot_NoThrow(slot);
    MethodDesc* pRecordedMD = (MethodDesc*)DEFAULT_UNKNOWN_HANDLE;
    if (!pMD->GetLoaderAllocator()->IsCollectible() && !pMD->IsDynamicMethod())


@@ -8574,14 +8574,15 @@ void CEEInfo::getMethodVTableOffset (CORINFO_METHOD_HANDLE methodHnd,
    bool * isRelative)
{
    CONTRACTL {
-       NOTHROW;
-       GC_NOTRIGGER;
        THROWS;
        GC_TRIGGERS;
        MODE_PREEMPTIVE;
    } CONTRACTL_END;
-   JIT_TO_EE_TRANSITION_LEAF();
    JIT_TO_EE_TRANSITION();
    MethodDesc* method = GetMethod(methodHnd);
    method->EnsureTemporaryEntryPoint();
    //@GENERICS: shouldn't be doing this for instantiated methods as they live elsewhere
    _ASSERTE(!method->HasMethodInstantiation());
@@ -8595,7 +8596,7 @@ void CEEInfo::getMethodVTableOffset (CORINFO_METHOD_HANDLE methodHnd,
    *pOffsetAfterIndirection = MethodTable::GetIndexAfterVtableIndirection(method->GetSlot()) * TARGET_POINTER_SIZE /* sizeof(MethodTable::VTableIndir2_t) */;
    *isRelative = false;
-   EE_TO_JIT_TRANSITION_LEAF();
    EE_TO_JIT_TRANSITION();
}
/*********************************************************************/


@@ -1465,6 +1465,8 @@ VOID StubLinkerCPU::EmitComputedInstantiatingMethodStub(MethodDesc* pSharedMD, s
void StubLinkerCPU::EmitCallLabel(CodeLabel *target, BOOL fTailCall, BOOL fIndirect)
{
    STANDARD_VM_CONTRACT;
    BranchInstructionFormat::VariationCodes variationCode = BranchInstructionFormat::VariationCodes::BIF_VAR_JUMP;
    if (!fTailCall)
        variationCode = static_cast<BranchInstructionFormat::VariationCodes>(variationCode | BranchInstructionFormat::VariationCodes::BIF_VAR_CALL);
@@ -1477,10 +1479,14 @@ void StubLinkerCPU::EmitCallLabel(CodeLabel *target, BOOL fTailCall, BOOL fIndir
void StubLinkerCPU::EmitCallManagedMethod(MethodDesc *pMD, BOOL fTailCall)
{
    STANDARD_VM_CONTRACT;
    PCODE multiCallableAddr = pMD->TryGetMultiCallableAddrOfCode(CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT);
    // Use direct call if possible.
-   if (pMD->HasStableEntryPoint())
    if (multiCallableAddr != (PCODE)NULL)
    {
-       EmitCallLabel(NewExternalCodeLabel((LPVOID)pMD->GetStableEntryPoint()), fTailCall, FALSE);
        EmitCallLabel(NewExternalCodeLabel((LPVOID)multiCallableAddr), fTailCall, FALSE);
    }
    else
    {

File diff suppressed because it is too large.


@@ -162,8 +162,7 @@ enum MethodDescFlags
struct MethodDescCodeData final
{
    PTR_MethodDescVersioningState VersioningState;
-   // [TODO] Move temporary entry points here.
    PCODE TemporaryEntryPoint;
};
using PTR_MethodDescCodeData = DPTR(MethodDescCodeData);
@@ -208,26 +207,72 @@ public:
        _ASSERTE(HasStableEntryPoint());
        _ASSERTE(!IsVersionableWithVtableSlotBackpatch());
-       return GetMethodEntryPoint();
        return GetMethodEntryPointIfExists();
    }
    void SetMethodEntryPoint(PCODE addr);
    BOOL SetStableEntryPointInterlocked(PCODE addr);
#ifndef DACCESS_COMPILE
    PCODE GetTemporaryEntryPoint();
#endif
-   void SetTemporaryEntryPoint(LoaderAllocator *pLoaderAllocator, AllocMemTracker *pamTracker);
-   PCODE GetInitialEntryPointForCopiedSlot()
-   {
-       WRAPPER_NO_CONTRACT;
-       if (IsVersionableWithVtableSlotBackpatch())
-       {
-           return GetTemporaryEntryPoint();
-       }
-       return GetMethodEntryPoint();
-   }
    PCODE GetTemporaryEntryPointIfExists()
    {
        LIMITED_METHOD_CONTRACT;
        BYTE flags4 = VolatileLoad(&m_bFlags4);
        if (flags4 & enum_flag4_TemporaryEntryPointAssigned)
        {
            PTR_MethodDescCodeData codeData = VolatileLoadWithoutBarrier(&m_codeData);
            _ASSERTE(codeData != NULL);
            PCODE temporaryEntryPoint = codeData->TemporaryEntryPoint;
            _ASSERTE(temporaryEntryPoint != (PCODE)NULL);
            return temporaryEntryPoint;
        }
        else
        {
            return (PCODE)NULL;
        }
    }
    void SetTemporaryEntryPoint(AllocMemTracker *pamTracker);
#ifndef DACCESS_COMPILE
    PCODE GetInitialEntryPointForCopiedSlot(MethodTable *pMTBeingCreated, AllocMemTracker* pamTracker)
    {
        CONTRACTL
        {
            THROWS;
            GC_NOTRIGGER;
            MODE_ANY;
        }
        CONTRACTL_END;
        if (pMTBeingCreated != GetMethodTable())
        {
            pamTracker = NULL;
        }
        // If EnsureTemporaryEntryPointCore is called, then
        // both GetTemporaryEntryPointIfExists and GetSlot()
        // are guaranteed to return a NON-NULL PCODE.
        EnsureTemporaryEntryPointCore(pamTracker);
        PCODE result;
        if (IsVersionableWithVtableSlotBackpatch())
        {
            result = GetTemporaryEntryPointIfExists();
        }
        else
        {
            _ASSERTE(GetMethodTable()->IsCanonicalMethodTable());
            result = GetMethodTable()->GetSlot(GetSlot());
        }
        _ASSERTE(result != (PCODE)NULL);
        return result;
    }
#endif
    inline BOOL HasPrecode()
    {
@ -270,6 +315,8 @@ public:
} }
Precode* GetOrCreatePrecode(); Precode* GetOrCreatePrecode();
void MarkPrecodeAsStableEntrypoint();
// Given a code address return back the MethodDesc whenever possible // Given a code address return back the MethodDesc whenever possible
// //
@ -616,7 +663,11 @@ public:
#endif // !FEATURE_COMINTEROP #endif // !FEATURE_COMINTEROP
// Update flags in a thread safe manner. // Update flags in a thread safe manner.
#ifndef DACCESS_COMPILE
WORD InterlockedUpdateFlags(WORD wMask, BOOL fSet); WORD InterlockedUpdateFlags(WORD wMask, BOOL fSet);
WORD InterlockedUpdateFlags3(WORD wMask, BOOL fSet);
BYTE InterlockedUpdateFlags4(BYTE bMask, BOOL fSet);
#endif
// If the method is in an Edit and Continue (EnC) module, then // If the method is in an Edit and Continue (EnC) module, then
// we DON'T want to backpatch this, ever. We MUST always call // we DON'T want to backpatch this, ever. We MUST always call
@ -635,11 +686,13 @@ public:
return (m_wFlags & mdfNotInline); return (m_wFlags & mdfNotInline);
} }
#ifndef DACCESS_COMPILE
inline void SetNotInline(BOOL set) inline void SetNotInline(BOOL set)
{ {
WRAPPER_NO_CONTRACT; WRAPPER_NO_CONTRACT;
InterlockedUpdateFlags(mdfNotInline, set); InterlockedUpdateFlags(mdfNotInline, set);
} }
#endif // DACCESS_COMPILE
#ifndef DACCESS_COMPILE #ifndef DACCESS_COMPILE
VOID EnsureActive(); VOID EnsureActive();
@ -659,11 +712,13 @@ public:
//================================================================ //================================================================
// //
#ifndef DACCESS_COMPILE
inline void ClearFlagsOnUpdate() inline void ClearFlagsOnUpdate()
{ {
WRAPPER_NO_CONTRACT; WRAPPER_NO_CONTRACT;
SetNotInline(FALSE); SetNotInline(FALSE);
} }
#endif // DACCESS_COMPILE
// Restore the MethodDesc to it's initial, pristine state, so that // Restore the MethodDesc to it's initial, pristine state, so that
// it can be reused for new code (eg. for EnC, method rental, etc.) // it can be reused for new code (eg. for EnC, method rental, etc.)
@ -1070,16 +1125,8 @@ public:
public: public:
bool IsEligibleForTieredCompilation() bool IsEligibleForTieredCompilation();
{ bool IsEligibleForTieredCompilation_NoCheckMethodDescChunk();
LIMITED_METHOD_DAC_CONTRACT;
#ifdef FEATURE_TIERED_COMPILATION
return (m_wFlags3AndTokenRemainder & enum_flag3_IsEligibleForTieredCompilation) != 0;
#else
return false;
#endif
}
// This method must return the same value for all methods in one MethodDescChunk // This method must return the same value for all methods in one MethodDescChunk
bool DetermineIsEligibleForTieredCompilationInvariantForAllMethodsInChunk(); bool DetermineIsEligibleForTieredCompilationInvariantForAllMethodsInChunk();
@ -1188,6 +1235,7 @@ public:
private: private:
#ifndef DACCESS_COMPILE
// Gets the prestub entry point to use for backpatching. Entry point slot backpatch uses this entry point as an oracle to // Gets the prestub entry point to use for backpatching. Entry point slot backpatch uses this entry point as an oracle to
// determine if the entry point actually changed and warrants backpatching. // determine if the entry point actually changed and warrants backpatching.
PCODE GetPrestubEntryPointToBackpatch() PCODE GetPrestubEntryPointToBackpatch()
@ -1199,7 +1247,9 @@ private:
_ASSERTE(IsVersionableWithVtableSlotBackpatch()); _ASSERTE(IsVersionableWithVtableSlotBackpatch());
return GetTemporaryEntryPoint(); return GetTemporaryEntryPoint();
} }
#endif // DACCESS_COMPILE
#ifndef DACCESS_COMPILE
// Gets the entry point stored in the primary storage location for backpatching. Entry point slot backpatch uses this entry
// point as an oracle to determine if the entry point actually changed and warrants backpatching.
PCODE GetEntryPointToBackpatch_Locked()
@ -1212,6 +1262,7 @@ private:
_ASSERTE(IsVersionableWithVtableSlotBackpatch());
return GetMethodEntryPoint();
}
#endif // DACCESS_COMPILE
// Sets the entry point stored in the primary storage location for backpatching. Entry point slot backpatch uses this entry
// point as an oracle to determine if the entry point actually changed and warrants backpatching.
@ -1246,11 +1297,13 @@ public:
BackpatchEntryPointSlots(entryPoint, false /* isPrestubEntryPoint */); BackpatchEntryPointSlots(entryPoint, false /* isPrestubEntryPoint */);
} }
#ifndef DACCESS_COMPILE
void BackpatchToResetEntryPointSlots() void BackpatchToResetEntryPointSlots()
{ {
WRAPPER_NO_CONTRACT; WRAPPER_NO_CONTRACT;
BackpatchEntryPointSlots(GetPrestubEntryPointToBackpatch(), true /* isPrestubEntryPoint */); BackpatchEntryPointSlots(GetPrestubEntryPointToBackpatch(), true /* isPrestubEntryPoint */);
} }
#endif // DACCESS_COMPILE
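The "oracle" comments above describe the backpatching contract: the value previously recorded in the primary storage location tells the runtime whether the entry point really changed and whether the recorded dependent slots need rewriting. A simplified standalone sketch of that idea, ignoring the runtime's locking and its prestub-versus-stable distinction (all names and types are illustrative):

#include <atomic>
#include <cstdint>
#include <vector>

using PCODE_T = std::uintptr_t;   // stand-in for the runtime's PCODE

struct BackpatchSketch
{
    std::atomic<PCODE_T> primarySlot{0};                  // primary storage location
    std::vector<std::atomic<PCODE_T>*> dependentSlots;    // recorded copies to keep in sync

    // The value previously held in the primary slot is the "oracle": only when the
    // new entry point differs from it do the dependent slots need rewriting.
    void SetEntryPointAndBackpatch(PCODE_T newEntryPoint)
    {
        PCODE_T previous = primarySlot.exchange(newEntryPoint);
        if (previous == newEntryPoint)
            return;                                       // nothing changed, nothing to patch
        for (std::atomic<PCODE_T>* slot : dependentSlots)
            slot->store(newEntryPoint);                   // propagate to every recorded copy
    }
};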
private: private:
void BackpatchEntryPointSlots(PCODE entryPoint, bool isPrestubEntryPoint) void BackpatchEntryPointSlots(PCODE entryPoint, bool isPrestubEntryPoint)
@ -1358,6 +1411,7 @@ public:
ULONG GetRVA(); ULONG GetRVA();
public: public:
#ifndef DACCESS_COMPILE
// Returns address of code to call. The address is good for one immediate invocation only.
// Use GetMultiCallableAddrOfCode() to get address that can be invoked multiple times.
//
@ -1371,6 +1425,7 @@ public:
_ASSERTE(!IsGenericMethodDefinition());
return GetMethodEntryPoint();
}
#endif
// This one is used to implement "ldftn". // This one is used to implement "ldftn".
PCODE GetMultiCallableAddrOfCode(CORINFO_ACCESS_FLAGS accessFlags = CORINFO_ACCESS_LDFTN); PCODE GetMultiCallableAddrOfCode(CORINFO_ACCESS_FLAGS accessFlags = CORINFO_ACCESS_LDFTN);
@ -1392,6 +1447,7 @@ public:
PCODE GetSingleCallableAddrOfVirtualizedCode(OBJECTREF *orThis, TypeHandle staticTH); PCODE GetSingleCallableAddrOfVirtualizedCode(OBJECTREF *orThis, TypeHandle staticTH);
PCODE GetMultiCallableAddrOfVirtualizedCode(OBJECTREF *orThis, TypeHandle staticTH); PCODE GetMultiCallableAddrOfVirtualizedCode(OBJECTREF *orThis, TypeHandle staticTH);
#ifndef DACCESS_COMPILE
// The current method entrypoint. It is simply the value of the current method slot.
// GetMethodEntryPoint() should be used to get an opaque method entrypoint, for instance
// when copying or searching vtables. It should not be used to get address to call.
@ -1399,7 +1455,26 @@ public:
// GetSingleCallableAddrOfCode() and GetStableEntryPoint() are aliases with stricter preconditions.
// Use of these aliases is as appropriate.
//
// Calling this function will allocate an Entrypoint and associate it with the MethodDesc if it
// doesn't already exist.
PCODE GetMethodEntryPoint();
#endif
// The current method entrypoint. It is simply the value of the current method slot.
// GetMethodEntryPoint() should be used to get an opaque method entrypoint, for instance
// when copying or searching vtables. It should not be used to get address to call.
//
// GetSingleCallableAddrOfCode() and GetStableEntryPoint() are aliases with stricter preconditions.
// Use of these aliases is as appropriate.
//
PCODE GetMethodEntryPointIfExists();
// Ensure that the temporary entrypoint is allocated, and the slot is filled with some value
void EnsureTemporaryEntryPoint();
// pamTracker must be NULL for a MethodDesc which cannot be freed by an external AllocMemTracker
// OR must be set to point to the same AllocMemTracker that controls allocation of the MethodDesc
void EnsureTemporaryEntryPointCore(AllocMemTracker *pamTracker);
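The split between GetMethodEntryPoint (allocates a temporary entry point on demand) and GetMethodEntryPointIfExists (never allocates) is the core of the lazy-allocation scheme. A standalone sketch of the same get-or-create / get-if-exists pair, assuming a hypothetical stub allocator in place of the runtime's precode allocation (all names here are illustrative):

#include <atomic>
#include <cstdint>

using PCODE_T = std::uintptr_t;   // stand-in for the runtime's PCODE

// Placeholder allocator standing in for precode allocation from a loader heap.
static PCODE_T AllocateTemporaryEntryPointStub()
{
    static std::atomic<PCODE_T> next{0x1000};
    return next.fetch_add(0x10);
}

struct LazyEntryPointSketch
{
    std::atomic<PCODE_T> slot{0};

    // Analogue of GetMethodEntryPoint(): allocates on first use.
    PCODE_T GetOrCreate()
    {
        PCODE_T value = slot.load(std::memory_order_acquire);
        if (value != 0)
            return value;

        PCODE_T fresh = AllocateTemporaryEntryPointStub();
        PCODE_T expected = 0;
        // Publish only if the slot is still empty; otherwise another thread won the
        // race and its value is returned (the runtime additionally tracks the
        // allocation so it is neither leaked nor freed too early).
        if (slot.compare_exchange_strong(expected, fresh))
            return fresh;
        return expected;
    }

    // Analogue of GetMethodEntryPointIfExists(): never allocates, may return 0.
    PCODE_T GetIfExists() const
    {
        return slot.load(std::memory_order_acquire);
    }
};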
//******************************************************************************* //*******************************************************************************
// Returns the address of the native code. // Returns the address of the native code.
@ -1545,6 +1620,9 @@ public:
// Returns true if the method has to have stable entrypoint always. // Returns true if the method has to have stable entrypoint always.
BOOL RequiresStableEntryPoint(BOOL fEstimateForChunk = FALSE); BOOL RequiresStableEntryPoint(BOOL fEstimateForChunk = FALSE);
private:
BOOL RequiresStableEntryPointCore(BOOL fEstimateForChunk);
public:
// //
// Backpatch method slots // Backpatch method slots
@ -1616,7 +1694,15 @@ protected:
UINT16 m_wFlags3AndTokenRemainder; UINT16 m_wFlags3AndTokenRemainder;
BYTE m_chunkIndex; BYTE m_chunkIndex;
BYTE m_methodIndex; // Used to hold the index into the chunk of this MethodDesc. Currently all 8 bits are used, but we could likely work with only 7 bits
enum {
enum_flag4_ComputedRequiresStableEntryPoint = 0x01,
enum_flag4_RequiresStableEntryPoint = 0x02,
enum_flag4_TemporaryEntryPointAssigned = 0x04,
};
void InterlockedSetFlags4(BYTE mask, BYTE newValue);
BYTE m_bFlags4; // Used to hold more flags
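RequiresStableEntryPoint now caches the answer of the private RequiresStableEntryPointCore in m_bFlags4, using one bit to say "computed" and one for the cached value. A standalone sketch of that compute-once, cache-in-flag-bits pattern; the class, bit names, and the trivial policy body are placeholders, not the runtime's actual logic:

#include <atomic>
#include <cstdint>

class StableEntryPointCacheSketch
{
    static constexpr std::uint8_t kComputed = 0x01;   // mirrors enum_flag4_ComputedRequiresStableEntryPoint
    static constexpr std::uint8_t kRequires = 0x02;   // mirrors enum_flag4_RequiresStableEntryPoint

    std::atomic<std::uint8_t> m_flags4{0};

    // Placeholder for the real policy (interface slots, versionability, etc.).
    bool ComputeRequiresStableEntryPoint() const { return true; }

public:
    bool RequiresStableEntryPoint()
    {
        std::uint8_t flags = m_flags4.load(std::memory_order_acquire);
        if (flags & kComputed)
            return (flags & kRequires) != 0;

        bool result = ComputeRequiresStableEntryPoint();
        std::uint8_t update = static_cast<std::uint8_t>(kComputed | (result ? kRequires : 0));
        // fetch_or is idempotent here: racing threads compute the same answer, so
        // whichever one publishes first does so harmlessly.
        m_flags4.fetch_or(update, std::memory_order_release);
        return result;
    }
};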
WORD m_wSlotNumber; // The slot number of this MethodDesc in the vtable array. WORD m_wSlotNumber; // The slot number of this MethodDesc in the vtable array.
WORD m_wFlags; // See MethodDescFlags WORD m_wFlags; // See MethodDescFlags
@ -1627,21 +1713,10 @@ public:
void EnumMemoryRegions(CLRDataEnumMemoryFlags flags); void EnumMemoryRegions(CLRDataEnumMemoryFlags flags);
#endif #endif
BYTE GetMethodDescIndex()
{
LIMITED_METHOD_CONTRACT;
return m_methodIndex;
}
void SetMethodDescIndex(COUNT_T index)
{
LIMITED_METHOD_CONTRACT;
_ASSERTE(index <= 255);
m_methodIndex = (BYTE)index;
}
#ifndef DACCESS_COMPILE #ifndef DACCESS_COMPILE
HRESULT EnsureCodeDataExists(); // pamTracker must be NULL for a MethodDesc which cannot be freed by an external AllocMemTracker
// OR must be set to point to the same AllocMemTracker that controls allocation of the MethodDesc
HRESULT EnsureCodeDataExists(AllocMemTracker *pamTracker);
HRESULT SetMethodDescVersionState(PTR_MethodDescVersioningState state); HRESULT SetMethodDescVersionState(PTR_MethodDescVersioningState state);
#endif //!DACCESS_COMPILE #endif //!DACCESS_COMPILE
@ -1737,19 +1812,19 @@ public:
SIZE_T SizeOf(); SIZE_T SizeOf();
WORD InterlockedUpdateFlags3(WORD wMask, BOOL fSet);
inline BOOL HaveValueTypeParametersBeenWalked() inline BOOL HaveValueTypeParametersBeenWalked()
{ {
LIMITED_METHOD_DAC_CONTRACT; LIMITED_METHOD_DAC_CONTRACT;
return (m_wFlags & mdfValueTypeParametersWalked) != 0; return (m_wFlags & mdfValueTypeParametersWalked) != 0;
} }
#ifndef DACCESS_COMPILE
inline void SetValueTypeParametersWalked() inline void SetValueTypeParametersWalked()
{ {
LIMITED_METHOD_CONTRACT; LIMITED_METHOD_CONTRACT;
InterlockedUpdateFlags(mdfValueTypeParametersWalked, TRUE); InterlockedUpdateFlags(mdfValueTypeParametersWalked, TRUE);
} }
#endif // DACCESS_COMPILE
inline BOOL HaveValueTypeParametersBeenLoaded() inline BOOL HaveValueTypeParametersBeenLoaded()
{ {
@ -1757,11 +1832,13 @@ public:
return (m_wFlags & mdfValueTypeParametersLoaded) != 0; return (m_wFlags & mdfValueTypeParametersLoaded) != 0;
} }
#ifndef DACCESS_COMPILE
inline void SetValueTypeParametersLoaded() inline void SetValueTypeParametersLoaded()
{ {
LIMITED_METHOD_CONTRACT; LIMITED_METHOD_CONTRACT;
InterlockedUpdateFlags(mdfValueTypeParametersLoaded, TRUE); InterlockedUpdateFlags(mdfValueTypeParametersLoaded, TRUE);
} }
#endif // DACCESS_COMPILE
#ifdef FEATURE_TYPEEQUIVALENCE #ifdef FEATURE_TYPEEQUIVALENCE
inline BOOL DoesNotHaveEquivalentValuetypeParameters() inline BOOL DoesNotHaveEquivalentValuetypeParameters()
@ -1770,11 +1847,13 @@ public:
return (m_wFlags & mdfDoesNotHaveEquivalentValuetypeParameters) != 0; return (m_wFlags & mdfDoesNotHaveEquivalentValuetypeParameters) != 0;
} }
#ifndef DACCESS_COMPILE
inline void SetDoesNotHaveEquivalentValuetypeParameters() inline void SetDoesNotHaveEquivalentValuetypeParameters()
{ {
LIMITED_METHOD_CONTRACT; LIMITED_METHOD_CONTRACT;
InterlockedUpdateFlags(mdfDoesNotHaveEquivalentValuetypeParameters, TRUE); InterlockedUpdateFlags(mdfDoesNotHaveEquivalentValuetypeParameters, TRUE);
} }
#endif // DACCESS_COMPILE
#endif // FEATURE_TYPEEQUIVALENCE #endif // FEATURE_TYPEEQUIVALENCE
// //
@ -2121,10 +2200,14 @@ class MethodDescChunk
// These are separate to allow the flags space available and used to be obvious here // These are separate to allow the flags space available and used to be obvious here
// and for the logic that splits the token to be algorithmically generated based on the // and for the logic that splits the token to be algorithmically generated based on the
// #define // #define
enum_flag_HasCompactEntrypoints = 0x4000, // Compact temporary entry points enum_flag_DeterminedIsEligibleForTieredCompilation = 0x4000, // Has this chunk had its methods been determined eligible for tiered compilation or not
// unused = 0x8000, // unused = 0x8000,
}; };
#ifndef DACCESS_COMPILE
WORD InterlockedUpdateFlags(WORD wMask, BOOL fSet);
#endif
public: public:
// //
// Allocates methodDescCount identical MethodDescs in smallest possible number of chunks. // Allocates methodDescCount identical MethodDescs in smallest possible number of chunks.
@ -2137,55 +2220,13 @@ public:
MethodTable *initialMT, MethodTable *initialMT,
class AllocMemTracker *pamTracker); class AllocMemTracker *pamTracker);
TADDR GetTemporaryEntryPoints() bool DeterminedIfMethodsAreEligibleForTieredCompilation()
{ {
LIMITED_METHOD_CONTRACT; LIMITED_METHOD_DAC_CONTRACT;
return *(dac_cast<DPTR(TADDR)>(this) - 1); return (VolatileLoadWithoutBarrier(&m_flagsAndTokenRange) & enum_flag_DeterminedIsEligibleForTieredCompilation) != 0;
} }
PCODE GetTemporaryEntryPoint(int index); void DetermineAndSetIsEligibleForTieredCompilation();
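With the per-chunk temporary entry points gone, tiered-compilation eligibility is now computed once per MethodDescChunk and recorded in a chunk flag that the per-method accessors assert on. A simplified standalone sketch of that "determine once, then read" arrangement; the types and the always-true policy are illustrative only:

#include <atomic>
#include <cassert>
#include <cstddef>
#include <vector>

struct MethodSketch
{
    bool eligibleForTieredCompilation = false;
};

struct MethodChunkSketch
{
    std::vector<MethodSketch> methods;
    std::atomic<bool> eligibilityDetermined{false};

    // Computed once for the whole chunk, then published with a release store.
    void DetermineAndSetEligibility()
    {
        for (MethodSketch& m : methods)
            m.eligibleForTieredCompilation = true;   // placeholder for the real policy
        eligibilityDetermined.store(true, std::memory_order_release);
    }

    // Mirrors the assert added to MethodDesc::IsEligibleForTieredCompilation:
    // reading the answer before it was computed is a bug.
    bool IsEligible(std::size_t index) const
    {
        assert(eligibilityDetermined.load(std::memory_order_acquire));
        return methods[index].eligibleForTieredCompilation;
    }
};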
void EnsureTemporaryEntryPointsCreated(LoaderAllocator *pLoaderAllocator, AllocMemTracker *pamTracker)
{
CONTRACTL
{
THROWS;
GC_NOTRIGGER;
MODE_ANY;
}
CONTRACTL_END;
if (GetTemporaryEntryPoints() == (TADDR)0)
CreateTemporaryEntryPoints(pLoaderAllocator, pamTracker);
}
void CreateTemporaryEntryPoints(LoaderAllocator *pLoaderAllocator, AllocMemTracker *pamTracker);
#ifdef HAS_COMPACT_ENTRYPOINTS
//
// There two implementation options for temporary entrypoints:
//
// (1) Compact entrypoints. They provide as dense entrypoints as possible, but can't be patched
// to point to the final code. The call to unjitted method is indirect call via slot.
//
// (2) Precodes. The precode will be patched to point to the final code eventually, thus
// the temporary entrypoint can be embedded in the code. The call to unjitted method is
// direct call to direct jump.
//
// We use (1) for x86 and (2) for 64-bit to get the best performance on each platform.
// For ARM (1) is used.
TADDR AllocateCompactEntryPoints(LoaderAllocator *pLoaderAllocator, AllocMemTracker *pamTracker);
static MethodDesc* GetMethodDescFromCompactEntryPoint(PCODE addr, BOOL fSpeculative = FALSE);
static SIZE_T SizeOfCompactEntryPoints(int count);
static BOOL IsCompactEntryPointAtAddress(PCODE addr);
#ifdef TARGET_ARM
static int GetCompactEntryPointMaxCount ();
#endif // TARGET_ARM
#endif // HAS_COMPACT_ENTRYPOINTS
FORCEINLINE PTR_MethodTable GetMethodTable() FORCEINLINE PTR_MethodTable GetMethodTable()
{ {
@ -2240,17 +2281,6 @@ public:
return m_count + 1; return m_count + 1;
} }
inline BOOL HasCompactEntryPoints()
{
LIMITED_METHOD_DAC_CONTRACT;
#ifdef HAS_COMPACT_ENTRYPOINTS
return (m_flagsAndTokenRange & enum_flag_HasCompactEntrypoints) != 0;
#else
return FALSE;
#endif
}
inline UINT16 GetTokRange() inline UINT16 GetTokRange()
{ {
LIMITED_METHOD_DAC_CONTRACT; LIMITED_METHOD_DAC_CONTRACT;
@ -2277,12 +2307,6 @@ public:
#endif #endif
private: private:
void SetHasCompactEntryPoints()
{
LIMITED_METHOD_CONTRACT;
m_flagsAndTokenRange |= enum_flag_HasCompactEntrypoints;
}
void SetTokenRange(UINT16 tokenRange) void SetTokenRange(UINT16 tokenRange)
{ {
LIMITED_METHOD_CONTRACT; LIMITED_METHOD_CONTRACT;


@ -6,6 +6,28 @@
#ifndef _METHOD_INL_ #ifndef _METHOD_INL_
#define _METHOD_INL_ #define _METHOD_INL_
inline bool MethodDesc::IsEligibleForTieredCompilation()
{
LIMITED_METHOD_DAC_CONTRACT;
#ifdef FEATURE_TIERED_COMPILATION
_ASSERTE(GetMethodDescChunk()->DeterminedIfMethodsAreEligibleForTieredCompilation());
#endif
return IsEligibleForTieredCompilation_NoCheckMethodDescChunk();
}
inline bool MethodDesc::IsEligibleForTieredCompilation_NoCheckMethodDescChunk()
{
LIMITED_METHOD_DAC_CONTRACT;
// Just like above, but without the assert. This is used in the path which initializes the flag.
#ifdef FEATURE_TIERED_COMPILATION
return (VolatileLoadWithoutBarrier(&m_wFlags3AndTokenRemainder) & enum_flag3_IsEligibleForTieredCompilation) != 0;
#else
return false;
#endif
}
inline InstantiatedMethodDesc* MethodDesc::AsInstantiatedMethodDesc() const inline InstantiatedMethodDesc* MethodDesc::AsInstantiatedMethodDesc() const
{ {
WRAPPER_NO_CONTRACT; WRAPPER_NO_CONTRACT;


@ -90,62 +90,12 @@ PTR_MethodDesc MethodImpl::GetMethodDesc(DWORD slotIndex, PTR_MethodDesc default
TADDR base = dac_cast<TADDR>(pRelPtrForSlot) + slotIndex * sizeof(MethodDesc *); TADDR base = dac_cast<TADDR>(pRelPtrForSlot) + slotIndex * sizeof(MethodDesc *);
PTR_MethodDesc result = *dac_cast<DPTR(PTR_MethodDesc)>(base); PTR_MethodDesc result = *dac_cast<DPTR(PTR_MethodDesc)>(base);
// Prejitted images may leave NULL in this table if
// the methoddesc is declared in another module.
// In this case we need to manually compute & restore it
// from the slot number.
if (result == NULL)
#ifndef DACCESS_COMPILE
result = RestoreSlot(slotIndex, defaultReturn->GetMethodTable());
#else // DACCESS_COMPILE
DacNotImpl();
#endif // DACCESS_COMPILE
return result;
}
#ifndef DACCESS_COMPILE
MethodDesc *MethodImpl::RestoreSlot(DWORD index, MethodTable *pMT)
{
CONTRACTL
{
NOTHROW;
GC_NOTRIGGER;
FORBID_FAULT;
PRECONDITION(pdwSlots != NULL);
}
CONTRACTL_END
MethodDesc *result;
PREFIX_ASSUME(pdwSlots != NULL);
DWORD slot = GetSlots()[index];
// Since the overridden method is in a different module, we
// are guaranteed that it is from a different class. It is
// either an override of a parent virtual method or parent-implemented
// interface, or of an interface that this class has introduced.
// In the former 2 cases, the slot number will be in the parent's
// vtable section, and we can retrieve the implemented MethodDesc from
// there. In the latter case, we can search through our interface
// map to determine which interface it is from.
MethodTable *pParentMT = pMT->GetParentMethodTable();
CONSISTENCY_CHECK(pParentMT != NULL && slot < pParentMT->GetNumVirtuals());
{
result = pParentMT->GetMethodDescForSlot(slot);
}
_ASSERTE(result != NULL); _ASSERTE(result != NULL);
pImplementedMD[index] = result;
return result; return result;
} }
#ifndef DACCESS_COMPILE
/////////////////////////////////////////////////////////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////
void MethodImpl::SetSize(LoaderHeap *pHeap, AllocMemTracker *pamTracker, DWORD size) void MethodImpl::SetSize(LoaderHeap *pHeap, AllocMemTracker *pamTracker, DWORD size)
{ {


@ -748,6 +748,7 @@ MethodTable* CreateMinimalMethodTable(Module* pContainingModule,
#ifdef _DEBUG #ifdef _DEBUG
pClass->SetDebugClassName("dynamicClass"); pClass->SetDebugClassName("dynamicClass");
pMT->SetDebugClassName("dynamicClass"); pMT->SetDebugClassName("dynamicClass");
pMT->GetAuxiliaryDataForWrite()->SetIsPublished();
#endif #endif
LOG((LF_BCL, LL_INFO10, "Level1 - MethodTable created {0x%p}\n", pClass)); LOG((LF_BCL, LL_INFO10, "Level1 - MethodTable created {0x%p}\n", pClass));
@ -1644,7 +1645,7 @@ MethodTable::DebugDumpVtable(LPCUTF8 szClassName, BOOL fDebug)
name, name,
pszName, pszName,
IsMdFinal(dwAttrs) ? " (final)" : "", IsMdFinal(dwAttrs) ? " (final)" : "",
(VOID *)pMD->GetMethodEntryPoint(), (VOID *)pMD->GetMethodEntryPointIfExists(),
pMD->GetSlot() pMD->GetSlot()
); );
OutputDebugStringUtf8(buff); OutputDebugStringUtf8(buff);
@ -1658,7 +1659,7 @@ MethodTable::DebugDumpVtable(LPCUTF8 szClassName, BOOL fDebug)
pMD->GetClass()->GetDebugClassName(), pMD->GetClass()->GetDebugClassName(),
pszName, pszName,
IsMdFinal(dwAttrs) ? " (final)" : "", IsMdFinal(dwAttrs) ? " (final)" : "",
(VOID *)pMD->GetMethodEntryPoint(), (VOID *)pMD->GetMethodEntryPointIfExists(),
pMD->GetSlot() pMD->GetSlot()
)); ));
} }
@ -1771,9 +1772,9 @@ MethodTable::Debug_DumpDispatchMap()
nInterfaceIndex, nInterfaceIndex,
pInterface->GetDebugClassName(), pInterface->GetDebugClassName(),
nInterfaceSlotNumber, nInterfaceSlotNumber,
pInterface->GetMethodDescForSlot(nInterfaceSlotNumber)->GetName(), pInterface->GetMethodDescForSlot_NoThrow(nInterfaceSlotNumber)->GetName(),
nImplementationSlotNumber, nImplementationSlotNumber,
GetMethodDescForSlot(nImplementationSlotNumber)->GetName())); GetMethodDescForSlot_NoThrow(nImplementationSlotNumber)->GetName()));
it.Next(); it.Next();
} }
@ -3448,7 +3449,7 @@ BOOL MethodTable::RunClassInitEx(OBJECTREF *pThrowable)
MethodTable * pCanonMT = GetCanonicalMethodTable(); MethodTable * pCanonMT = GetCanonicalMethodTable();
// Call the code method without touching MethodDesc if possible // Call the code method without touching MethodDesc if possible
PCODE pCctorCode = pCanonMT->GetSlot(pCanonMT->GetClassConstructorSlot()); PCODE pCctorCode = pCanonMT->GetRestoredSlot(pCanonMT->GetClassConstructorSlot());
if (pCanonMT->IsSharedByGenericInstantiations()) if (pCanonMT->IsSharedByGenericInstantiations())
{ {
@ -6274,19 +6275,6 @@ void MethodTable::SetCl(mdTypeDef token)
_ASSERTE(GetCl() == token); _ASSERTE(GetCl() == token);
} }
//==========================================================================================
MethodDesc * MethodTable::GetClassConstructor()
{
CONTRACTL
{
NOTHROW;
GC_NOTRIGGER;
MODE_ANY;
}
CONTRACTL_END;
return GetMethodDescForSlot(GetClassConstructorSlot());
}
//========================================================================================== //==========================================================================================
DWORD MethodTable::HasFixedAddressVTStatics() DWORD MethodTable::HasFixedAddressVTStatics()
{ {
@ -6475,6 +6463,8 @@ InteropMethodTableData *MethodTable::GetComInteropData()
GC_TRIGGERS; GC_TRIGGERS;
} CONTRACTL_END; } CONTRACTL_END;
_ASSERTE(GetAuxiliaryData()->IsPublished());
InteropMethodTableData *pData = LookupComInteropData(); InteropMethodTableData *pData = LookupComInteropData();
if (!pData) if (!pData)
@ -6753,13 +6743,13 @@ MethodDesc *MethodTable::MethodDataObject::GetImplMethodDesc(UINT32 slotNumber)
if (pMDRet == NULL) if (pMDRet == NULL)
{ {
_ASSERTE(slotNumber < GetNumVirtuals()); _ASSERTE(slotNumber < GetNumVirtuals());
pMDRet = m_pDeclMT->GetMethodDescForSlot(slotNumber); pMDRet = m_pDeclMT->GetMethodDescForSlot_NoThrow(slotNumber);
_ASSERTE(CheckPointer(pMDRet)); _ASSERTE(CheckPointer(pMDRet));
pEntry->SetImplMethodDesc(pMDRet); pEntry->SetImplMethodDesc(pMDRet);
} }
else else
{ {
_ASSERTE(slotNumber >= GetNumVirtuals() || pMDRet == m_pDeclMT->GetMethodDescForSlot(slotNumber)); _ASSERTE(slotNumber >= GetNumVirtuals() || pMDRet == m_pDeclMT->GetMethodDescForSlot_NoThrow(slotNumber));
} }
return pMDRet; return pMDRet;
@ -6795,7 +6785,7 @@ void MethodTable::MethodDataObject::InvalidateCachedVirtualSlot(UINT32 slotNumbe
MethodDesc *MethodTable::MethodDataInterface::GetDeclMethodDesc(UINT32 slotNumber) MethodDesc *MethodTable::MethodDataInterface::GetDeclMethodDesc(UINT32 slotNumber)
{ {
WRAPPER_NO_CONTRACT; WRAPPER_NO_CONTRACT;
return m_pDeclMT->GetMethodDescForSlot(slotNumber); return m_pDeclMT->GetMethodDescForSlot_NoThrow(slotNumber);
} }
//========================================================================================== //==========================================================================================
@ -6972,6 +6962,14 @@ DispatchSlot MethodTable::MethodDataInterfaceImpl::GetImplSlot(UINT32 slotNumber
return m_pImpl->GetImplSlot(implSlotNumber); return m_pImpl->GetImplSlot(implSlotNumber);
} }
//==========================================================================================
bool MethodTable::MethodDataInterfaceImpl::IsImplSlotNull(UINT32 slotNumber)
{
WRAPPER_NO_CONTRACT;
UINT32 implSlotNumber = MapToImplSlotNumber(slotNumber);
return (implSlotNumber == INVALID_SLOT_NUMBER);
}
//========================================================================================== //==========================================================================================
UINT32 MethodTable::MethodDataInterfaceImpl::GetImplSlotNumber(UINT32 slotNumber) UINT32 MethodTable::MethodDataInterfaceImpl::GetImplSlotNumber(UINT32 slotNumber)
{ {
@ -7625,18 +7623,37 @@ Module *MethodTable::GetDefiningModuleForOpenType()
PCODE MethodTable::GetRestoredSlot(DWORD slotNumber) PCODE MethodTable::GetRestoredSlot(DWORD slotNumber)
{ {
CONTRACTL { CONTRACTL {
NOTHROW; THROWS;
GC_NOTRIGGER; GC_NOTRIGGER;
MODE_ANY; MODE_ANY;
SUPPORTS_DAC; SUPPORTS_DAC;
} CONTRACTL_END; } CONTRACTL_END;
// Since this can allocate memory that won't be freed until the LoaderAllocator is release, we need
// to make sure that the associated MethodTable is fully allocated and permanent.
_ASSERTE(GetAuxiliaryData()->IsPublished());
// //
// Keep in sync with code:MethodTable::GetRestoredSlotMT // Keep in sync with code:MethodTable::GetRestoredSlotMT
// //
PCODE slot = GetCanonicalMethodTable()->GetSlot(slotNumber); PCODE slot = GetCanonicalMethodTable()->GetSlot(slotNumber);
#ifndef DACCESS_COMPILE
if (slot == (PCODE)NULL)
{
// This is a slot that has not been filled in yet. This can happen if we are
// looking at a slot which has not yet been given a temporary entry point.
MethodDesc *pMD = GetCanonicalMethodTable()->GetMethodDescForSlot_NoThrow(slotNumber);
PCODE temporaryEntryPoint = pMD->GetTemporaryEntryPoint();
slot = GetCanonicalMethodTable()->GetSlot(slotNumber);
if (slot == (PCODE)NULL)
{
InterlockedCompareExchangeT(GetCanonicalMethodTable()->GetSlotPtrRaw(slotNumber), temporaryEntryPoint, (PCODE)NULL);
slot = GetCanonicalMethodTable()->GetSlot(slotNumber);
}
}
_ASSERTE(slot != (PCODE)NULL); _ASSERTE(slot != (PCODE)NULL);
#endif // DACCESS_COMPILE
return slot; return slot;
} }
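GetRestoredSlot now tolerates empty slots: it looks up the owning MethodDesc, creates its temporary entry point, and publishes it into the slot with a compare-exchange so racing threads agree on a single value. A standalone sketch of that fill-on-read path; the temporary-entry-point factory is left as an assumed helper standing in for the runtime's precode allocation:

#include <atomic>
#include <cstddef>
#include <cstdint>

using PCODE_T = std::uintptr_t;   // stand-in for PCODE

// Assumed helper standing in for "find the owning MethodDesc and create (or fetch)
// its temporary entry point"; the real runtime allocates a precode here.
PCODE_T GetTemporaryEntryPointForSlot(std::size_t slotNumber);

// Returns a non-null entry point for the slot, filling it lazily on first use.
PCODE_T GetRestoredSlotSketch(std::atomic<PCODE_T>* slots, std::size_t slotNumber)
{
    PCODE_T value = slots[slotNumber].load(std::memory_order_acquire);
    if (value == 0)
    {
        PCODE_T temporary = GetTemporaryEntryPointForSlot(slotNumber);
        PCODE_T expected = 0;
        // Install the temporary entry point only if the slot is still empty; a racing
        // thread may already have published a value, and everyone must agree on it.
        slots[slotNumber].compare_exchange_strong(expected, temporary);
        value = slots[slotNumber].load(std::memory_order_acquire);
    }
    return value;
}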
@ -7712,7 +7729,7 @@ MethodDesc* MethodTable::GetParallelMethodDesc(MethodDesc* pDefMD)
return GetParallelMethodDescForEnC(this, pDefMD); return GetParallelMethodDescForEnC(this, pDefMD);
#endif // FEATURE_METADATA_UPDATER #endif // FEATURE_METADATA_UPDATER
return GetMethodDescForSlot(pDefMD->GetSlot()); return GetMethodDescForSlot_NoThrow(pDefMD->GetSlot()); // TODO! We should probably use the throwing variant where possible
} }
#ifndef DACCESS_COMPILE #ifndef DACCESS_COMPILE
@ -7785,7 +7802,7 @@ BOOL MethodTable::HasExplicitOrImplicitPublicDefaultConstructor()
return FALSE; return FALSE;
} }
MethodDesc * pCanonMD = GetMethodDescForSlot(GetDefaultConstructorSlot()); MethodDesc * pCanonMD = GetMethodDescForSlot_NoThrow(GetDefaultConstructorSlot());
return pCanonMD != NULL && pCanonMD->IsPublic(); return pCanonMD != NULL && pCanonMD->IsPublic();
} }


@ -332,7 +332,11 @@ struct MethodTableAuxiliaryData
enum_flag_CanCompareBitsOrUseFastGetHashCode = 0x0004, // Is any field type or sub field type overridden Equals or GetHashCode enum_flag_CanCompareBitsOrUseFastGetHashCode = 0x0004, // Is any field type or sub field type overridden Equals or GetHashCode
enum_flag_HasApproxParent = 0x0010, enum_flag_HasApproxParent = 0x0010,
// enum_unused = 0x0020, #ifdef _DEBUG
// The MethodTable is in the right state to be published, and will be inevitably.
// Currently DEBUG only as it does not affect behavior in any way in a release build
enum_flag_IsPublished = 0x0020,
#endif
enum_flag_IsNotFullyLoaded = 0x0040, enum_flag_IsNotFullyLoaded = 0x0040,
enum_flag_DependenciesLoaded = 0x0080, // class and all dependencies loaded up to CLASS_LOADED_BUT_NOT_VERIFIED enum_flag_DependenciesLoaded = 0x0080, // class and all dependencies loaded up to CLASS_LOADED_BUT_NOT_VERIFIED
@ -506,6 +510,25 @@ public:
} }
#ifdef _DEBUG
#ifndef DACCESS_COMPILE
// Used in DEBUG builds to indicate that the MethodTable is in the right state to be published, and will be inevitably.
void SetIsPublished()
{
LIMITED_METHOD_CONTRACT;
m_dwFlags |= (MethodTableAuxiliaryData::enum_flag_IsPublished);
}
#endif
// The MethodTable is in the right state to be published, and will be inevitably.
// Currently DEBUG only as it does not affect behavior in any way in a release build
bool IsPublished() const
{
LIMITED_METHOD_CONTRACT;
return (VolatileLoad(&m_dwFlags) & enum_flag_IsPublished);
}
#endif // _DEBUG
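The debug-only IsPublished bit exists so that allocations whose lifetime is tied to the LoaderAllocator are only made for MethodTables that are guaranteed to survive. A rough standalone analogue, using the standard NDEBUG convention in place of the runtime's _DEBUG and illustrative names throughout:

#include <atomic>
#include <cassert>
#include <cstdint>

class AuxiliaryDataSketch
{
    std::atomic<std::uint32_t> m_dwFlags{0};
    static constexpr std::uint32_t kIsPublished = 0x0020;   // mirrors enum_flag_IsPublished

public:
#ifndef NDEBUG
    // Set once the type is guaranteed to be published (and therefore to live as
    // long as its LoaderAllocator).
    void SetIsPublished()    { m_dwFlags.fetch_or(kIsPublished); }
    bool IsPublished() const { return (m_dwFlags.load() & kIsPublished) != 0; }
#endif

    void AllocatePermanentData()
    {
        // Mirrors the _ASSERTE(GetAuxiliaryData()->IsPublished()) checks added to
        // GetRestoredSlot and GetComInteropData before loader-heap allocations.
        assert(IsPublished());
        // ... allocate from an arena whose lifetime matches the type ...
    }
};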
// The NonVirtualSlots array grows backwards, so this pointer points at just AFTER the first entry in the array // The NonVirtualSlots array grows backwards, so this pointer points at just AFTER the first entry in the array
// To access, use a construct like... GetNonVirtualSlotsArray(pAuxiliaryData)[-(1 + index)] // To access, use a construct like... GetNonVirtualSlotsArray(pAuxiliaryData)[-(1 + index)]
static inline PTR_PCODE GetNonVirtualSlotsArray(PTR_Const_MethodTableAuxiliaryData pAuxiliaryData) static inline PTR_PCODE GetNonVirtualSlotsArray(PTR_Const_MethodTableAuxiliaryData pAuxiliaryData)
@ -1129,8 +1152,6 @@ public:
// THE CLASS CONSTRUCTOR // THE CLASS CONSTRUCTOR
// //
MethodDesc * GetClassConstructor();
BOOL HasClassConstructor(); BOOL HasClassConstructor();
void SetHasClassConstructor(); void SetHasClassConstructor();
WORD GetClassConstructorSlot(); WORD GetClassConstructorSlot();
@ -1650,8 +1671,16 @@ public:
// Slots <-> the MethodDesc associated with the slot. // Slots <-> the MethodDesc associated with the slot.
// //
// Get the MethodDesc that implements a given slot
// NOTE: Since this may fill in the slot with a temporary entrypoint if that hasn't happened
// yet, asserts should use GetMethodDescForSlot_NoThrow so that evaluating the assert
// does not allocate and thereby hide bugs.
MethodDesc* GetMethodDescForSlot(DWORD slot); MethodDesc* GetMethodDescForSlot(DWORD slot);
// This api produces the same result as GetMethodDescForSlot, but it uses a variation on the
// algorithm that does not allocate a temporary entrypoint for the slot if it doesn't exist.
MethodDesc* GetMethodDescForSlot_NoThrow(DWORD slot);
static MethodDesc* GetMethodDescForSlotAddress(PCODE addr, BOOL fSpeculative = FALSE); static MethodDesc* GetMethodDescForSlotAddress(PCODE addr, BOOL fSpeculative = FALSE);
PCODE GetRestoredSlot(DWORD slot); PCODE GetRestoredSlot(DWORD slot);
@ -3015,6 +3044,7 @@ public:
virtual MethodData *GetImplMethodData() = 0; virtual MethodData *GetImplMethodData() = 0;
MethodTable *GetImplMethodTable() { return m_pImplMT; } MethodTable *GetImplMethodTable() { return m_pImplMT; }
virtual DispatchSlot GetImplSlot(UINT32 slotNumber) = 0; virtual DispatchSlot GetImplSlot(UINT32 slotNumber) = 0;
virtual bool IsImplSlotNull(UINT32 slotNumber) = 0;
// Returns INVALID_SLOT_NUMBER if no implementation exists. // Returns INVALID_SLOT_NUMBER if no implementation exists.
virtual UINT32 GetImplSlotNumber(UINT32 slotNumber) = 0; virtual UINT32 GetImplSlotNumber(UINT32 slotNumber) = 0;
virtual MethodDesc *GetImplMethodDesc(UINT32 slotNumber) = 0; virtual MethodDesc *GetImplMethodDesc(UINT32 slotNumber) = 0;
@ -3127,6 +3157,7 @@ protected:
virtual MethodData *GetImplMethodData() virtual MethodData *GetImplMethodData()
{ LIMITED_METHOD_CONTRACT; return this; } { LIMITED_METHOD_CONTRACT; return this; }
virtual DispatchSlot GetImplSlot(UINT32 slotNumber); virtual DispatchSlot GetImplSlot(UINT32 slotNumber);
virtual bool IsImplSlotNull(UINT32 slotNumber) { LIMITED_METHOD_CONTRACT; return false; } // Every valid slot on an actual MethodTable has a MethodDesc which is associated with it
virtual UINT32 GetImplSlotNumber(UINT32 slotNumber); virtual UINT32 GetImplSlotNumber(UINT32 slotNumber);
virtual MethodDesc *GetImplMethodDesc(UINT32 slotNumber); virtual MethodDesc *GetImplMethodDesc(UINT32 slotNumber);
virtual void InvalidateCachedVirtualSlot(UINT32 slotNumber); virtual void InvalidateCachedVirtualSlot(UINT32 slotNumber);
@ -3267,6 +3298,12 @@ protected:
{ LIMITED_METHOD_CONTRACT; return this; } { LIMITED_METHOD_CONTRACT; return this; }
virtual DispatchSlot GetImplSlot(UINT32 slotNumber) virtual DispatchSlot GetImplSlot(UINT32 slotNumber)
{ WRAPPER_NO_CONTRACT; return DispatchSlot(m_pDeclMT->GetRestoredSlot(slotNumber)); } { WRAPPER_NO_CONTRACT; return DispatchSlot(m_pDeclMT->GetRestoredSlot(slotNumber)); }
virtual bool IsImplSlotNull(UINT32 slotNumber)
{
// Every valid slot on an actual MethodTable has a MethodDesc which is associated with it
LIMITED_METHOD_CONTRACT;
return false;
}
virtual UINT32 GetImplSlotNumber(UINT32 slotNumber) virtual UINT32 GetImplSlotNumber(UINT32 slotNumber)
{ LIMITED_METHOD_CONTRACT; return slotNumber; } { LIMITED_METHOD_CONTRACT; return slotNumber; }
virtual MethodDesc *GetImplMethodDesc(UINT32 slotNumber); virtual MethodDesc *GetImplMethodDesc(UINT32 slotNumber);
@ -3313,6 +3350,7 @@ protected:
virtual MethodTable *GetImplMethodTable() virtual MethodTable *GetImplMethodTable()
{ WRAPPER_NO_CONTRACT; return m_pImpl->GetImplMethodTable(); } { WRAPPER_NO_CONTRACT; return m_pImpl->GetImplMethodTable(); }
virtual DispatchSlot GetImplSlot(UINT32 slotNumber); virtual DispatchSlot GetImplSlot(UINT32 slotNumber);
virtual bool IsImplSlotNull(UINT32 slotNumber);
virtual UINT32 GetImplSlotNumber(UINT32 slotNumber); virtual UINT32 GetImplSlotNumber(UINT32 slotNumber);
virtual MethodDesc *GetImplMethodDesc(UINT32 slotNumber); virtual MethodDesc *GetImplMethodDesc(UINT32 slotNumber);
virtual void InvalidateCachedVirtualSlot(UINT32 slotNumber); virtual void InvalidateCachedVirtualSlot(UINT32 slotNumber);
@ -3437,6 +3475,7 @@ public:
inline BOOL IsVirtual() const; inline BOOL IsVirtual() const;
inline UINT32 GetNumVirtuals() const; inline UINT32 GetNumVirtuals() const;
inline DispatchSlot GetTarget() const; inline DispatchSlot GetTarget() const;
inline bool IsTargetNull() const;
// Can be called only if IsValid()=TRUE // Can be called only if IsValid()=TRUE
inline MethodDesc *GetMethodDesc() const; inline MethodDesc *GetMethodDesc() const;


@ -408,7 +408,7 @@ inline MethodDesc* MethodTable::GetMethodDescForSlot(DWORD slot)
{ {
CONTRACTL CONTRACTL
{ {
NOTHROW; THROWS;
GC_NOTRIGGER; GC_NOTRIGGER;
MODE_ANY; MODE_ANY;
} }
@ -426,6 +426,49 @@ inline MethodDesc* MethodTable::GetMethodDescForSlot(DWORD slot)
return MethodTable::GetMethodDescForSlotAddress(pCode); return MethodTable::GetMethodDescForSlotAddress(pCode);
} }
//==========================================================================================
inline MethodDesc* MethodTable::GetMethodDescForSlot_NoThrow(DWORD slot)
{
CONTRACTL
{
NOTHROW;
GC_NOTRIGGER;
MODE_ANY;
}
CONTRACTL_END;
PCODE pCode = GetCanonicalMethodTable()->GetSlot(slot);
if (pCode == (PCODE)NULL)
{
// This code path should only be hit for methods which have not been overridden
MethodTable *pMTToSearchForMethodDesc = this->GetCanonicalMethodTable();
while (pMTToSearchForMethodDesc != NULL)
{
IntroducedMethodIterator it(pMTToSearchForMethodDesc);
for (; it.IsValid(); it.Next())
{
if (it.GetMethodDesc()->GetSlot() == slot)
{
return it.GetMethodDesc();
}
}
pMTToSearchForMethodDesc = pMTToSearchForMethodDesc->GetParentMethodTable()->GetCanonicalMethodTable();
}
_ASSERTE(!"We should never reach here, as there should always be a MethodDesc for a slot");
}
// This is an optimization that we can take advantage of if we're trying to get the MethodDesc
// for an interface virtual, since their slots point to stub.
if (IsInterface() && slot < GetNumVirtuals())
{
return MethodDesc::GetMethodDescFromStubAddr(pCode);
}
return MethodTable::GetMethodDescForSlotAddress(pCode);
}
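The fallback above never allocates: when the slot is still empty it finds the owning method by scanning the methods each type introduces, walking from the canonical type up through its parents (the runtime uses IntroducedMethodIterator over canonical MethodTables). A simplified standalone model of that search, with illustrative types:

#include <cstddef>
#include <vector>

// Simplified model: each type records the methods it introduces together with the
// vtable slot each one owns, plus a pointer to its parent type.
struct MethodModel
{
    std::size_t slot;
};

struct TypeModel
{
    std::vector<MethodModel> introducedMethods;
    const TypeModel* parent = nullptr;
};

// Mirrors the empty-slot fallback: search the introduced methods of the type and
// then of each ancestor until the owner of the slot is found. No allocation occurs.
const MethodModel* FindMethodForSlot(const TypeModel* type, std::size_t slot)
{
    for (const TypeModel* t = type; t != nullptr; t = t->parent)
    {
        for (const MethodModel& m : t->introducedMethods)
        {
            if (m.slot == slot)
                return &m;
        }
    }
    return nullptr;   // the runtime asserts instead: every filled vtable slot has an owner
}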
#ifndef DACCESS_COMPILE #ifndef DACCESS_COMPILE
//========================================================================================== //==========================================================================================
@ -435,8 +478,8 @@ inline void MethodTable::CopySlotFrom(UINT32 slotNumber, MethodDataWrapper &hSou
MethodDesc *pMD = hSourceMTData->GetImplMethodDesc(slotNumber); MethodDesc *pMD = hSourceMTData->GetImplMethodDesc(slotNumber);
_ASSERTE(CheckPointer(pMD)); _ASSERTE(CheckPointer(pMD));
_ASSERTE(pMD == pSourceMT->GetMethodDescForSlot(slotNumber)); _ASSERTE(pMD == pSourceMT->GetMethodDescForSlot_NoThrow(slotNumber));
SetSlot(slotNumber, pMD->GetInitialEntryPointForCopiedSlot()); SetSlot(slotNumber, pMD->GetInitialEntryPointForCopiedSlot(NULL, NULL));
} }
//========================================================================================== //==========================================================================================
@ -544,6 +587,12 @@ inline DispatchSlot MethodTable::MethodIterator::GetTarget() const {
return m_pMethodData->GetImplSlot(m_iCur); return m_pMethodData->GetImplSlot(m_iCur);
} }
inline bool MethodTable::MethodIterator::IsTargetNull() const {
LIMITED_METHOD_CONTRACT;
CONSISTENCY_CHECK(IsValid());
return m_pMethodData->IsImplSlotNull(m_iCur);
}
//========================================================================================== //==========================================================================================
inline MethodDesc *MethodTable::MethodIterator::GetMethodDesc() const { inline MethodDesc *MethodTable::MethodIterator::GetMethodDesc() const {
LIMITED_METHOD_CONTRACT; LIMITED_METHOD_CONTRACT;


@ -6848,7 +6848,7 @@ VOID MethodTableBuilder::ValidateInterfaceMethodConstraints()
// Grab the method token // Grab the method token
MethodTable * pMTItf = pItf->GetMethodTable(); MethodTable * pMTItf = pItf->GetMethodTable();
CONSISTENCY_CHECK(CheckPointer(pMTItf->GetMethodDescForSlot(it.GetSlotNumber()))); CONSISTENCY_CHECK(CheckPointer(pMTItf->GetMethodDescForSlot_NoThrow(it.GetSlotNumber())));
mdMethodDef mdTok = pItf->GetMethodTable()->GetMethodDescForSlot(it.GetSlotNumber())->GetMemberDef(); mdMethodDef mdTok = pItf->GetMethodTable()->GetMethodDescForSlot(it.GetSlotNumber())->GetMemberDef();
// Default to the current module. The code immediately below determines if this // Default to the current module. The code immediately below determines if this
@ -6935,9 +6935,6 @@ VOID MethodTableBuilder::AllocAndInitMethodDescs()
SIZE_T sizeOfMethodDescs = 0; // current running size of methodDesc chunk SIZE_T sizeOfMethodDescs = 0; // current running size of methodDesc chunk
int startIndex = 0; // start of the current chunk (index into bmtMethod array) int startIndex = 0; // start of the current chunk (index into bmtMethod array)
// Limit the maximum MethodDescs per chunk by the number of precodes that can fit to a single memory page,
// since we allocate consecutive temporary entry points for all MethodDescs in the whole chunk.
DWORD maxPrecodesPerPage = Precode::GetMaxTemporaryEntryPointsCount();
DWORD methodDescCount = 0; DWORD methodDescCount = 0;
DeclaredMethodIterator it(*this); DeclaredMethodIterator it(*this);
@ -6978,8 +6975,7 @@ VOID MethodTableBuilder::AllocAndInitMethodDescs()
} }
if (tokenRange != currentTokenRange || if (tokenRange != currentTokenRange ||
sizeOfMethodDescs + size > MethodDescChunk::MaxSizeOfMethodDescs || sizeOfMethodDescs + size > MethodDescChunk::MaxSizeOfMethodDescs)
methodDescCount + currentSlotMethodDescCount > maxPrecodesPerPage)
{ {
if (sizeOfMethodDescs != 0) if (sizeOfMethodDescs != 0)
{ {
@ -7021,10 +7017,10 @@ VOID MethodTableBuilder::AllocAndInitMethodDescChunk(COUNT_T startIndex, COUNT_T
PTR_LoaderHeap pHeap = GetLoaderAllocator()->GetHighFrequencyHeap(); PTR_LoaderHeap pHeap = GetLoaderAllocator()->GetHighFrequencyHeap();
void * pMem = GetMemTracker()->Track( void * pMem = GetMemTracker()->Track(
pHeap->AllocMem(S_SIZE_T(sizeof(TADDR) + sizeof(MethodDescChunk) + sizeOfMethodDescs))); pHeap->AllocMem(S_SIZE_T(sizeof(MethodDescChunk) + sizeOfMethodDescs)));
// Skip pointer to temporary entrypoints // Skip pointer to temporary entrypoints
MethodDescChunk * pChunk = (MethodDescChunk *)((BYTE*)pMem + sizeof(TADDR)); MethodDescChunk * pChunk = (MethodDescChunk *)((BYTE*)pMem);
COUNT_T methodDescCount = 0; COUNT_T methodDescCount = 0;
@ -7045,8 +7041,6 @@ VOID MethodTableBuilder::AllocAndInitMethodDescChunk(COUNT_T startIndex, COUNT_T
MethodDesc * pMD = (MethodDesc *)((BYTE *)pChunk + offset); MethodDesc * pMD = (MethodDesc *)((BYTE *)pChunk + offset);
pMD->SetChunkIndex(pChunk); pMD->SetChunkIndex(pChunk);
pMD->SetMethodDescIndex(methodDescCount);
InitNewMethodDesc(pMDMethod, pMD); InitNewMethodDesc(pMDMethod, pMD);
#ifdef _PREFAST_ #ifdef _PREFAST_
@ -7089,7 +7083,6 @@ VOID MethodTableBuilder::AllocAndInitMethodDescChunk(COUNT_T startIndex, COUNT_T
// Reset the chunk index // Reset the chunk index
pUnboxedMD->SetChunkIndex(pChunk); pUnboxedMD->SetChunkIndex(pChunk);
pUnboxedMD->SetMethodDescIndex(methodDescCount);
if (bmtGenerics->GetNumGenericArgs() == 0) { if (bmtGenerics->GetNumGenericArgs() == 0) {
pUnboxedMD->SetHasNonVtableSlot(); pUnboxedMD->SetHasNonVtableSlot();
@ -9232,7 +9225,7 @@ void MethodTableBuilder::CopyExactParentSlots(MethodTable *pMT)
// fix up wrongly-inherited method descriptors // fix up wrongly-inherited method descriptors
MethodDesc* pMD = hMTData->GetImplMethodDesc(i); MethodDesc* pMD = hMTData->GetImplMethodDesc(i);
CONSISTENCY_CHECK(CheckPointer(pMD)); CONSISTENCY_CHECK(CheckPointer(pMD));
CONSISTENCY_CHECK(pMD == pMT->GetMethodDescForSlot(i)); CONSISTENCY_CHECK(pMD == pMT->GetMethodDescForSlot_NoThrow(i));
if (pMD->GetMethodTable() == pMT) if (pMD->GetMethodTable() == pMT)
continue; continue;
@ -10821,9 +10814,8 @@ MethodTableBuilder::SetupMethodTable2(
{ {
for (MethodDescChunk *pChunk = GetHalfBakedClass()->GetChunks(); pChunk != NULL; pChunk = pChunk->GetNextChunk()) for (MethodDescChunk *pChunk = GetHalfBakedClass()->GetChunks(); pChunk != NULL; pChunk = pChunk->GetNextChunk())
{ {
// Make sure that temporary entrypoints are create for methods. NGEN uses temporary // Make sure that eligibility for versionability is computed
// entrypoints as surrogate keys for precodes. pChunk->DetermineAndSetIsEligibleForTieredCompilation();
pChunk->EnsureTemporaryEntryPointsCreated(GetLoaderAllocator(), GetMemTracker());
} }
} }
@ -10863,7 +10855,7 @@ MethodTableBuilder::SetupMethodTable2(
// //
DWORD indirectionIndex = MethodTable::GetIndexOfVtableIndirection(iCurSlot); DWORD indirectionIndex = MethodTable::GetIndexOfVtableIndirection(iCurSlot);
if (GetParentMethodTable()->GetVtableIndirections()[indirectionIndex] != pMT->GetVtableIndirections()[indirectionIndex]) if (GetParentMethodTable()->GetVtableIndirections()[indirectionIndex] != pMT->GetVtableIndirections()[indirectionIndex])
pMT->SetSlot(iCurSlot, pMD->GetInitialEntryPointForCopiedSlot()); pMT->SetSlot(iCurSlot, pMD->GetInitialEntryPointForCopiedSlot(pMT, GetMemTracker()));
} }
else else
{ {
@ -10872,7 +10864,13 @@ MethodTableBuilder::SetupMethodTable2(
// //
_ASSERTE(iCurSlot >= bmtVT->cVirtualSlots || ChangesImplementationOfVirtualSlot(iCurSlot)); _ASSERTE(iCurSlot >= bmtVT->cVirtualSlots || ChangesImplementationOfVirtualSlot(iCurSlot));
PCODE addr = pMD->GetTemporaryEntryPoint(); if ((pMD->GetSlot() == iCurSlot) && (GetParentMethodTable() == NULL || iCurSlot >= GetParentMethodTable()->GetNumVirtuals()))
continue; // For cases where the method is defining the method desc slot, we don't need to fill it in yet
pMD->EnsureTemporaryEntryPointCore(GetMemTracker());
// Use the IfExists variant; GetTemporaryEntryPoint isn't safe to call during MethodTable construction, as it might allocate
// without using the MemTracker.
PCODE addr = pMD->GetTemporaryEntryPointIfExists();
_ASSERTE(addr != (PCODE)NULL); _ASSERTE(addr != (PCODE)NULL);
if (pMD->HasNonVtableSlot()) if (pMD->HasNonVtableSlot())
@ -10888,7 +10886,7 @@ MethodTableBuilder::SetupMethodTable2(
{ {
// The rest of the system assumes that certain methods always have stable entrypoints. // The rest of the system assumes that certain methods always have stable entrypoints.
// Create them now. // Create them now.
pMD->GetOrCreatePrecode(); pMD->MarkPrecodeAsStableEntrypoint();
} }
} }
} }
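EnsureTemporaryEntryPointCore(GetMemTracker()) ties the lazily created entry points to the builder's AllocMemTracker, so they are released together with the rest of the half-built type if publication fails and kept if it succeeds. A rough standalone analogue of that commit-or-roll-back tracker; the real AllocMemTracker works directly on loader-heap memory rather than callbacks, so this is only a sketch of the lifetime discipline:

#include <functional>
#include <utility>
#include <vector>

// Records an undo action for each allocation made while building a type, then
// either forgets them (the type was published) or runs them in reverse
// (construction failed part-way).
class AllocTrackerSketch
{
    std::vector<std::function<void()>> m_undo;
    bool m_committed = false;

public:
    void Track(std::function<void()> undo) { m_undo.push_back(std::move(undo)); }
    void SuppressRelease()                 { m_committed = true; }

    ~AllocTrackerSketch()
    {
        if (m_committed)
            return;
        for (auto it = m_undo.rbegin(); it != m_undo.rend(); ++it)
            (*it)();   // roll back in reverse order of allocation
    }
};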
@ -10937,7 +10935,7 @@ MethodTableBuilder::SetupMethodTable2(
MethodDesc* pMD = hMTData->GetImplMethodDesc(i); MethodDesc* pMD = hMTData->GetImplMethodDesc(i);
CONSISTENCY_CHECK(CheckPointer(pMD)); CONSISTENCY_CHECK(CheckPointer(pMD));
CONSISTENCY_CHECK(pMD == pMT->GetMethodDescForSlot(i)); CONSISTENCY_CHECK(pMD == pMT->GetMethodDescForSlot_NoThrow(i));
// This indicates that the method body in this slot was copied here through a methodImpl. // This indicates that the method body in this slot was copied here through a methodImpl.
// Thus, copy the value of the slot from which the body originally came, in case it was // Thus, copy the value of the slot from which the body originally came, in case it was
@ -10947,11 +10945,11 @@ MethodTableBuilder::SetupMethodTable2(
{ {
MethodDesc *pOriginalMD = hMTData->GetImplMethodDesc(originalIndex); MethodDesc *pOriginalMD = hMTData->GetImplMethodDesc(originalIndex);
CONSISTENCY_CHECK(CheckPointer(pOriginalMD)); CONSISTENCY_CHECK(CheckPointer(pOriginalMD));
CONSISTENCY_CHECK(pOriginalMD == pMT->GetMethodDescForSlot(originalIndex)); CONSISTENCY_CHECK(pOriginalMD == pMT->GetMethodDescForSlot_NoThrow(originalIndex));
if (pMD != pOriginalMD) if (pMD != pOriginalMD)
{ {
// Copy the slot value in the method's original slot. // Copy the slot value in the method's original slot.
pMT->SetSlot(i, pOriginalMD->GetInitialEntryPointForCopiedSlot()); pMT->SetSlot(i, pOriginalMD->GetInitialEntryPointForCopiedSlot(pMT, GetMemTracker()));
hMTData->InvalidateCachedVirtualSlot(i); hMTData->InvalidateCachedVirtualSlot(i);
// Update the pMD to the new method desc we just copied over ourselves with. This will // Update the pMD to the new method desc we just copied over ourselves with. This will
@ -11008,8 +11006,7 @@ MethodTableBuilder::SetupMethodTable2(
// If we fail to find an _IMPLEMENTATION_ for the interface MD, then
// we are a ComImportMethod; otherwise we may still be a ComImportMethod or
// we can be a ManagedMethod.
DispatchSlot impl(it.GetTarget()); if (!it.IsTargetNull())
if (!impl.IsNull())
{ {
pClsMD = it.GetMethodDesc(); pClsMD = it.GetMethodDesc();
@ -11250,7 +11247,7 @@ void MethodTableBuilder::VerifyVirtualMethodsImplemented(MethodTable::MethodData
MethodTable::MethodIterator it(hData); MethodTable::MethodIterator it(hData);
for (; it.IsValid() && it.IsVirtual(); it.Next()) for (; it.IsValid() && it.IsVirtual(); it.Next())
{ {
if (it.GetTarget().IsNull()) if (it.IsTargetNull())
{ {
MethodDesc *pMD = it.GetDeclMethodDesc(); MethodDesc *pMD = it.GetDeclMethodDesc();


@ -199,32 +199,6 @@ PCODE Precode::TryToSkipFixupPrecode(PCODE addr)
return 0; return 0;
} }
Precode* Precode::GetPrecodeForTemporaryEntryPoint(TADDR temporaryEntryPoints, int index)
{
WRAPPER_NO_CONTRACT;
PrecodeType t = PTR_Precode(temporaryEntryPoints)->GetType();
SIZE_T oneSize = SizeOfTemporaryEntryPoint(t);
return PTR_Precode(temporaryEntryPoints + index * oneSize);
}
SIZE_T Precode::SizeOfTemporaryEntryPoints(PrecodeType t, int count)
{
WRAPPER_NO_CONTRACT;
SUPPORTS_DAC;
SIZE_T oneSize = SizeOfTemporaryEntryPoint(t);
return count * oneSize;
}
SIZE_T Precode::SizeOfTemporaryEntryPoints(TADDR temporaryEntryPoints, int count)
{
WRAPPER_NO_CONTRACT;
SUPPORTS_DAC;
PrecodeType precodeType = PTR_Precode(temporaryEntryPoints)->GetType();
return SizeOfTemporaryEntryPoints(precodeType, count);
}
#ifndef DACCESS_COMPILE #ifndef DACCESS_COMPILE
Precode* Precode::Allocate(PrecodeType t, MethodDesc* pMD, Precode* Precode::Allocate(PrecodeType t, MethodDesc* pMD,
@ -384,144 +358,6 @@ void Precode::Reset()
} }
} }
/* static */
TADDR Precode::AllocateTemporaryEntryPoints(MethodDescChunk * pChunk,
LoaderAllocator * pLoaderAllocator,
AllocMemTracker * pamTracker)
{
WRAPPER_NO_CONTRACT;
MethodDesc* pFirstMD = pChunk->GetFirstMethodDesc();
int count = pChunk->GetCount();
// Determine eligibility for tiered compilation
#ifdef HAS_COMPACT_ENTRYPOINTS
bool hasMethodDescVersionableWithPrecode = false;
#endif
{
MethodDesc *pMD = pChunk->GetFirstMethodDesc();
bool chunkContainsEligibleMethods = pMD->DetermineIsEligibleForTieredCompilationInvariantForAllMethodsInChunk();
#ifdef _DEBUG
// Validate every MethodDesc has the same result for DetermineIsEligibleForTieredCompilationInvariantForAllMethodsInChunk
MethodDesc *pMDDebug = pChunk->GetFirstMethodDesc();
for (int i = 0; i < count; ++i)
{
_ASSERTE(chunkContainsEligibleMethods == pMDDebug->DetermineIsEligibleForTieredCompilationInvariantForAllMethodsInChunk());
pMDDebug = (MethodDesc *)(dac_cast<TADDR>(pMDDebug) + pMDDebug->SizeOf());
}
#endif
#ifndef HAS_COMPACT_ENTRYPOINTS
if (chunkContainsEligibleMethods)
#endif
{
for (int i = 0; i < count; ++i)
{
if (chunkContainsEligibleMethods && pMD->DetermineAndSetIsEligibleForTieredCompilation())
{
_ASSERTE(pMD->IsEligibleForTieredCompilation());
_ASSERTE(!pMD->IsVersionableWithPrecode() || pMD->RequiresStableEntryPoint());
}
#ifdef HAS_COMPACT_ENTRYPOINTS
if (pMD->IsVersionableWithPrecode())
{
_ASSERTE(pMD->RequiresStableEntryPoint());
hasMethodDescVersionableWithPrecode = true;
}
#endif
pMD = (MethodDesc *)(dac_cast<TADDR>(pMD) + pMD->SizeOf());
}
}
}
PrecodeType t = PRECODE_STUB;
bool preallocateJumpStubs = false;
#ifdef HAS_FIXUP_PRECODE
// Default to faster fixup precode if possible
t = PRECODE_FIXUP;
#endif // HAS_FIXUP_PRECODE
SIZE_T totalSize = SizeOfTemporaryEntryPoints(t, count);
#ifdef HAS_COMPACT_ENTRYPOINTS
// Note that these are just best guesses to save memory. If we guessed wrong,
// we will allocate a new exact type of precode in GetOrCreatePrecode.
BOOL fForcedPrecode = hasMethodDescVersionableWithPrecode || pFirstMD->RequiresStableEntryPoint(count > 1);
#ifdef TARGET_ARM
if (pFirstMD->RequiresMethodDescCallingConvention(count > 1)
|| count >= MethodDescChunk::GetCompactEntryPointMaxCount ())
{
// We do not pass method desc on scratch register
fForcedPrecode = TRUE;
}
#endif // TARGET_ARM
if (!fForcedPrecode && (totalSize > MethodDescChunk::SizeOfCompactEntryPoints(count)))
return NULL;
#endif
TADDR temporaryEntryPoints;
SIZE_T oneSize = SizeOfTemporaryEntryPoint(t);
MethodDesc * pMD = pChunk->GetFirstMethodDesc();
if (t == PRECODE_FIXUP || t == PRECODE_STUB)
{
LoaderHeap *pStubHeap;
if (t == PRECODE_FIXUP)
{
pStubHeap = pLoaderAllocator->GetFixupPrecodeHeap();
}
else
{
pStubHeap = pLoaderAllocator->GetNewStubPrecodeHeap();
}
temporaryEntryPoints = (TADDR)pamTracker->Track(pStubHeap->AllocAlignedMem(totalSize, 1));
TADDR entryPoint = temporaryEntryPoints;
for (int i = 0; i < count; i++)
{
((Precode *)entryPoint)->Init((Precode *)entryPoint, t, pMD, pLoaderAllocator);
_ASSERTE((Precode *)entryPoint == GetPrecodeForTemporaryEntryPoint(temporaryEntryPoints, i));
entryPoint += oneSize;
pMD = (MethodDesc *)(dac_cast<TADDR>(pMD) + pMD->SizeOf());
}
}
else
{
_ASSERTE(FALSE);
temporaryEntryPoints = (TADDR)pamTracker->Track(pLoaderAllocator->GetPrecodeHeap()->AllocAlignedMem(totalSize, AlignOf(t)));
ExecutableWriterHolder<void> entryPointsWriterHolder((void*)temporaryEntryPoints, totalSize);
TADDR entryPoint = temporaryEntryPoints;
TADDR entryPointRW = (TADDR)entryPointsWriterHolder.GetRW();
for (int i = 0; i < count; i++)
{
((Precode *)entryPointRW)->Init((Precode *)entryPoint, t, pMD, pLoaderAllocator);
_ASSERTE((Precode *)entryPoint == GetPrecodeForTemporaryEntryPoint(temporaryEntryPoints, i));
entryPoint += oneSize;
entryPointRW += oneSize;
pMD = (MethodDesc *)(dac_cast<TADDR>(pMD) + pMD->SizeOf());
}
}
#ifdef FEATURE_PERFMAP
PerfMap::LogStubs(__FUNCTION__, "PRECODE_STUB", (PCODE)temporaryEntryPoints, count * oneSize);
#endif
ClrFlushInstructionCache((LPVOID)temporaryEntryPoints, count * oneSize);
return temporaryEntryPoints;
}
#endif // !DACCESS_COMPILE #endif // !DACCESS_COMPILE
#ifdef DACCESS_COMPILE #ifdef DACCESS_COMPILE
@ -801,13 +637,6 @@ BOOL DoesSlotCallPrestub(PCODE pCode)
TADDR pInstr = dac_cast<TADDR>(PCODEToPINSTR(pCode)); TADDR pInstr = dac_cast<TADDR>(PCODEToPINSTR(pCode));
#ifdef HAS_COMPACT_ENTRYPOINTS
if (MethodDescChunk::GetMethodDescFromCompactEntryPoint(pCode, TRUE) != NULL)
{
return TRUE;
}
#endif
if (!IS_ALIGNED(pInstr, PRECODE_ALIGNMENT)) if (!IS_ALIGNED(pInstr, PRECODE_ALIGNMENT))
{ {
return FALSE; return FALSE;


@ -467,12 +467,6 @@ public:
{ {
SUPPORTS_DAC; SUPPORTS_DAC;
unsigned int align = PRECODE_ALIGNMENT; unsigned int align = PRECODE_ALIGNMENT;
#if defined(TARGET_ARM) && defined(HAS_COMPACT_ENTRYPOINTS)
// Precodes have to be aligned to allow fast compact entry points check
_ASSERTE (align >= sizeof(void*));
#endif // TARGET_ARM && HAS_COMPACT_ENTRYPOINTS
return align; return align;
} }
@ -585,22 +579,6 @@ public:
return ALIGN_UP(SizeOf(t), AlignOf(t)); return ALIGN_UP(SizeOf(t), AlignOf(t));
} }
static Precode * GetPrecodeForTemporaryEntryPoint(TADDR temporaryEntryPoints, int index);
static SIZE_T SizeOfTemporaryEntryPoints(PrecodeType t, int count);
static SIZE_T SizeOfTemporaryEntryPoints(TADDR temporaryEntryPoints, int count);
static TADDR AllocateTemporaryEntryPoints(MethodDescChunk* pChunk,
LoaderAllocator *pLoaderAllocator, AllocMemTracker *pamTracker);
static DWORD GetMaxTemporaryEntryPointsCount()
{
SIZE_T maxPrecodeCodeSize = Max(FixupPrecode::CodeSize, StubPrecode::CodeSize);
SIZE_T count = GetStubCodePageSize() / maxPrecodeCodeSize;
_ASSERTE(count < MAXDWORD);
return (DWORD)count;
}
#ifdef DACCESS_COMPILE #ifdef DACCESS_COMPILE
void EnumMemoryRegions(CLRDataEnumMemoryFlags flags); void EnumMemoryRegions(CLRDataEnumMemoryFlags flags);
#endif #endif


@ -145,10 +145,8 @@ PCODE MethodDesc::DoBackpatch(MethodTable * pMT, MethodTable *pDispatchingMT, BO
} }
} }
#ifndef HAS_COMPACT_ENTRYPOINTS
// Patch the fake entrypoint if necessary // Patch the fake entrypoint if necessary
Precode::GetPrecodeFromEntryPoint(pExpected)->SetTargetInterlocked(pTarget); Precode::GetPrecodeFromEntryPoint(pExpected)->SetTargetInterlocked(pTarget);
#endif // HAS_COMPACT_ENTRYPOINTS
} }
if (HasNonVtableSlot()) if (HasNonVtableSlot())
@ -2553,21 +2551,6 @@ Stub * MakeInstantiatingStubWorker(MethodDesc *pMD)
} }
#endif // defined(FEATURE_SHARE_GENERIC_CODE) #endif // defined(FEATURE_SHARE_GENERIC_CODE)
#if defined (HAS_COMPACT_ENTRYPOINTS) && defined (TARGET_ARM)
extern "C" MethodDesc * STDCALL PreStubGetMethodDescForCompactEntryPoint (PCODE pCode)
{
_ASSERTE (pCode >= PC_REG_RELATIVE_OFFSET);
pCode = (PCODE) (pCode - PC_REG_RELATIVE_OFFSET + THUMB_CODE);
_ASSERTE (MethodDescChunk::IsCompactEntryPointAtAddress (pCode));
return MethodDescChunk::GetMethodDescFromCompactEntryPoint(pCode, FALSE);
}
#endif // defined (HAS_COMPACT_ENTRYPOINTS) && defined (TARGET_ARM)
//============================================================================= //=============================================================================
// This function generates the real code when from Preemptive mode. // This function generates the real code when from Preemptive mode.
// It is specifically designed to work with the UnmanagedCallersOnlyAttribute. // It is specifically designed to work with the UnmanagedCallersOnlyAttribute.
@ -2859,7 +2842,7 @@ PCODE MethodDesc::DoPrestub(MethodTable *pDispatchingMT, CallerGCMode callerGCMo
{ {
pCode = GetStubForInteropMethod(this); pCode = GetStubForInteropMethod(this);
GetPrecode()->SetTargetInterlocked(pCode); GetOrCreatePrecode()->SetTargetInterlocked(pCode);
RETURN GetStableEntryPoint(); RETURN GetStableEntryPoint();
} }
@ -3284,6 +3267,7 @@ EXTERN_C PCODE STDCALL ExternalMethodFixupWorker(TransitionBlock * pTransitionBl
if (pMD->IsVtableMethod()) if (pMD->IsVtableMethod())
{ {
slot = pMD->GetSlot(); slot = pMD->GetSlot();
pMD->GetMethodTable()->GetRestoredSlot(slot); // Ensure that the target slot has an entrypoint
pMT = th.IsNull() ? pMD->GetMethodTable() : th.GetMethodTable(); pMT = th.IsNull() ? pMD->GetMethodTable() : th.GetMethodTable();
fVirtual = true; fVirtual = true;


@ -1511,6 +1511,8 @@ VOID StubLinkerCPU::EmitComputedInstantiatingMethodStub(MethodDesc* pSharedMD, s
void StubLinkerCPU::EmitCallLabel(CodeLabel *target, BOOL fTailCall, BOOL fIndirect) void StubLinkerCPU::EmitCallLabel(CodeLabel *target, BOOL fTailCall, BOOL fIndirect)
{ {
STANDARD_VM_CONTRACT;
BranchInstructionFormat::VariationCodes variationCode = BranchInstructionFormat::VariationCodes::BIF_VAR_JUMP; BranchInstructionFormat::VariationCodes variationCode = BranchInstructionFormat::VariationCodes::BIF_VAR_JUMP;
if (!fTailCall) if (!fTailCall)
variationCode = static_cast<BranchInstructionFormat::VariationCodes>(variationCode | BranchInstructionFormat::VariationCodes::BIF_VAR_CALL); variationCode = static_cast<BranchInstructionFormat::VariationCodes>(variationCode | BranchInstructionFormat::VariationCodes::BIF_VAR_CALL);
@ -1522,10 +1524,14 @@ void StubLinkerCPU::EmitCallLabel(CodeLabel *target, BOOL fTailCall, BOOL fIndir
void StubLinkerCPU::EmitCallManagedMethod(MethodDesc *pMD, BOOL fTailCall) void StubLinkerCPU::EmitCallManagedMethod(MethodDesc *pMD, BOOL fTailCall)
{ {
STANDARD_VM_CONTRACT;
PCODE multiCallableAddr = pMD->TryGetMultiCallableAddrOfCode(CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT);
// Use direct call if possible. // Use direct call if possible.
if (pMD->HasStableEntryPoint()) if (multiCallableAddr != (PCODE)NULL)
{ {
EmitCallLabel(NewExternalCodeLabel((LPVOID)pMD->GetStableEntryPoint()), fTailCall, FALSE); EmitCallLabel(NewExternalCodeLabel((LPVOID)multiCallableAddr), fTailCall, FALSE);
} }
else else
{ {
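The stub emitter now asks for a multi-callable address that prefers the slot over a temporary entry point and only emits a direct call when such an address exists, falling back to an indirect call otherwise. A minimal sketch of that decision; the struct and the emitter callbacks are illustrative stand-ins, not the StubLinker API:

#include <cstdint>

using PCODE_T = std::uintptr_t;   // stand-in for PCODE

struct MethodRefSketch
{
    PCODE_T stableAddress;   // 0 when no multi-callable address exists yet
    PCODE_T* slotAddress;    // always valid: the cell the prestub will patch later
};

// Emit a direct call when an address already exists, otherwise an indirect call
// through the slot so the stub keeps working once the method is eventually jitted.
void EmitCallSketch(const MethodRefSketch& method,
                    void (*emitDirectCall)(PCODE_T target),
                    void (*emitIndirectCall)(PCODE_T* cell))
{
    if (method.stableAddress != 0)
        emitDirectCall(method.stableAddress);
    else
        emitIndirectCall(method.slotAddress);
}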


@ -1009,13 +1009,6 @@ BOOL PrecodeStubManager::DoTraceStub(PCODE stubStartAddress,
MethodDesc* pMD = NULL; MethodDesc* pMD = NULL;
#ifdef HAS_COMPACT_ENTRYPOINTS
if (MethodDescChunk::IsCompactEntryPointAtAddress(stubStartAddress))
{
pMD = MethodDescChunk::GetMethodDescFromCompactEntryPoint(stubStartAddress);
}
else
#endif // HAS_COMPACT_ENTRYPOINTS
{ {
// When the target slot points to the fixup part of the fixup precode, we need to compensate // When the target slot points to the fixup part of the fixup precode, we need to compensate
// for that to get the actual stub address // for that to get the actual stub address


@ -985,6 +985,7 @@ PCODE VirtualCallStubManager::GetCallStub(TypeHandle ownerType, DWORD slot)
GCX_COOP(); // This is necessary for BucketTable synchronization GCX_COOP(); // This is necessary for BucketTable synchronization
MethodTable * pMT = ownerType.GetMethodTable(); MethodTable * pMT = ownerType.GetMethodTable();
pMT->GetRestoredSlot(slot);
DispatchToken token; DispatchToken token;
if (pMT->IsInterface()) if (pMT->IsInterface())
@ -2131,7 +2132,7 @@ VirtualCallStubManager::GetRepresentativeMethodDescFromToken(
token = DispatchToken::CreateDispatchToken(token.GetSlotNumber()); token = DispatchToken::CreateDispatchToken(token.GetSlotNumber());
} }
CONSISTENCY_CHECK(token.IsThisToken()); CONSISTENCY_CHECK(token.IsThisToken());
RETURN (pMT->GetMethodDescForSlot(token.GetSlotNumber())); RETURN (pMT->GetMethodDescForSlot_NoThrow(token.GetSlotNumber()));
} }
//---------------------------------------------------------------------------- //----------------------------------------------------------------------------
@ -2163,7 +2164,7 @@ MethodDesc *VirtualCallStubManager::GetInterfaceMethodDescFromToken(DispatchToke
MethodTable * pMT = GetTypeFromToken(token); MethodTable * pMT = GetTypeFromToken(token);
PREFIX_ASSUME(pMT != NULL); PREFIX_ASSUME(pMT != NULL);
CONSISTENCY_CHECK(CheckPointer(pMT)); CONSISTENCY_CHECK(CheckPointer(pMT));
return pMT->GetMethodDescForSlot(token.GetSlotNumber()); return pMT->GetMethodDescForSlot_NoThrow(token.GetSlotNumber());
#else // DACCESS_COMPILE #else // DACCESS_COMPILE