mirror of https://github.com/VSadov/Satori.git synced 2025-06-09 17:44:48 +09:00
Adeel Mujahid 2022-05-07 21:55:53 +03:00 committed by GitHub
parent e2119d4491
commit 4141913109
Signed by: github
GPG key ID: 4AEE18F83AFDEB23
311 changed files with 981 additions and 985 deletions


@ -16,4 +16,4 @@ Event Logging is a mechanism by which CoreClr can provide a variety of informati
# Adding New Logging System
Though the the Event logging system was designed for ETW, the build system provides a mechanism, basically an [adapter script- genEventing.py](../../src/coreclr/scripts/genEventing.py) so that other Logging System can be added and used by CoreClr. An Example of such an extension for [LTTng logging system](https://lttng.org/) can be found in [genLttngProvider.py](../../src/coreclr/scripts/genLttngProvider.py )
Though the Event logging system was designed for ETW, the build system provides a mechanism, basically an [adapter script- genEventing.py](../../src/coreclr/scripts/genEventing.py) so that other Logging System can be added and used by CoreClr. An Example of such an extension for [LTTng logging system](https://lttng.org/) can be found in [genLttngProvider.py](../../src/coreclr/scripts/genLttngProvider.py )


@ -119,7 +119,7 @@ This uses the `RidMap` to lookup the `MethodDesc`. If you look at the definition
This represents a target address, but it's not really a pointer; it's simply a number (although it represents an address). The problem is that `LookupMethodDef` needs to return the address of a `MethodDesc` that we can dereference. To accomplish this, the function uses a `dac_cast` to `PTR_MethodDesc` to convert the `TADDR` to a `PTR_MethodDesc`. You can think of this as the target address space form of a cast from `void *` to `MethodDesc *`. In fact, this code would be slightly cleander if `GetFromRidMap` returned a `PTR_VOID` (with pointer semantics) instead of a `TADDR` (with integer semantics). Again, the type conversion implicit in the return statement ensures that the DAC marshals the object (if necessary) and returns the host address of the `MethodDesc` in the DAC cache.
The assignment statement in `GetFromRidMap` indexes an array to get a particular value. The `pMap` parameter is the address of a structure field from the `MethodDesc`. As such, the DAC will have copied the entire field into the cache when it marshaled the `MethodDesc` instance. Thus, `pMap`, which is the address of this struct, is a host pointer. Dereferencing it does not involve the DAC at all. The `pTable` field, however, is a `PTR_TADDR`. What this tells us is that `pTable` is an array of target addresses, but its type indicates that it is a marshaled type. This means that `pTable` will be a target address as well. We dereference it with the overloaded indexing operator for the `PTR` type. This will get the target address of the array and compute the target address of the element we want. The last step of indexing marshals the array element back to a host instance in the DAC cache and returns its value. We assign the the element (a `TADDR`) to the local variable result and return it.
The assignment statement in `GetFromRidMap` indexes an array to get a particular value. The `pMap` parameter is the address of a structure field from the `MethodDesc`. As such, the DAC will have copied the entire field into the cache when it marshaled the `MethodDesc` instance. Thus, `pMap`, which is the address of this struct, is a host pointer. Dereferencing it does not involve the DAC at all. The `pTable` field, however, is a `PTR_TADDR`. What this tells us is that `pTable` is an array of target addresses, but its type indicates that it is a marshaled type. This means that `pTable` will be a target address as well. We dereference it with the overloaded indexing operator for the `PTR` type. This will get the target address of the array and compute the target address of the element we want. The last step of indexing marshals the array element back to a host instance in the DAC cache and returns its value. We assign the element (a `TADDR`) to the local variable result and return it.
Finally, to get the code address, the DAC/DBI interface function will call `MethodDesc::GetNativeCode`. This function returns a value of type `PCODE`. This type is a target address, but one that we cannot dereference (it is just an alias of `TADDR`) and one that we use specifically to specify a code address. We store this value on the `ICorDebugFunction` instance and return it to the debugger.
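The marshal-on-dereference pattern described above can be sketched with a toy model (illustrative Python, assumed names; the real DAC works over `TADDR`/`PTR_` types in C++, not code like this): target addresses are plain integers, and "dereferencing" a typed target pointer copies the value into a host-side cache, exactly once.

```python
class TargetProcess:
    """Toy model of DAC marshaling: TADDR-style integer addresses on the
    target side, with values copied into a host cache on first access."""

    def __init__(self, memory):
        self.memory = memory      # target address -> value (the debuggee)
        self.host_cache = {}      # DAC cache of marshaled values (the host)

    def deref(self, taddr):
        # Marshal on first access; later dereferences hit the host cache.
        if taddr not in self.host_cache:
            self.host_cache[taddr] = self.memory[taddr]
        return self.host_cache[taddr]

    def index_ptr_array(self, array_taddr, i, elem_size=8):
        # PTR-style indexing: compute the element's *target* address
        # (integer arithmetic), then marshal that one element to the host.
        return self.deref(array_taddr + i * elem_size)
```

The point of the model is the asymmetry: address arithmetic happens in target terms, but any actual read goes through the cache, which is what `GetFromRidMap`'s indexing operator does for real.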


@ -101,7 +101,7 @@ Exception throwing within the type system is wrapped in a `ThrowHelper` class. T
The type system provides a default implementation of the `ThrowHelper` class that throws exceptions deriving from a `TypeSystemException` exception base class. This default implementation is suitable for use in non-runtime scenarios.
The exception messages are assigned string IDs and get consumed by the throw helper as well. We require this indirection to support the compiler scenarios: when a type loading exception occurs during an AOT compilation, the AOT compiler has two tasks - emit a warning to warn the user that this occured, and potentially generate a method body that will throw this exception at runtime when the problematic type is accessed. The localization of the compiler might not match the localization of the class library the compiler output is linking against. Indirecting the actual exception message through the string ID lets us wrap this. The consumer of the type system may reuse the throw helper in places outside the type system where this functionality is needed.
The exception messages are assigned string IDs and get consumed by the throw helper as well. We require this indirection to support the compiler scenarios: when a type loading exception occurs during an AOT compilation, the AOT compiler has two tasks - emit a warning to warn the user that this occurred, and potentially generate a method body that will throw this exception at runtime when the problematic type is accessed. The localization of the compiler might not match the localization of the class library the compiler output is linking against. Indirecting the actual exception message through the string ID lets us wrap this. The consumer of the type system may reuse the throw helper in places outside the type system where this functionality is needed.
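The string-ID indirection described above can be sketched as follows (illustrative Python with made-up IDs and messages, not the type system's actual resource tables): the thrower names the failure by ID, and the consumer resolves the ID against its own, possibly differently localized, message table.

```python
# Hypothetical resource table; in the real system the consumer supplies
# its own localized strings keyed by the same IDs.
MESSAGE_TABLE = {
    "TypeLoadFailed": "Could not load type '{0}'.",
}

def throw_type_load(type_name, resources=MESSAGE_TABLE):
    # The type system side only knows the ID; the message text is looked
    # up by whoever catches or reports the failure.
    raise TypeError(resources["TypeLoadFailed"].format(type_name))
```

This is why an AOT compiler can both warn at compile time (using its own table) and emit a throwing method body that resolves the same ID against the target class library's table at runtime.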
## Physical architecture


@ -198,7 +198,7 @@ On `BasicBlock` boundaries:
This is handled in `LinearScan::recordVarLocationsAtStartOfBB(BasicBlock* bb)`.
- If a variable doesn't have an open `VariableLiveRange` and is in `bbLiveIn`, we open one.
This is done in `genUpdateLife` immediately after the the previous method is called.
This is done in `genUpdateLife` immediately after the previous method is called.
- If a variable has an open `VariableLiveRange` and is not in `bbLiveIn`, we close it.
This is handled in `genUpdateLife` too.


@ -11,7 +11,7 @@ One of the common use cases of the `ICorProfiler*` interfaces is to perform IL r
There are two ways to rewrite IL
1. At Module load time with `ICorProfilerInfo::SetILFunctionBody`
This approach has the benefit that it is 'set it and forget it'. You can replace the IL at module load, and the runtime will treat this new IL as if the module contained that IL - you don't have to worry about any of the quirks of ReJIT. The downside is that is is unrevertable - once it is set, you cannot change your mind.
This approach has the benefit that it is 'set it and forget it'. You can replace the IL at module load, and the runtime will treat this new IL as if the module contained that IL - you don't have to worry about any of the quirks of ReJIT. The downside is that it is unrevertable - once it is set, you cannot change your mind.
2. At any point during the process lifetime with `ICorProfilerInfo4::RequestReJIT` or `ICorProfilerInfo10::RequestReJITWithInliners`.
This approach means that you can modify functions in response to changing conditions, and you can revert the modified code if you decide you are done with it. See the other entries about ReJIT in this folder for more information.


@ -14,7 +14,7 @@ When EventCounter was first designed, it was tailored towards aggregating a set
### Multi-client support ###
**Emit data to all sessions at the rates requested by all clients** - This requires a little extra complexity in the runtime to maintain potentially multiple concurrent aggregations, and it is more verbose in the event stream if that is occuring. Clients need to filter out responses that don't match their requested rate, which is a little more complex than ideal, but still simpler than needing to synthesize statistics. In the case of multiple clients we can still encourage people to use a few canonical rates such as per-second, per-10 seconds, per-minute, per-hour which makes it likely that similar use cases will be able to share the exact same set of events. In the worst case that a few different aggregations are happening in parallel the overhead of our common counter aggregations shouldn't be that high, otherwise they weren't very suitable for lightweight monitoring in the first place. In terms of runtime code complexity I think the difference between supporting 1 aggregation and N aggregations is probably <50 lines per counter type and we only have a few counter types.
**Emit data to all sessions at the rates requested by all clients** - This requires a little extra complexity in the runtime to maintain potentially multiple concurrent aggregations, and it is more verbose in the event stream if that is occurring. Clients need to filter out responses that don't match their requested rate, which is a little more complex than ideal, but still simpler than needing to synthesize statistics. In the case of multiple clients we can still encourage people to use a few canonical rates such as per-second, per-10 seconds, per-minute, per-hour which makes it likely that similar use cases will be able to share the exact same set of events. In the worst case that a few different aggregations are happening in parallel the overhead of our common counter aggregations shouldn't be that high, otherwise they weren't very suitable for lightweight monitoring in the first place. In terms of runtime code complexity I think the difference between supporting 1 aggregation and N aggregations is probably <50 lines per counter type and we only have a few counter types.
Doing the filtering requires that each client can identify which EventCounter data packets are the ones it asked for and which are unrelated. Using IntervalSec as I had originally intended does not work because IntervalSec contains the exact amount of time measured in each interval rather than the nominal interval the client requested. For example a client that asks for EventCounterIntervalSec=1 could see packets that have IntervalSec=1.002038, IntervalSec=0.997838, etc. To resolve this we will add another key/pair to the payload, Series="Interval=T", where T is the number of seconds that was passed to EventCounterIntervalSec. To ensure clients with basically the same needs don't arbitrarily create different series that are identical or near identical we enforce that IntervalSec is always a whole non-negative number of seconds. Any value that can't be parsed by uint.TryParse() will be interpreted the same as IntervalSec=0. Using leading zeros on the number, ie IntervalSec=0002 may or may not work so clients are discouraged from doing so (in practice, its whatever text uint.TryParse handles).
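The series-naming rule above can be sketched like this (illustrative Python, not the runtime's implementation; the real parse uses `uint.TryParse`): any requested interval that is not a whole non-negative number of seconds collapses to 0, so near-identical requests land on the same `Series` value.

```python
def interval_series_name(requested: str) -> str:
    # Sketch of the rule described above. Anything that does not parse as
    # a whole non-negative number of seconds is treated as 0. As the text
    # notes, leading zeros ("0002") are deliberately left to whatever the
    # parser happens to do, so clients should not rely on them.
    if requested.isdigit():
        seconds = int(requested)
    else:
        seconds = 0
    return f"Interval={seconds}"
```

A client that requested `EventCounterIntervalSec=1` then simply keeps packets whose payload carries `Series="Interval=1"` and drops the rest, regardless of the measured `IntervalSec` jitter.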


@ -208,7 +208,7 @@ int hostfxr_get_runtime_properties(
```
Get all runtime properties for the specified host context.
* `host_context_handle` - initialized host context. If set to `nullptr` the function will operate on the first host context in the process.
* `count` - in/out parameter which must not be `nullptr`. On input it specifies the size of the the `keys` and `values` buffers. On output it contains the number of entries used from `keys` and `values` buffers - the number of properties returned.
* `count` - in/out parameter which must not be `nullptr`. On input it specifies the size of the `keys` and `values` buffers. On output it contains the number of entries used from `keys` and `values` buffers - the number of properties returned.
* `keys` - buffer which acts as an array of pointers to buffers with keys for the runtime properties.
* `values` - buffer which acts as an array of pointer to buffers with values for the runtime properties.
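The in/out `count` protocol described above can be sketched as follows (illustrative Python; the function name mirrors the API, but the error value is a placeholder, not the real hostfxr error code):

```python
# Hypothetical placeholder for hostfxr's "buffer too small" error code.
HOST_API_BUFFER_TOO_SMALL = 0x8000_8098

def get_runtime_properties(properties: dict, count: int):
    """Mimics the buffer negotiation: returns (rc, count, keys, values)."""
    if count < len(properties):
        # Buffers too small: fail, but report how many entries are needed
        # so the caller can re-allocate and retry.
        return HOST_API_BUFFER_TOO_SMALL, len(properties), None, None
    keys = list(properties.keys())
    values = list(properties.values())
    return 0, len(properties), keys, values
```

The typical calling pattern is therefore two calls: one to learn the required size, one with adequately sized buffers.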
@ -259,7 +259,7 @@ int corehost_load(host_interface_t *init)
Initialize `hostpolicy`. This stores information that will be required to do all the processing necessary to start CoreCLR, but it does not actually do any of that processing.
* `init` - structure defining how the library should be initialized
If already initalized, this function returns success without reinitializing (`init` is ignored).
If already initialized, this function returns success without reinitializing (`init` is ignored).
``` C
int corehost_main(const int argc, const char_t* argv[])


@ -334,7 +334,7 @@ int hostfxr_get_runtime_properties(
Returns the full set of all runtime properties for the specified host context.
* `host_context_handle` - the initialized host context. If set to `NULL` the function will operate on runtime properties of the first host context in the process.
* `count` - in/out parameter which must not be `NULL`. On input it specifies the size of the the `keys` and `values` buffers. On output it contains the number of entries used from `keys` and `values` buffers - the number of properties returned. If the size of the buffers is too small, the function returns a specific error code and fill the `count` with the number of available properties. If `keys` or `values` is `NULL` the function ignores the input value of `count` and just returns the number of properties.
* `count` - in/out parameter which must not be `NULL`. On input it specifies the size of the `keys` and `values` buffers. On output it contains the number of entries used from `keys` and `values` buffers - the number of properties returned. If the size of the buffers is too small, the function returns a specific error code and fill the `count` with the number of available properties. If `keys` or `values` is `NULL` the function ignores the input value of `count` and just returns the number of properties.
* `keys` - buffer which acts as an array of pointers to buffers with keys for the runtime properties.
* `values` - buffer which acts as an array of pointer to buffers with values for the runtime properties.


@ -177,7 +177,7 @@ GC.
## Outstanding Questions
How can we provide the most useful error message when a standalone GC fails to load? In the past it has been difficult
to determine what preciscely has gone wrong with `coreclr_initialize` returns a HRESULT and no indication of what occured.
to determine what preciscely has gone wrong with `coreclr_initialize` returns a HRESULT and no indication of what occurred.
Same question for the DAC - Is `E_FAIL` the best we can do? If we could define our own error for DAC/GC version
mismatches, that would be nice; however, that is technically a breaking change in the DAC.


@ -66,7 +66,7 @@ There are two mechanisms that need to be satisfied in order for a Tier0 method t
1. The method needs to be called at least 30 times, as measured by the call counter, and this gives us a rough notion that the method is 'hot'. The number 30 was derived with a small amount of early empirical testing but there hasn't been a large amount of effort applied in checking if the number is optimal. We assumed that both the policy and the sample benchmarks we were measuring would be in a state of flux for a while to come so there wasn't much reason to spend a lot of time finding the exact maximum of a shifting curve. As best we can tell there is also not a steep response between changes in this value and changes in the performance of many scenarios. An order of magnitude should produce a notable difference but +-5 can vanish into the noise.
2. At startup a timer is initiated with a 100ms timeout. If any Tier0 jitting occurs while the timer is running then it is reset. If the timer completes without any Tier0 jitting then, and only then, is call counting allowed to commence. This means a method could be called 1000 times in the first 100ms, but the timer will still need to expire and have the method called 30 more times before it is eligible for Tier1. The reason for the timer is to measure whether or not Tier0 jitting is still occuring, which is a heuristic to measure whether or not the application is still in its startup phase. Before adding the timer we observed that both the call counter and background threads compiling Tier1 code versions were slowing down the foreground threads trying to complete startup, and this could result in losing all the startup performance wins from Tier0 jitting. By delaying until after 'startup' the Tier0 code is left running longer, but that was nearly always a better performing outcome than trying to replace it with Tier1 code too eagerly.
2. At startup a timer is initiated with a 100ms timeout. If any Tier0 jitting occurs while the timer is running then it is reset. If the timer completes without any Tier0 jitting then, and only then, is call counting allowed to commence. This means a method could be called 1000 times in the first 100ms, but the timer will still need to expire and have the method called 30 more times before it is eligible for Tier1. The reason for the timer is to measure whether or not Tier0 jitting is still occurring, which is a heuristic to measure whether or not the application is still in its startup phase. Before adding the timer we observed that both the call counter and background threads compiling Tier1 code versions were slowing down the foreground threads trying to complete startup, and this could result in losing all the startup performance wins from Tier0 jitting. By delaying until after 'startup' the Tier0 code is left running longer, but that was nearly always a better performing outcome than trying to replace it with Tier1 code too eagerly.
After these two conditions are satisfied the method is placed in a queue for Tier1 compilation, compiled on a background thread, and then the Tier1 version is made active.
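The two conditions above can be sketched as a small policy object (illustrative Python; the real policy lives in the runtime's tiered compilation manager and call counting stubs, not in code like this):

```python
class CallCountingPolicy:
    """Sketch of the promotion rules described above: call counting only
    commences after a 100ms window with no Tier0 jitting, and calls made
    before that window closes do not count toward the threshold of 30."""

    def __init__(self, threshold=30):
        self.threshold = threshold
        self.counting_active = False
        self.counts = {}

    def timer_expired_without_jitting(self):
        # Startup is considered over; call counting may commence.
        self.counting_active = True

    def on_call(self, method) -> bool:
        # Returns True once the method becomes eligible for Tier1.
        if not self.counting_active:
            return False  # startup-phase calls are not counted
        self.counts[method] = self.counts.get(method, 0) + 1
        return self.counts[method] >= self.threshold
```

Note how a method called a thousand times during startup still needs 30 post-timer calls, matching the behavior the text describes.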


@ -30,7 +30,7 @@ You need to be careful when reproducing failures to set all the correct environm
test failure console log, you find:
```
C:\h\w\AE88094B\w\B1B409BF\e>set COMPlus
C:\h\w\AE88094B\w\B1B409BF\e>set COMPlus
COMPlus_JitStress=1
COMPlus_TieredCompilation=0
```
@ -50,7 +50,7 @@ COMPlus_DbgEnableMiniDump=1
You might need to set variables in addition to the `COMPlus_*` (equivalently, `DOTNET_*`) variables. For example, you might see:
```
set RunCrossGen2=1
set RunCrossGen2=1
```
which instructs the coreclr test wrapper script to do crossgen2 compilation of the test.
@ -128,7 +128,7 @@ Jobs
| where Type1 contains test_name
and Status <> "Pass" and (Method == "cmd" or Method == "sh")
| project Queued, Pipeline = parse_json(Properties).DefinitionName, Pipeline_Configuration = parse_json(Properties).configuration,
OS = QueueName, Arch = parse_json(Properties).architecture, Test = Type1, Result, Duration, Console_log = Message, WorkItemFriendlyName, Method
OS = QueueName, Arch = parse_json(Properties).architecture, Test = Type1, Result, Duration, Console_log = Message, WorkItemFriendlyName, Method
| order by Queued desc
| limit 100
```
@ -194,7 +194,7 @@ of failures:
- A bug in the GC stress infrastructure.
- A bug in the GC itself.
Note the the value `COMPlus_GCStress` is set to is a bitmask. Failures with 0x1 or 0x2 (and thus 0x3) are typically VM failures.
Note the value `COMPlus_GCStress` is set to is a bitmask. Failures with 0x1 or 0x2 (and thus 0x3) are typically VM failures.
Failures with 0x4 or 0x8 (and thus 0xC) are typically JIT failures. Ideally, a failure can be reduced to fail with only a single
bit set (that is, either 0x4 or 0x8, which is more specific than just 0xC). That is especially true for 0xF, where we don't know if
it's likely a VM or a JIT failure without reducing it.
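The triage heuristic above can be sketched directly from the bitmask (illustrative Python; the function is hypothetical, only the bit meanings come from the text):

```python
def likely_component(gcstress_bits: int) -> str:
    # 0x1 / 0x2 (and thus 0x3): typically VM failures.
    # 0x4 / 0x8 (and thus 0xC): typically JIT failures.
    vm_bits = gcstress_bits & 0x3
    jit_bits = gcstress_bits & 0xC
    if vm_bits and not jit_bits:
        return "VM"
    if jit_bits and not vm_bits:
        return "JIT"
    return "unknown (reduce to a single bit)"
```

This is why reducing a 0xF failure down to a single set bit is the first triage step: until then, the bitmask alone cannot say which component to suspect.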


@ -17,7 +17,7 @@ parameters:
pgoType: ''
### Build managed test components (native components are getting built as part
### of the the product build job).
### of the product build job).
### TODO: As of today, build of managed test components requires the product build
### as a prerequisite due to dependency on System.Private.Corelib. After switching
@ -97,7 +97,7 @@ jobs:
- name: testTreeFilterArg
value: ''
# Only build GCSimulator tests when the gc-simulator group is specified.
- ${{ if eq(parameters.testGroup, 'gc-simulator') }}:
- ${{ if eq(parameters.osGroup, 'windows') }}:


@ -36,7 +36,7 @@ namespace System.Reflection
Justification = "Module.ResolveMethod is marked as RequiresUnreferencedCode because it relies on tokens" +
"which are not guaranteed to be stable across trimming. So if somebody hardcodes a token it could break." +
"The usage here is not like that as all these tokens come from existing metadata loaded from some IL" +
"and so trimming has no effect (the tokens are read AFTER trimming occured).")]
"and so trimming has no effect (the tokens are read AFTER trimming occurred).")]
private static RuntimeMethodInfo? AssignAssociates(
int tkMethod,
RuntimeType declaredType,


@ -552,7 +552,7 @@ namespace System.Reflection.Emit
// will overflow the stack when there are many methods on the same type (10000 in my experiment).
// The change also introduced race conditions. Before the code change GetToken is called from
// the MethodBuilder .ctor which is protected by lock(ModuleBuilder.SyncRoot). Now it
// could be called more than once on the the same method introducing duplicate (invalid) tokens.
// could be called more than once on the same method introducing duplicate (invalid) tokens.
// I don't fully understand this change. So I will keep the logic and only fix the recursion and
// the race condition.


@ -397,7 +397,7 @@ namespace System.Reflection.Emit
Justification = "Module.ResolveMethod is marked as RequiresUnreferencedCode because it relies on tokens " +
"which are not guaranteed to be stable across trimming. So if somebody hardcodes a token it could break. " +
"The usage here is not like that as all these tokens come from existing metadata loaded from some IL " +
"and so trimming has no effect (the tokens are read AFTER trimming occured).")]
"and so trimming has no effect (the tokens are read AFTER trimming occurred).")]
private static MethodBase GetGenericMethodBaseDefinition(MethodBase methodBase)
{
// methodInfo = G<Foo>.M<Bar> ==> methDef = G<T>.M<S>


@ -1273,7 +1273,7 @@ namespace System.Reflection
Justification = "Module.ResolveMethod and Module.ResolveType are marked as RequiresUnreferencedCode because they rely on tokens" +
"which are not guaranteed to be stable across trimming. So if somebody hardcodes a token it could break." +
"The usage here is not like that as all these tokens come from existing metadata loaded from some IL" +
"and so trimming has no effect (the tokens are read AFTER trimming occured).")]
"and so trimming has no effect (the tokens are read AFTER trimming occurred).")]
private static bool FilterCustomAttributeRecord(
MetadataToken caCtorToken,
in MetadataImport scope,
@ -1427,7 +1427,7 @@ namespace System.Reflection
Justification = "Module.ResolveType is marked as RequiresUnreferencedCode because it relies on tokens" +
"which are not guaranteed to be stable across trimming. So if somebody hardcodes a token it could break." +
"The usage here is not like that as all these tokens come from existing metadata loaded from some IL" +
"and so trimming has no effect (the tokens are read AFTER trimming occured).")]
"and so trimming has no effect (the tokens are read AFTER trimming occurred).")]
internal static AttributeUsageAttribute GetAttributeUsage(RuntimeType decoratedAttribute)
{
RuntimeModule decoratedModule = decoratedAttribute.GetRuntimeModule();


@ -43,7 +43,7 @@ namespace System.Reflection
Justification = "Module.ResolveType is marked as RequiresUnreferencedCode because it relies on tokens" +
"which are not guaranteed to be stable across trimming. So if somebody hardcodes a token it could break." +
"The usage here is not like that as all these tokens come from existing metadata loaded from some IL" +
"and so trimming has no effect (the tokens are read AFTER trimming occured).")]
"and so trimming has no effect (the tokens are read AFTER trimming occurred).")]
get
{
if (_flags != ExceptionHandlingClauseOptions.Clause)


@ -150,7 +150,7 @@ namespace System.Threading
public static extern object? CompareExchange(ref object? location1, object? value, object? comparand);
// Note that getILIntrinsicImplementationForInterlocked() in vm\jitinterface.cpp replaces
// the body of the following method with the the following IL:
// the body of the following method with the following IL:
// ldarg.0
// ldarg.1
// ldarg.2


@ -80,7 +80,7 @@ DumpWriter::WriteDump()
shdr.sh_size = 1;
offset += sizeof(Shdr);
// When section header offset is present but ehdr section num = 0 then is is expected that
// When section header offset is present but ehdr section num = 0 then it is expected that
// the sh_size indicates the size of the section array or 1 in our case.
if (!WriteData(&shdr, sizeof(shdr))) {
return false;


@ -121,7 +121,7 @@ void DacDbiInterfaceImpl::CreateStackWalk(VMPTR_Thread vmThread,
// allocate memory for various stackwalker buffers (StackFrameIterator, RegDisplay, Context)
AllocateStackwalk(ppSFIHandle, pThread, NULL, dwFlags);
// initialize the the CONTEXT.
// initialize the CONTEXT.
// SetStackWalk will initial the RegDisplay from this context.
GetContext(vmThread, pInternalContextBuffer);
@ -819,7 +819,7 @@ void DacDbiInterfaceImpl::InitFrameData(StackFrameIterator * pIter,
// Here we detect (and set the appropriate flag) if the nativeOffset in the current frame points to the return address of IL_Throw()
// (or other exception related JIT helpers like IL_Throw, IL_Rethrow, JIT_RngChkFail, IL_VerificationError, JIT_Overflow etc).
// Since return addres point to the next(!) instruction after [call IL_Throw] this sometimes can lead to incorrect exception stacktraces
// where a next source line is spotted as an exception origin. This happends when the next instruction after [call IL_Throw] belongs to
// where a next source line is spotted as an exception origin. This happens when the next instruction after [call IL_Throw] belongs to
// a sequence point and a source line different from a sequence point and a source line of [call IL_Throw].
// Later on this flag is used in order to adjust nativeOffset and make ICorDebugILFrame::GetIP return IL offset withing
// the same sequence point as an actuall IL throw instruction.


@ -4,7 +4,7 @@
#include <stdafx.h>
/* There is no DAC build of gcdump, so instead
* build it directly into the the dac. That's what all these ugly defines
* build it directly into the dac. That's what all these ugly defines
* are all about.
*/
#ifdef __MSC_VER


@ -12786,7 +12786,7 @@ void CordbProcess::HandleDebugEventForInteropDebugging(const DEBUG_EVENT * pEven
}
#endif
// This call will decide what to do w/ the the win32 event we just got. It does a lot of work.
// This call will decide what to do w/ the win32 event we just got. It does a lot of work.
Reaction reaction = TriageWin32DebugEvent(pUnmanagedThread, pEvent);


@ -618,7 +618,7 @@ HRESULT CordbClass::SetJMCStatus(BOOL fIsUserCode)
}
//-----------------------------------------------------------------------------
// We have to go the the EE to find out if a class is a value
// We have to go the EE to find out if a class is a value
// class or not. This is because there is no flag for this, but rather
// it depends on whether the class subclasses System.ValueType (apart
// from System.Enum...). Replicating all that resoultion logic
@ -920,7 +920,7 @@ HRESULT FieldData::GetFieldSignature(CordbModule *pModule,
// Initializes an instance of EnCHangingFieldInfo.
// Arguments:
// input: fStatic - flag to indicate whether the EnC field is static
// pObject - For instance fields, the Object instance containing the the sync-block.
// pObject - For instance fields, the Object instance containing the sync-block.
// For static fields (if this is being called from GetStaticFieldValue) object is NULL.
// fieldToken - token for the EnC field
// metadataToken - metadata token for this instance of CordbClass
@ -974,7 +974,7 @@ void CordbClass::InitEnCFieldInfo(EnCHangingFieldInfo * pEncField,
// Get information via the DAC about a field added with Edit and Continue.
// Arguments:
// input: fStatic - flag to indicate whether the EnC field is static
// pObject - For instance fields, the Object instance containing the the sync-block.
// pObject - For instance fields, the Object instance containing the sync-block.
// For static fields (if this is being called from GetStaticFieldValue) object is NULL.
// fieldToken - token for the EnC field
// output: pointer to an initialized instance of FieldData that has been added to the appropriate table
@ -1029,7 +1029,7 @@ FieldData * CordbClass::GetEnCFieldFromDac(BOOL fStatic,
//
// Arguments:
// input: fldToken - field of interest to get.
// pObject - For instance fields, the Object instance containing the the sync-block.
// pObject - For instance fields, the Object instance containing the sync-block.
// For static fields (if this is being called from GetStaticFieldValue) object is NULL.
// output: ppFieldData - the FieldData matching the fldToken.
//

View file

@ -222,7 +222,7 @@ HRESULT CordbEnumerator<ElemType,
}
// ICorDebugEnum::GetCount
// Gets the number of items in the the list that is being enumerated
// Gets the number of items in the list that is being enumerated
//
// Arguments:
// pcelt - on return the number of items being enumerated


@ -4090,7 +4090,7 @@ private:
// DAC
//
// Try to initalize DAC, may fail
// Try to initialize DAC, may fail
BOOL TryInitializeDac();
// Expect DAC initialize to succeed.
@ -4828,7 +4828,7 @@ public:
CordbClass * tycon,
CordbType ** pRes);
// Prepare data to send back to left-side during Init() and FuncEval. Fail if the the exact
// Prepare data to send back to left-side during Init() and FuncEval. Fail if the exact
// type data is requested but was not fetched correctly during Init()
HRESULT TypeToBasicTypeData(DebuggerIPCE_BasicTypeData *data);
void TypeToExpandedTypeData(DebuggerIPCE_ExpandedTypeData *data);
@ -10069,7 +10069,7 @@ public:
VMPTR_OBJECTHANDLE m_vmThreadOldExceptionHandle; // object handle for thread's managed exception object.
#ifdef _DEBUG
// Func-eval should perturb the the thread's current appdomain. So we remember it at start
// Func-eval should perturb the thread's current appdomain. So we remember it at start
// and then ensure that the func-eval complete restores it.
CordbAppDomain * m_DbgAppDomainStarted;
#endif


@ -3533,7 +3533,7 @@ HRESULT CordbUnmanagedThread::GetThreadContext(DT_CONTEXT* pContext)
// M2UHandoff uses #1 if available and then falls back to #2.
//
// The reasoning here is that the first three hijacks are intended to be transparent. Since
// the debugger shouldn't know they are occuring then it shouldn't see changes potentially
// the debugger shouldn't know they are occurring then it shouldn't see changes potentially
// made on the LS. The M2UHandoff is not transparent, it has to update the context in order
// to get clear of a bp.
//
@@ -8096,7 +8096,7 @@ HRESULT CordbJITILFrame::FabricateNativeInfo(DWORD dwIndex,
IfFailThrow(pArgType->GetUnboxedObjectSize(&cbType));
#if defined(TARGET_X86) // STACK_GROWS_DOWN_ON_ARGS_WALK
-// The the rpCur pointer starts off in the right spot for the
+// The rpCur pointer starts off in the right spot for the
// first argument, but thereafter we have to decrement it
// before getting the variable's location from it. So increment
// it here to be consistent later.


@@ -2138,7 +2138,7 @@ static inline bool _IsNonGCRootHelper(CordbType * pType)
//-----------------------------------------------------------------------------
bool CordbType::IsGCRoot()
{
-// If it's a E_T_PTR type, then look at what it's a a pointer of.
+// If it's a E_T_PTR type, then check its pointer type.
CordbType * pPtr = this->GetPointerElementType();
if (pPtr == NULL)
{


@@ -1602,7 +1602,7 @@ HMODULE ShimProcess::GetDacModule(PathString& dacModulePath)
if (wszAccessDllPath.IsEmpty())
{
//
-// Load the access DLL from the same directory as the the current CLR Debugging Services DLL.
+// Load the access DLL from the same directory as the current CLR Debugging Services DLL.
//
if (GetClrModuleDirectory(wszAccessDllPath) != S_OK)
{


@@ -150,7 +150,7 @@ gcc -g opcodes.cpp -o opcodes
In investigating the various disassembly formats, the `intel`
disassembly format is superior to the `att` format. This is because the
-`intel` format clearly marks the the instruction relative accesses and
+`intel` format clearly marks the instruction relative accesses and
their sizes. For instance:
- "BYTE PTR [rip+0x53525150]"


@@ -4627,7 +4627,7 @@ void DebuggerPatchSkip::CopyInstructionBlock(BYTE *to, const BYTE* from)
}
PAL_EXCEPT_FILTER(FilterAccessViolation2)
{
-// The whole point is that if we copy up the the AV, then
+// The whole point is that if we copy up the AV, then
// that's enough to execute, otherwise we would not have been
// able to execute the code anyway. So we just ignore the
// exception.


@@ -1926,7 +1926,7 @@ class DebuggerEnCBreakpoint : public DebuggerController
{
public:
// We have two types of EnC breakpoints. The first is the one we
-// sprinkle through old code to let us know when execution is occuring
+// sprinkle through old code to let us know when execution is occurring
// in a function that now has a new version. The second is when we've
// actually resumed excecution into a remapped function and we need
// to then notify the debugger.


@@ -5623,7 +5623,7 @@ void Debugger::TraceCall(const BYTE *code)
EX_TRY
{
// Since we have a try catch and the debugger code can deal properly with
-// faults occuring inside DebuggerController::DispatchTraceCall, we can safely
+// faults occurring inside DebuggerController::DispatchTraceCall, we can safely
// establish a FAULT_NOT_FATAL region. This is required since some callers can't
// tolerate faults.
FAULT_NOT_FATAL();
@@ -8935,7 +8935,7 @@ void Debugger::SendUserBreakpoint(Thread * thread)
}
else if (dbgAction == ATTACH_TERMINATE)
{
-// ATTACH_TERMINATE indicates the the user wants to terminate the app.
+// ATTACH_TERMINATE indicates the user wants to terminate the app.
LOG((LF_CORDB, LL_INFO10000, "D::SUB: terminating this process due to user request\n"));
// Should this go through the host?
@@ -13944,7 +13944,7 @@ DWORD Debugger::GetHelperThreadID(void )
// HRESULT Debugger::InsertToMethodInfoList(): Make sure
-// that there's only one head of the the list of DebuggerMethodInfos
+// that there's only one head of the list of DebuggerMethodInfos
// for the (implicitly) given MethodDef/Module pair.
HRESULT
Debugger::InsertToMethodInfoList( DebuggerMethodInfo *dmi )
@@ -14132,7 +14132,7 @@ void Debugger::SendMDANotification(
DebuggerIPCControlBlock *pDCB = m_pRCThread->GetDCB();
-// If the MDA is ocuring very early in startup before the DCB is setup, then bail.
+// If the MDA is occurring very early in startup before the DCB is setup, then bail.
if (pDCB == NULL)
{
return;
@@ -14250,7 +14250,7 @@ void Debugger::SendLogMessage(int iLevel,
LOG((LF_CORDB, LL_INFO10000, "D::SLM: Sending log message.\n"));
// Send the message only if the debugger is attached to this appdomain.
-// Note the the debugger may detach at any time, so we'll have to check
+// Note the debugger may detach at any time, so we'll have to check
// this again after we get the lock.
AppDomain *pAppDomain = g_pEEInterface->GetThread()->GetDomain();
@@ -14401,7 +14401,7 @@ void Debugger::SendCustomDebuggerNotification(Thread * pThread,
LOG((LF_CORDB, LL_INFO10000, "D::SLM: Sending log message.\n"));
// Send the message only if the debugger is attached to this appdomain.
-// Note the the debugger may detach at any time, so we'll have to check
+// Note the debugger may detach at any time, so we'll have to check
// this again after we get the lock.
if (!CORDebuggerAttached())
{


@@ -629,7 +629,7 @@ protected:
// The "debugger data lock" is a very small leaf lock used to protect debugger internal data structures (such
// as DJIs, DMIs, module table). It is a GC-unsafe-anymode lock and so it can't trigger a GC while being held.
// It also can't issue any callbacks into the EE or anycode that it does not directly control.
-// This is a separate lock from the the larger Debugger-lock / Controller lock, which allows regions under those
+// This is a separate lock from the larger Debugger-lock / Controller lock, which allows regions under those
// locks to access debugger datastructures w/o blocking each other.
Crst m_DebuggerDataLock;
HANDLE m_CtrlCMutex;


@@ -2168,7 +2168,7 @@ void GatherFuncEvalMethodInfo(DebuggerEval *pDE,
// object ref as the stack.
//
// Note that we are passing ELEMENT_TYPE_END in the last parameter because we want to
-// supress the the valid object ref check.
+// supress the valid object ref check.
//
GetFuncEvalArgValue(pDE,
&(argData[0]),


@@ -821,7 +821,7 @@ bool DebuggerRCThread::HandleRSEA()
memcpy(e, GetIPCEventReceiveBuffer(), CorDBIPC_BUFFER_SIZE);
#else
// Be sure to fetch the event into the official receive buffer since some event handlers assume it's there
-// regardless of the the event buffer pointer passed to them.
+// regardless of the event buffer pointer passed to them.
e = GetIPCEventReceiveBuffer();
g_pDbgTransport->GetNextEvent(e, CorDBIPC_BUFFER_SIZE);
#endif // !FEATURE_DBGIPC_TRANSPOPRT


@@ -1065,7 +1065,7 @@ public:
virtual
VMPTR_OBJECTHANDLE GetThreadObject(VMPTR_Thread vmThread) = 0;
//
// Get the allocation info corresponding to the specified thread.
//
@@ -2660,7 +2660,7 @@ public:
HRESULT GetNativeCodeVersionNode(VMPTR_MethodDesc vmMethod, CORDB_ADDRESS codeStartAddress, OUT VMPTR_NativeCodeVersionNode* pVmNativeCodeVersionNode) = 0;
// Retrieves the ILCodeVersionNode for a given NativeCodeVersionNode.
-// This may return a NULL node if the native code belongs to the default IL version for this this method.
+// This may return a NULL node if the native code belongs to the default IL version for this method.
//
//
// Arguments:


@@ -1219,7 +1219,7 @@ lDone: ;
}
PAL_EXCEPT(EXCEPTION_EXECUTE_HANDLER)
{
-//dbprintf("Exception occured manipulating .res file %S\n", szResFileName);
+//dbprintf("Exception occurred manipulating .res file %S\n", szResFileName);
param.hr = HRESULT_FROM_WIN32(ERROR_RESOURCE_DATA_NOT_FOUND);
}
PAL_ENDTRY


@@ -57,7 +57,7 @@ struct VirtualReserveFlags
};
// An event is a synchronization object whose state can be set and reset
-// indicating that an event has occured. It is used pervasively throughout
+// indicating that an event has occurred. It is used pervasively throughout
// the GC.
//
// Note that GCEvent deliberately leaks its contents by not having a non-trivial destructor.
@@ -81,7 +81,7 @@ public:
// is a logic error.
void CloseEvent();
-// "Sets" the event, indicating that a particular event has occured. May
+// "Sets" the event, indicating that a particular event has occurred. May
// wake up other threads waiting on this event. Depending on whether or
// not this event is an auto-reset event, the state of the event may
// or may not be automatically reset after Set is called.


@@ -207,7 +207,7 @@ T VolatileLoad(Volatile<T> const * pt)
}
//
-// VolatileStore stores a T into the target of a pointer to T. Is is guaranteed that this store will
+// VolatileStore stores a T into the target of a pointer to T. It is guaranteed that this store will
// not be optimized away by the compiler, and that any operation that occurs before this store, in program
// order, will not be moved after this store. In general, it is not guaranteed that the store will be
// atomic, though this is the case for most aligned scalar data types. If you need atomic loads or stores,


@@ -15041,7 +15041,7 @@ void allocator::copy_from_alloc_list (alloc_list* fromalist)
if (repair_list)
{
-//repair the the list
+//repair the list
//new items may have been added during the plan phase
//items may have been unlinked.
uint8_t* free_item = alloc_list_head_of (i);


@@ -123,7 +123,7 @@ enum failure_get_memory
fgm_commit_table = 5
};
-// A record of the last OOM that occured in the GC, with some
+// A record of the last OOM that occurred in the GC, with some
// additional information as to what triggered the OOM.
struct oom_history
{


@@ -612,7 +612,7 @@ public:
// Gets memory related information the last GC observed. Depending on the last arg, this could
// be any last GC that got recorded, or of the kind specified by this arg. All info below is
// what was observed by that last GC.
-//
+//
// highMemLoadThreshold - physical memory load (in percentage) when GC will start to
// react aggressively to reclaim memory.
// totalPhysicalMem - the total amount of phyiscal memory available on the machine and the memory
@@ -621,7 +621,7 @@ public:
// lastRecordedHeapSizeBytes - total managed heap size.
// lastRecordedFragmentation - total fragmentation in the managed heap.
// totalCommittedBytes - total committed bytes by the managed heap.
-// promotedBytes - promoted bytes.
+// promotedBytes - promoted bytes.
// pinnedObjectCount - # of pinned objects observed.
// finalizationPendingCount - # of objects ready for finalization.
// index - the index of the GC.
@@ -741,8 +741,8 @@ public:
// Returns whether or not a GC is in progress.
virtual bool IsGCInProgressHelper(bool bConsiderGCStart = false) = 0;
-// Returns the number of GCs that have occured. Mainly used for
-// sanity checks asserting that a GC has not occured.
+// Returns the number of GCs that have occurred. Mainly used for
+// sanity checks asserting that a GC has not occurred.
virtual unsigned GetGcCount() = 0;
// Gets whether or not the home heap of this alloc context matches the heap
@@ -785,11 +785,11 @@ public:
============================================================================
*/
-// Get the timestamp corresponding to the last GC that occured for the
+// Get the timestamp corresponding to the last GC that occurred for the
// given generation.
virtual size_t GetLastGCStartTime(int generation) = 0;
-// Gets the duration of the last GC that occured for the given generation.
+// Gets the duration of the last GC that occurred for the given generation.
virtual size_t GetLastGCDuration(int generation) = 0;
// Gets a timestamp for the current moment in time.


@@ -992,7 +992,7 @@ BOOL Assembler::EmitField(FieldDescriptor* pFD)
}
}
//--------------------------------------------------------------------------------
-// Set the the RVA to a dummy value. later it will be fixed
+// Set the RVA to a dummy value. later it will be fixed
// up to be something correct, but if we don't emit something
// the size of the meta-data will not be correct
if (pFD->m_rvaLabel)


@@ -69,7 +69,7 @@ HANDLE ClrGetProcessExecutableHeap();
extern int RFS_HashStack();
#endif
-// Critical section support for CLR DLLs other than the the EE.
+// Critical section support for CLR DLLs other than the EE.
// Include the header defining each Crst type and its corresponding level (relative rank). This is
// auto-generated from a tool that takes a high-level description of each Crst type and its dependencies.
#include "crsttypes.h"


@@ -2364,7 +2364,7 @@ interface ICorDebugAppDomain4 : IUnknown
/* ------------------------------------------------------------------------- *
* Assembly interface
-* An ICorDebugAssembly instance corresponds to a a managed assembly loaded
+* An ICorDebugAssembly instance corresponds to a managed assembly loaded
* into a specific AppDomain in the CLR. For assemblies shared between multiple
* AppDomains (eg. CoreLib), there will be a separate ICorDebugAssembly instance
* per AppDomain in which it is used.
@@ -6637,7 +6637,7 @@ interface ICorDebugObjectValue : ICorDebugValue
HRESULT GetContext([out] ICorDebugContext **ppContext);
/*
-* IsValueClass returns true if the the class of this object is
+* IsValueClass returns true if the class of this object is
* a value class.
*/


@@ -872,7 +872,7 @@
<HRESULT NumericValue="0x80131342">
<SymbolicName>CORDBG_E_ENC_HANGING_FIELD</SymbolicName>
<Message>"The field was added via Edit and Continue after the class was loaded."</Message>
-<Comment> The field was added via EnC after the class was loaded, and so instead of the the field being contiguous with the other fields, it's 'hanging' off the instance or type. This error is used to indicate that either the storage for this field is not yet available and so the field value cannot be read, or the debugger needs to use an EnC specific code path to get the value.</Comment>
+<Comment> The field was added via EnC after the class was loaded, and so instead of the field being contiguous with the other fields, it's 'hanging' off the instance or type. This error is used to indicate that either the storage for this field is not yet available and so the field value cannot be read, or the debugger needs to use an EnC specific code path to get the value.</Comment>
</HRESULT>
<HRESULT NumericValue="0x80131343">


@@ -1353,7 +1353,7 @@ interface ICorProfilerCallback : IUnknown
/*
* The CLR calls RemotingClientSendingMessage to notify the profiler that
-* a remoting call is requiring the the caller to send an invocation request through
+* a remoting call is requiring the caller to send an invocation request through
* a remoting channel.
*
* pCookie - if remoting GUID cookies are active, this value will correspond with the
@@ -2507,7 +2507,7 @@ interface ICorProfilerCallback7 : ICorProfilerCallback6
// in-memory module is updated. Even when symbols are provided up-front in
// a call to the managed API Assembly.Load(byte[], byte[], ...) the runtime
// may not actually associate the symbolic data with the module until after
-// the ModuleLoadFinished callback has occured. This event provides a later
+// the ModuleLoadFinished callback has occurred. This event provides a later
// opportunity to collect symbols for such modules.
//
// This event is controlled by the COR_PRF_HIGH_IN_MEMORY_SYMBOLS_UPDATED
@@ -4080,7 +4080,7 @@ interface ICorProfilerInfo9 : ICorProfilerInfo8
//Given the native code start address, return the native->IL mapping information for this jitted version of the code
HRESULT GetILToNativeMapping3(UINT_PTR pNativeCodeStartAddress, ULONG32 cMap, ULONG32 *pcMap, COR_DEBUG_IL_TO_NATIVE_MAP map[]);
-//Given the native code start address, return the the blocks of virtual memory that store this code (method code is not necessarily stored in a single contiguous memory region)
+//Given the native code start address, return the blocks of virtual memory that store this code (method code is not necessarily stored in a single contiguous memory region)
HRESULT GetCodeInfo4(UINT_PTR pNativeCodeStartAddress, ULONG32 cCodeInfos, ULONG32* pcCodeInfos, COR_PRF_CODE_INFO codeInfos[]);
};


@@ -1611,7 +1611,7 @@ public:
return DacGlobalBase() + *m_rvaPtr;
}
-// This is only testing the the pointer memory is available but does not verify
+// This is only testing the pointer memory is available but does not verify
// the memory that it points to.
//
bool IsValidPtr(void) const


@@ -570,7 +570,7 @@ public:
// (NOTE: RegexIterator and InputIterator are often typedef'ed to be the same thing.)
// 3. "Item" typedef.
// This will be used with methods GetItem and MatchItem (see below). Item must
-// define the the following methods:
+// define the following methods:
// ItemType GetType() : returns the type of the item. See below for explanation of ItemType
// const RegexIterator& GetNext() : iterator pointing to the start of the next item.
// 4. "MatchFlags" typedef, and "static const DefaultMatchFlags" value.


@@ -34,7 +34,7 @@ consistency's sake.
- DON'T FORGET CONTRACTS: Most of these APIs will likely be Throws/GC_Notrigger.
Also use PRECONDITIONs + POSTCONDITIONS when possible.
-- SIGNATURES: Keep the method signture as close the the original win32 API as possible.
+- SIGNATURES: Keep the method signture as close the original win32 API as possible.
- Preserve the return type + value. (except allow it to throw on oom). If the return value
should be a holder, then use that as an out-parameter at the end of the argument list.
We don't want to return holders because that will cause the dtors to be called.


@@ -192,7 +192,7 @@ private:
// Set this string to the UTF8 character
void SetUTF8(CHAR character);
-// This this string to the given literal. We share the mem and don't make a copy.
+// Set this string to the given literal. We share the mem and don't make a copy.
void SetLiteral(const CHAR *literal);
void SetLiteral(const WCHAR *literal);


@@ -3952,7 +3952,7 @@ LPWSTR *SegmentCommandLine(LPCWSTR lpCmdLine, DWORD *pNumArgs);
//
// These accessors serve the purpose of retrieving information from the
// TEB in a manner that ensures that the current fiber will not switch
-// threads while the access is occuring.
+// threads while the access is occurring.
//
class ClrTeb
{


@@ -217,7 +217,7 @@ T VolatileLoad(Volatile<T> const * pt)
}
//
-// VolatileStore stores a T into the target of a pointer to T. Is is guaranteed that this store will
+// VolatileStore stores a T into the target of a pointer to T. It is guaranteed that this store will
// not be optimized away by the compiler, and that any operation that occurs before this store, in program
// order, will not be moved after this store. In general, it is not guaranteed that the store will be
// atomic, though this is the case for most aligned scalar data types. If you need atomic loads or stores,


@@ -197,7 +197,7 @@ struct allMemoryKinds
// BB2 of the corresponding handler to be an "EH successor" of BB1. Because we
// make the conservative assumption that control flow can jump from a try block
// to its handler at any time, the immediate (regular control flow)
-// predecessor(s) of the the first block of a try block are also considered to
+// predecessor(s) of the first block of a try block are also considered to
// have the first block of the handler as an EH successor. This makes variables that
// are "live-in" to the handler become "live-out" for these try-predecessor block,
// so that they become live-in to the try -- which we require.


@@ -854,7 +854,7 @@ void CodeGen::genSaveCalleeSavedRegisterGroup(regMaskTP regsMask, int spDelta, i
// to high addresses. This means that integer registers are saved at lower addresses than floatint-point/SIMD
// registers. However, when genSaveFpLrWithAllCalleeSavedRegisters is true, the integer registers are stored
// at higher addresses than floating-point/SIMD registers, that is, the relative order of these two classes
-// is reveresed. This is done to put the saved frame pointer very high in the frame, for simplicity.
+// is reversed. This is done to put the saved frame pointer very high in the frame, for simplicity.
//
// TODO: We could always put integer registers at the higher addresses, if desired, to remove this special
// case. It would cause many asm diffs when first implemented.


@@ -974,7 +974,7 @@ void CodeGen::genPutArgStk(GenTreePutArgStk* treeNode)
}
// If we have an HFA we can't have any GC pointers,
-// if not then the max size for the the struct is 16 bytes
+// if not then the max size for the struct is 16 bytes
if (isHfa)
{
noway_assert(!layout->HasGCPtr());


@@ -3006,7 +3006,7 @@ void CodeGen::genFnPrologCalleeRegArgs(regNumber xtraReg, bool* pXtraRegClobbere
// When we have a promoted struct we have two possible LclVars that can represent the incoming argument
// in the regArgTab[], either the original TYP_STRUCT argument or the introduced lvStructField.
// We will use the lvStructField if we have a TYPE_INDEPENDENT promoted struct field otherwise
-// use the the original TYP_STRUCT argument.
+// use the original TYP_STRUCT argument.
//
if (varDsc->lvPromoted || varDsc->lvIsStructField)
{
@@ -6658,7 +6658,7 @@ unsigned Compiler::GetHfaCount(CORINFO_CLASS_HANDLE hClass)
//
// Note:
// On x64 Windows the caller always creates slots (homing space) in its frame for the
-// first 4 arguments of a callee (register passed args). So, the the variable number
+// first 4 arguments of a callee (register passed args). So, the variable number
// (lclNum) for the first argument with a stack slot is always 0.
// For System V systems or armarch, there is no such calling convention requirement, and the code
// needs to find the first stack passed argument from the caller. This is done by iterating over
@@ -8569,7 +8569,7 @@ void CodeGenInterface::VariableLiveKeeper::VariableLiveRange::dumpVariableLiveRa
// LiveRangeDumper
//------------------------------------------------------------------------
//------------------------------------------------------------------------
-// resetDumper: If the the "liveRange" has its last "VariableLiveRange" closed, it makes
+// resetDumper: If the "liveRange" has its last "VariableLiveRange" closed, it makes
// the "LiveRangeDumper" points to end of "liveRange" (nullptr). In other case,
// it makes the "LiveRangeDumper" points to the last "VariableLiveRange" of
// "liveRange", which is opened.


@@ -566,7 +566,7 @@ void CodeGen::genSaveCalleeSavedRegisterGroup(regMaskTP regsMask, int spDelta, i
// The caller can tell us to fold in a stack pointer adjustment, which we will do with the first instruction.
// Note that the stack pointer adjustment must be by a multiple of 16 to preserve the invariant that the
// stack pointer is always 16 byte aligned. If we are saving an odd number of callee-saved
-// registers, though, we will have an empty aligment slot somewhere. It turns out we will put
+// registers, though, we will have an empty alignment slot somewhere. It turns out we will put
// it below (at a lower address) the callee-saved registers, as that is currently how we
// do frame layout. This means that the first stack offset will be 8 and the stack pointer
// adjustment must be done by a SUB, and not folded in to a pre-indexed store.
@@ -9049,7 +9049,7 @@ void CodeGen::genFnPrologCalleeRegArgs()
// When we have a promoted struct we have two possible LclVars that can represent the incoming argument
// in the regArgTab[], either the original TYP_STRUCT argument or the introduced lvStructField.
// We will use the lvStructField if we have a TYPE_INDEPENDENT promoted struct field otherwise
-// use the the original TYP_STRUCT argument.
+// use the original TYP_STRUCT argument.
//
if (varDsc->lvPromoted || varDsc->lvIsStructField)
{


@@ -457,7 +457,7 @@ bool Compiler::isNativePrimitiveStructType(CORINFO_CLASS_HANDLE clsHnd)
//-----------------------------------------------------------------------------
// getPrimitiveTypeForStruct:
-// Get the "primitive" type that is is used for a struct
+// Get the "primitive" type that is used for a struct
// of size 'structSize'.
// We examine 'clsHnd' to check the GC layout of the struct and
// return TYP_REF for structs that simply wrap an object.
@@ -5979,7 +5979,7 @@ void Compiler::compCompileFinish()
if ((info.compILCodeSize <= 32) && // Is it a reasonably small method?
(info.compNativeCodeSize < 512) && // Some trivial methods generate huge native code. eg. pushing a single huge
// struct
-(impInlinedCodeSize <= 128) && // Is the the inlining reasonably bounded?
+(impInlinedCodeSize <= 128) && // Is the inlining reasonably bounded?
// Small methods cannot meaningfully have a big number of locals
// or arguments. We always track arguments at the start of
// the prolog which requires memory


@@ -1845,7 +1845,7 @@ public:
bool verboseSsa; // If true, produce especially verbose dump output in SSA construction.
bool shouldUseVerboseSsa();
bool treesBeforeAfterMorph; // If true, print trees before/after morphing (paired by an intra-compilation id:
-int morphNum; // This counts the the trees that have been morphed, allowing us to label each uniquely.
+int morphNum; // This counts the trees that have been morphed, allowing us to label each uniquely.
bool doExtraSuperPmiQueries;
void makeExtraStructQueries(CORINFO_CLASS_HANDLE structHandle, int level); // Make queries recursively 'level' deep.
@@ -3071,7 +3071,7 @@ public:
#ifdef JIT32_GCENCODER
-unsigned lvaLocAllocSPvar; // variable which stores the value of ESP after the the last alloca/localloc
+unsigned lvaLocAllocSPvar; // variable which stores the value of ESP after the last alloca/localloc
#endif // JIT32_GCENCODER
@@ -4899,7 +4899,7 @@ public:
SPK_ByReference
}; // The struct is passed/returned by reference to a copy/buffer.
-// Get the "primitive" type that is is used when we are given a struct of size 'structSize'.
+// Get the "primitive" type that is used when we are given a struct of size 'structSize'.
// For pointer sized structs the 'clsHnd' is used to determine if the struct contains GC ref.
// A "primitive" type is one of the scalar types: byte, short, int, long, ref, float, double
// If we can't or shouldn't use a "primitive" type then TYP_UNKNOWN is returned.
@@ -8489,10 +8489,10 @@ private:
return sizeBytes;
}
-// Get the the number of elements of baseType of SIMD vector given by its size and baseType
+// Get the number of elements of baseType of SIMD vector given by its size and baseType
static int getSIMDVectorLength(unsigned simdSize, var_types baseType);
-// Get the the number of elements of baseType of SIMD vector given by its type handle
+// Get the number of elements of baseType of SIMD vector given by its type handle
int getSIMDVectorLength(CORINFO_CLASS_HANDLE typeHnd);
// Get preferred alignment of SIMD type.
@@ -10430,7 +10430,7 @@ public:
// "op1" or its components is augmented by appending "fieldSeq". In practice, if "op1" is a GT_LCL_FLD, it has
// a field sequence as a member; otherwise, it may be the addition of an a byref and a constant, where the const
// has a field sequence -- in this case "fieldSeq" is appended to that of the constant; otherwise, we
-// record the the field sequence using the ZeroOffsetFieldMap described above.
+// record the field sequence using the ZeroOffsetFieldMap described above.
//
// One exception above is that "op1" is a node of type "TYP_REF" where "op1" is a GT_LCL_VAR.
// This happens when System.Object vtable pointer is a regular field at offset 0 in System.Private.CoreLib in


@@ -781,7 +781,7 @@ void Compiler::eeGetVars()
/* If extendOthers is set, then assume the scope of unreported vars
is the entire method. Note that this will cause fgExtendDbgLifetimes()
-to zero-initalize all of them. This will be expensive if it's used
+to zero-initialize all of them. This will be expensive if it's used
for too many variables.
*/
if (extendOthers)


@@ -3272,7 +3272,7 @@ emitter::instrDesc* emitter::emitNewInstrCallInd(int argCnt,
assert(id->idAddr()->iiaAddrMode.amDisp == disp);
#endif // TARGET_XARCH
-/* Save the the live GC registers in the unused register fields */
+/* Save the live GC registers in the unused register fields */
emitEncodeCallGCregs(gcrefRegs, id);
return id;
@@ -3344,7 +3344,7 @@ emitter::instrDesc* emitter::emitNewInstrCallDir(int argCnt,
/* Make sure we didn't waste space unexpectedly */
assert(!id->idIsLargeCns());
-/* Save the the live GC registers in the unused register fields */
+/* Save the live GC registers in the unused register fields */
emitEncodeCallGCregs(gcrefRegs, id);
return id;


@@ -270,7 +270,7 @@ struct insGroup
#define IGF_FUNCLET_PROLOG 0x0008 // this group belongs to a funclet prolog
#define IGF_FUNCLET_EPILOG 0x0010 // this group belongs to a funclet epilog.
#define IGF_EPILOG 0x0020 // this group belongs to a main function epilog
-#define IGF_NOGCINTERRUPT 0x0040 // this IG is is a no-interrupt region (prolog, epilog, etc.)
+#define IGF_NOGCINTERRUPT 0x0040 // this IG is in a no-interrupt region (prolog, epilog, etc.)
#define IGF_UPD_ISZ 0x0080 // some instruction sizes updated
#define IGF_PLACEHOLDER 0x0100 // this is a placeholder group, to be filled in later
#define IGF_EXTEND 0x0200 // this block is conceptually an extension of the previous block
@@ -2135,7 +2135,7 @@ private:
#endif
// Terminates any in-progress instruction group, making the current IG a new empty one.
-// Mark this instruction group as having a label; return the the new instruction group.
+// Mark this instruction group as having a label; return the new instruction group.
// Sets the emitter's record of the currently live GC variables
// and registers. The "isFinallyTarget" parameter indicates that the current location is
// the start of a basic block that is returned to after a finally clause in non-exceptional execution.


@@ -1967,7 +1967,7 @@ inline emitter::code_t emitter::insEncodeRRIb(instruction ins, regNumber reg, em
/*****************************************************************************
*
-* Returns the "+reg" opcode with the the given register set into the low
+* Returns the "+reg" opcode with the given register set into the low
* nibble of the opcode
*/
@@ -3499,7 +3499,7 @@ regNumber emitter::emitInsBinary(instruction ins, emitAttr attr, GenTree* dst, G
// * Local variable
//
// Most of these types (except Indirect: Class variable and Indirect: Addressing mode)
-// give us a a local variable number and an offset and access memory on the stack
+// give us a local variable number and an offset and access memory on the stack
//
// Indirect: Class variable is used for access static class variables and gives us a handle
// to the memory location we read from


@@ -549,7 +549,7 @@ void emitIns_Call(EmitCallType callType,
// Is the last instruction emitted a call instruction?
bool emitIsLastInsCall();
-// Insert a NOP at the end of the the current instruction group if the last emitted instruction was a 'call',
+// Insert a NOP at the end of the current instruction group if the last emitted instruction was a 'call',
// because the next instruction group will be an epilog.
void emitOutputPreEpilogNOP();
#endif // TARGET_AMD64


@@ -5457,7 +5457,7 @@ BasicBlock* Compiler::fgRelocateEHRange(unsigned regionIndex, FG_RELOCATE_TYPE r
// 4. A and X share the 'last' block. There are two sub-cases:
// (a) A is a larger range than X (such that the beginning of A precedes the
// beginning of X): in this case, we are moving the tail of A. We set the
-// 'last' block of A to the the block preceding the beginning block of X.
+// 'last' block of A to the block preceding the beginning block of X.
// (b) A is a smaller range than X. Thus, we are moving the entirety of A along
// with X. In this case, nothing in the EH record for A needs to change.
// 5. A and X share the 'beginning' block (but aren't the same range, as in #3).


@@ -1022,7 +1022,7 @@ PhaseStatus Compiler::fgCloneFinally()
// try { } catch { } finally { }
//
// will have two call finally blocks, one for the normal exit
-// from the try, and the the other for the exit from the
+// from the try, and the other for the exit from the
// catch. They'll both pass the same return point which is the
// statement after the finally, so they can share the clone.
//


@@ -5182,7 +5182,7 @@ bool Compiler::fgReorderBlocks()
}
// Set connected_bDest to true if moving blocks [bStart .. bEnd]
-// connects with the the jump dest of bPrev (i.e bDest) and
+// connects with the jump dest of bPrev (i.e bDest) and
// thus allows bPrev fall through instead of jump.
if (bNext == bDest)
{
@ -6010,7 +6010,7 @@ bool Compiler::fgUpdateFlowGraph(bool doTailDuplication)
//
if (fgIsUsingProfileWeights())
{
// if block and bdest are in different hot/cold regions we can't do this this optimization
// if block and bdest are in different hot/cold regions we can't do this optimization
// because we can't allow fall-through into the cold region.
if (!fgEdgeWeightsComputed || fgInDifferentRegions(block, bDest))
{

@@ -1305,7 +1305,7 @@ void EfficientEdgeCountInstrumentor::BuildSchemaElements(BasicBlock* block, Sche
 assert(probe->schemaIndex == -1);
 probe->schemaIndex = (int)schema.size();
-// Normally we use the the offset of the block in the schema, but for certain
+// Normally we use the offset of the block in the schema, but for certain
 // blocks we do not have any information we can use and need to use internal BB numbers.
 //
 int32_t sourceKey = EfficientEdgeCountBlockToKey(block);

@@ -2071,7 +2071,7 @@ unsigned PendingArgsStack::pasEnumGCoffsCount()
 }
 //-----------------------------------------------------------------------------
-// Initalize enumeration by passing in iter=pasENUM_START.
+// Initialize enumeration by passing in iter=pasENUM_START.
 // Continue by passing in the return value as the new value of iter
 // End of enumeration when pasENUM_END is returned
 // If return value != pasENUM_END, *offs is set to the offset for GCinfo
@@ -4124,7 +4124,7 @@ void GCInfo::gcMakeRegPtrTable(
 // Do we have an argument or local variable?
 if (!varDsc->lvIsParam)
 {
-// If is is pinned, it must be an untracked local.
+// If it is pinned, it must be an untracked local.
 assert(!varDsc->lvPinned || !varDsc->lvTracked);
 if (varDsc->lvTracked || !varDsc->lvOnFrame)

@@ -11342,7 +11342,7 @@ void Compiler::gtDispTree(GenTree* tree,
 if (IsUninitialized(tree))
 {
-/* Value used to initalize nodes */
+/* Value used to initialize nodes */
 printf("Uninitialized tree node!\n");
 return;
 }
@@ -12492,7 +12492,7 @@ GenTree* Compiler::gtFoldExprCall(GenTreeCall* call)
 // An alternative tree if folding happens.
 //
 // Notes:
-// If either operand is known to be a a RuntimeType, then the type
+// If either operand is known to be a RuntimeType, then the type
 // equality methods will simply check object identity and so we can
 // fold the call into a simple compare of the call's operands.
@@ -13430,7 +13430,7 @@ GenTree* Compiler::gtFoldBoxNullable(GenTree* tree)
 // This can be useful when the only part of the box that is "live"
 // is its type.
 //
-// If removal fails, is is possible that a subsequent pass may be
+// If removal fails, it is possible that a subsequent pass may be
 // able to optimize. Blocking side effects may now be minimized
 // (null or bounds checks might have been removed) or might be
 // better known (inline return placeholder updated with the actual

@@ -8046,7 +8046,7 @@ inline bool GenTree::IsIntegralConst(ssize_t constVal) const
 }
 //-------------------------------------------------------------------
-// IsIntegralConstVector: returns true if this this is a SIMD vector
+// IsIntegralConstVector: returns true if this is an SIMD vector
 // with all its elements equal to an integral constant.
 //
 // Arguments:
@@ -8105,8 +8105,8 @@ inline bool GenTree::IsIntegralConstVector(ssize_t constVal) const
 }
 //-------------------------------------------------------------------
-// IsSIMDZero: returns true if this this is a SIMD vector
-// with all its elements equal to zero.
+// IsSIMDZero: returns true if this is an SIMD vector with all its
+// elements equal to zero.
 //
 // Returns:
 // True if this represents an integral const SIMD vector.

@@ -1684,7 +1684,7 @@ void hashBv::InorderTraverse(nodeAction n)
 {
 // keep an array of the current pointers
-// into each of the the bitvector lists
+// into each of the bitvector lists
 // in the hashtable
 for (int i = 0; i < hts; i++)
 {

@@ -430,7 +430,7 @@ bool HWIntrinsicInfo::isImmOp(NamedIntrinsic id, const GenTree* op)
 // argType -- the required type of argument
 // argClass -- the class handle of argType
 // expectAddr -- if true indicates we are expecting type stack entry to be a TYP_BYREF.
-// newobjThis -- For CEE_NEWOBJ, this is the temp grabbed for the allocated uninitalized object.
+// newobjThis -- For CEE_NEWOBJ, this is the temp grabbed for the allocated uninitialized object.
 //
 // Return Value:
 // the validated argument

@@ -8636,7 +8636,7 @@ bool Compiler::impIsImplicitTailCallCandidate(
 // opcode - opcode that inspires the call
 // pResolvedToken - resolved token for the call target
 // pConstrainedResolvedToken - resolved constraint token (or nullptr)
-// newObjThis - tree for this pointer or uninitalized newobj temp (or nullptr)
+// newObjThis - tree for this pointer or uninitialized newobj temp (or nullptr)
 // prefixFlags - IL prefix flags for the call
 // callInfo - EE supplied info for the call
 // rawILOffset - IL offset of the opcode, used for guarded devirtualization.
@@ -8651,7 +8651,7 @@ bool Compiler::impIsImplicitTailCallCandidate(
 // opcode can be CEE_CALL, CEE_CALLI, CEE_CALLVIRT, or CEE_NEWOBJ.
 //
 // For CEE_NEWOBJ, newobjThis should be the temp grabbed for the allocated
-// uninitalized object.
+// uninitialized object.
 #ifdef _PREFAST_
 #pragma warning(push)
@@ -10159,7 +10159,7 @@ var_types Compiler::impImportJitTestLabelMark(int numArgs)
 {
 // A loop hoist annotation with value >= 100 means that the expression should be a static field access,
 // a GT_IND of a static field address, which should be the sum of a (hoistable) helper call and possibly some
-// offset within the the static field block whose address is returned by the helper call.
+// offset within the static field block whose address is returned by the helper call.
 // The annotation is saying that this address calculation, but not the entire access, should be hoisted.
 assert(node->OperGet() == GT_IND);
 tlAndN.m_num -= 100;
@@ -10407,7 +10407,7 @@ GenTree* Compiler::impFixupStructReturnType(GenTree* op,
 // really have a return buffer, but instead use it as a way
 // to keep the trees cleaner with fewer address-taken temps.
 //
-// Well now we have to materialize the the return buffer as
+// Well now we have to materialize the return buffer as
 // an address-taken temp. Then we can return the temp.
 //
 // NOTE: this code assumes that since the call directly

@@ -872,7 +872,7 @@ private:
 }
 // Finally, rewire the cold block to jump to the else block,
-// not fall through to the the check block.
+// not fall through to the check block.
 //
 coldBlock->bbJumpKind = BBJ_ALWAYS;
 coldBlock->bbJumpDest = elseBlock;

@@ -4518,7 +4518,7 @@ void Compiler::fgExtendEHRegionAfter(BasicBlock* block)
 // inserting the block and properly extending some EH regions (if necessary)
 // puts the block in the correct region. We only consider the case of extending
 // an EH region after 'blk' (that is, to include 'blk' and the newly insert block);
-// we don't consider inserting a block as the the first block of an EH region following 'blk'.
+// we don't consider inserting a block as the first block of an EH region following 'blk'.
 //
 // Consider this example:
 //

@@ -23,7 +23,7 @@ class ClassLayout
 const unsigned m_isValueClass : 1;
 INDEBUG(unsigned m_gcPtrsInitialized : 1;)
-// The number of GC pointers in this layout. Since the the maximum size is 2^32-1 the count
+// The number of GC pointers in this layout. Since the maximum size is 2^32-1 the count
 // can fit in at most 30 bits.
 unsigned m_gcPtrCount : 30;

@@ -1223,7 +1223,7 @@ void Compiler::lvaInitUserArgs(InitVarDscInfo* varDscInfo, unsigned skipArgs, un
 // If we needed to use the stack in order to pass this argument then
 // record the fact that we have used up any remaining registers of this 'type'
-// This prevents any 'backfilling' from occuring on ARM64/LoongArch64.
+// This prevents any 'backfilling' from occurring on ARM64/LoongArch64.
 //
 varDscInfo->setAllRegArgUsed(argType);
@@ -2263,7 +2263,7 @@ bool Compiler::StructPromotionHelper::ShouldPromoteStructVar(unsigned lclNum)
 //
 // If the lvRefCnt is zero and we have a struct promoted parameter we can end up with an extra store of
-// the the incoming register into the stack frame slot.
+// the incoming register into the stack frame slot.
 // In that case, we would like to avoid promortion.
 // However we haven't yet computed the lvRefCnt values so we can't do that.
 //

@@ -1183,7 +1183,7 @@ LIR::ReadOnlyRange LIR::Range::GetMarkedRange(unsigned markCount,
 return GenTree::VisitResult::Continue;
 });
-// Unmark the the node and update `firstNode`
+// Unmark the node and update `firstNode`
 firstNode->gtLIRFlags &= ~LIR::Flags::Mark;
 markCount--;
 }

@@ -2294,7 +2294,7 @@ bool Compiler::optExtractArrIndex(GenTree* tree, ArrIndex* result, unsigned lhsN
 result->rank++;
 // If the array element type (saved from the GT_INDEX node during morphing) is anything but
-// TYP_REF, then it must the the final level of jagged array.
+// TYP_REF, then it must the final level of jagged array.
 assert(arrBndsChk->gtInxType != TYP_VOID);
 *topLevelIsFinal = (arrBndsChk->gtInxType != TYP_REF);

@@ -2738,7 +2738,7 @@ bool LinearScan::isMatchingConstant(RegRecord* physRegRecord, RefPosition* refPo
 // To select a ref position for spilling.
 // - If refPosition->RegOptional() == false
 // The RefPosition chosen for spilling will be the lowest weight
-// of all and if there is is more than one ref position with the
+// of all and if there is more than one ref position with the
 // same lowest weight, among them choses the one with farthest
 // distance to its next reference.
 //

@@ -2234,7 +2234,7 @@ public:
 // spillAfter indicates that the value is spilled here, so a spill must be added.
 // singleDefSpill indicates that it is associated with a single-def var and if it
 // is decided to get spilled, it will be spilled at firstRefPosition def. That
-// way, the the value of stack will always be up-to-date and no more spills or
+// way, the value of stack will always be up-to-date and no more spills or
 // resolutions (from reg to stack) will be needed for such single-def var.
 // copyReg indicates that the value needs to be copied to a specific register,
 // but that it will also retain its current assigned register.

@@ -963,7 +963,7 @@ void CallArgs::ArgsComplete(Compiler* comp, GenTreeCall* call)
 if (argObj->AsObj()->gtOp1->IsLocalAddrExpr() == nullptr) // Is the source not a LclVar?
 {
 // If we don't have a LclVar we need to read exactly 3,5,6 or 7 bytes
-// For now we use a a GT_CPBLK to copy the exact size into a GT_LCL_VAR temp.
+// For now we use a GT_CPBLK to copy the exact size into a GT_LCL_VAR temp.
 //
 SetNeedsTemp(&arg);
 }
@@ -12923,7 +12923,7 @@ GenTree* Compiler::fgOptimizeRelationalComparisonWithConst(GenTreeOp* cmp)
 // node - HWIntrinsic node to examine
 //
 // Returns:
-// The original node if no optimization happened or if tree bashing occured.
+// The original node if no optimization happened or if tree bashing occurred.
 // An alternative tree if an optimization happened.
 //
 // Notes:

@@ -2312,7 +2312,7 @@ public:
 If we are unable to enregister the CSE then the cse-use-cost is IND_COST
 and the cse-def-cost is also IND_COST.
-If we want to be conservative we use IND_COST as the the value
+If we want to be conservative we use IND_COST as the value
 for both cse-def-cost and cse-use-cost and then we never introduce
 a CSE that could pessimize the execution time of the method.

@@ -6407,7 +6407,7 @@ void Compiler::optRecordLoopMemoryDependence(GenTree* tree, BasicBlock* block, V
 updateLoopNum = updateParentLoopNum;
 }
-// If the update block is not the the header of a loop containing
+// If the update block is not the header of a loop containing
 // block, we can also ignore the update.
 //
 if (!optLoopContains(updateLoopNum, loopNum))

@@ -1486,7 +1486,7 @@ SIMDIntrinsicID Compiler::impSIMDRelOp(SIMDIntrinsicID relOpIntrinsicId,
 //
 // Arguments:
 // opcode - the opcode being handled (needed to identify the CEE_NEWOBJ case)
-// newobjThis - For CEE_NEWOBJ, this is the temp grabbed for the allocated uninitalized object.
+// newobjThis - For CEE_NEWOBJ, this is the temp grabbed for the allocated uninitialized object.
 // clsHnd - The handle of the class of the method.
 //
 // Return Value:
@@ -1870,7 +1870,7 @@ void Compiler::impMarkContiguousSIMDFieldAssignments(Statement* stmt)
 //
 // Arguments:
 // opcode - the opcode being handled (needed to identify the CEE_NEWOBJ case)
-// newobjThis - For CEE_NEWOBJ, this is the temp grabbed for the allocated uninitalized object.
+// newobjThis - For CEE_NEWOBJ, this is the temp grabbed for the allocated uninitialized object.
 // clsHnd - The handle of the class of the method.
 // method - The handle of the method.
 // sig - The call signature for the method.

@@ -706,7 +706,7 @@ C_ASSERT(sizeof(target_ssize_t) == TARGET_POINTER_SIZE);
 #if defined(TARGET_X86)
 // instrDescCns holds constant values for the emitter. The X86 compiler is unique in that it
 // may represent relocated pointer values with these constants. On the 64bit to 32 bit
-// cross-targetting jit, the the constant value must be represented as a 64bit value in order
+// cross-targetting jit, the constant value must be represented as a 64bit value in order
 // to represent these pointers.
 typedef ssize_t cnsval_ssize_t;
 typedef size_t cnsval_size_t;

@@ -15,7 +15,7 @@
 #define CPBLK_UNROLL_LIMIT 64 // Upper bound to let the code generator to loop unroll CpBlk.
 #define INITBLK_UNROLL_LIMIT 128 // Upper bound to let the code generator to loop unroll InitBlk.
-#define CPOBJ_NONGC_SLOTS_LIMIT 4 // For CpObj code generation, this is the the threshold of the number
+#define CPOBJ_NONGC_SLOTS_LIMIT 4 // For CpObj code generation, this is the threshold of the number
 // of contiguous non-gc slots that trigger generating rep movsq instead of
 // sequences of movsq instructions

@@ -15,7 +15,7 @@
 #define CPBLK_UNROLL_LIMIT 64 // Upper bound to let the code generator to loop unroll CpBlk.
 #define INITBLK_UNROLL_LIMIT 128 // Upper bound to let the code generator to loop unroll InitBlk.
-#define CPOBJ_NONGC_SLOTS_LIMIT 4 // For CpObj code generation, this is the the threshold of the number
+#define CPOBJ_NONGC_SLOTS_LIMIT 4 // For CpObj code generation, this is the threshold of the number
 // of contiguous non-gc slots that trigger generating rep movsq instead of
 // sequences of movsq instructions

@@ -70,7 +70,7 @@ class UnwindEpilogInfo;
 class UnwindFragmentInfo;
 class UnwindInfo;
-// UnwindBase: A base class shared by the the unwind classes that require
+// UnwindBase: A base class shared by the unwind classes that require
 // a Compiler* for memory allocation.
 class UnwindBase
@@ -90,7 +90,7 @@ protected:
 Compiler* uwiComp;
 };
-// UnwindCodesBase: A base class shared by the the classes used to represent the prolog
+// UnwindCodesBase: A base class shared by the classes used to represent the prolog
 // and epilog unwind codes.
 class UnwindCodesBase

@@ -7299,7 +7299,7 @@ void Compiler::fgValueNumber()
 else if (info.compInitMem || varDsc->lvMustInit ||
 VarSetOps::IsMember(this, fgFirstBB->bbLiveIn, varDsc->lvVarIndex))
 {
-// The last clause covers the use-before-def variables (the ones that are live-in to the the first block),
+// The last clause covers the use-before-def variables (the ones that are live-in to the first block),
 // these are variables that are read before being initialized (at least on some control flow paths)
 // if they are not must-init, then they get VNF_InitVal(i), as with the param case.)

@@ -486,7 +486,7 @@ public:
 #endif // FEATURE_SIMD
 // Create or return the existimg value number representing a singleton exception set
-// for the the exception value "x".
+// for the exception value "x".
 ValueNum VNExcSetSingleton(ValueNum x);
 ValueNumPair VNPExcSetSingleton(ValueNumPair x);

@@ -1797,7 +1797,7 @@ ErrExit:
 //*****************************************************************************
 // return a pointer which points to meta data's internal string
-// return the the type name in utf8
+// return the type name in utf8
 //*****************************************************************************
 __checkReturn
 HRESULT

@@ -936,7 +936,7 @@ HRESULT MDInternalRO::FindParamOfMethod(// S_OK or error.
 //*****************************************************************************
 // return a pointer which points to meta data's internal string
-// return the the type name in utf8
+// return the type name in utf8
 //*****************************************************************************
 __checkReturn
 HRESULT

@@ -1592,7 +1592,7 @@ namespace Internal.Runtime
 // This is a function pointer with the following signature IntPtr()(MethodTable* targetType, MethodTable* interfaceType, ushort slot)
 private delegate*<MethodTable*, MethodTable*, ushort, IntPtr> _dynamicTypeSlotDispatchResolve;
-// Starting address for the the binary module corresponding to this dynamic module.
+// Starting address for the binary module corresponding to this dynamic module.
 private delegate*<ExceptionIDs, Exception> _getRuntimeException;
 #if TYPE_LOADER_IMPLEMENTATION

@@ -15,7 +15,7 @@
 // * No use of runtime facilities that check whether a GC is in progress, these will deadlock. The big
 // example we know about so far is making a p/invoke call.
 // * For the AfterMarkPhase callout special attention must be paid to avoid any action that reads the MethodTable*
-// from an object header (e.g. casting). At this point the GC may have mark bits set in the the pointer.
+// from an object header (e.g. casting). At this point the GC may have mark bits set in the pointer.
 //
 class MethodTable;

@@ -318,7 +318,7 @@ NESTED_ENTRY RhpHijackForGcStress, _TEXT
 lea rcx, [rsp + 20h + 6*10h + 2*8h] ;; address of PAL_LIMITED_CONTEXT
 call THREAD__HIJACKFORGCSTRESS
-;; Note: we only restore the scratch registers here. No GC has occured, so restoring
+;; Note: we only restore the scratch registers here. No GC has occurred, so restoring
 ;; the callee saved ones is unnecessary.
 mov rax, [rsp + 20h + 6*10h + 2*8h + OFFSETOF__PAL_LIMITED_CONTEXT__Rax]
 mov rcx, [rsp + 20h + 6*10h + 0*8h]

@@ -34,7 +34,7 @@ LEAF_ENTRY RhpInterfaceDispatch\entries, _TEXT
 CurrentOffset = CurrentOffset + 16
 .endr
-// r10 still contains the the indirection cell address.
+// r10 still contains the indirection cell address.
 jmp C_FUNC(RhpInterfaceDispatchSlow)
 LEAF_END RhpInterfaceDispatch\entries, _TEXT

@@ -45,7 +45,7 @@ CurrentEntry = 0
 CurrentEntry = CurrentEntry + 1
 endm
-;; r10 still contains the the indirection cell address.
+;; r10 still contains the indirection cell address.
 jmp RhpInterfaceDispatchSlow

Some files were not shown because too many files have changed in this diff.