...by avoiding `CreateListFromArrayLike` in cases where we can directly
use the elements of the underlying object's indexed property storage.
Makes this program go 2.1x faster:
```js
function target(a, b, c) {
    return a + b + c;
}
const args = [1, 2, 3];
let result = 0;
(function() {
    for (let i = 0; i < 10_000_000; i++) {
        result += target.apply(null, args);
    }
})();
```
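For contrast, a sketch of a call that presumably cannot take this fast path: a Proxy has no indexed property storage of its own, so its elements must be read through traps.
```js
function sum(a, b, c) {
    return a + b + c;
}
// Element reads on a Proxy go through [[Get]] traps, so the generic
// CreateListFromArrayLike path still applies to this call.
const proxied = new Proxy([1, 2, 3], {});
sum.apply(null, proxied);
```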
Before this change, each built-in iterator object had a boolean
`m_next_method_was_redefined`. If user code later changed the iterator's
prototype (e.g. via `Object.setPrototypeOf()`), we still believed the
built-in fast path was safe and skipped the user-supplied override,
producing wrong results.
With this change,
`BuiltinIterator::as_builtin_iterator_if_next_is_not_redefined()` looks
up the current `next` property and verifies that it is still the
built-in native function.
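A minimal sketch of the scenario being guarded against (redefining `next` by swapping the iterator's prototype):
```js
const it = [1, 2, 3][Symbol.iterator]();
let calls = 0;
Object.setPrototypeOf(it, {
    [Symbol.iterator]() {
        return this;
    },
    next() {
        // A user-supplied next() that must not be skipped by the fast path.
        return calls++ < 2 ? { value: "patched", done: false } : { done: true };
    },
});
for (const value of it) {
    console.log(value); // "patched", "patched"
}
```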
In the following example:
```js
const f = (i) => ({
    obj: { a: { x: i }, b: { x: i } },
    g: () => {},
});
```
The body of function `f` is initially parsed as an arrow function. As a
result, what is actually an object expression is interpreted as a formal
parameter with a binding pattern. Since duplicate identifiers are not
allowed in this context (`i` in the example), the parser generates an
error, causing the entire script to fail parsing.
This change ignores the "Duplicate parameter names in bindings" error
during arrow function parameter parsing, allowing the parser to continue
and recognize the object expression of the outer arrow function with an
implicit return.
Fixes error on https://chat.openai.com/
...when Array.prototype and Object.prototype are intact.
If `internal_set()` is called on an array exotic object with a numeric
PropertyKey, and:
- the prototype chain has not been modified (i.e., there are no getters
or setters for indexed properties), and
- the array is not the target of a Proxy object,
then we can directly store the value in the receiver's indexed
properties, without checking whether it already exists somewhere in the
prototype chain.
1.7x improvement on the following program:
```js
function f() {
    let a = [];
    let i = 0;
    while (i < 10_000_000) {
        a.push(i);
        i++;
    }
}
f();
```
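For contrast, a sketch of why the prototype check matters in the general case: an indexed accessor on Array.prototype must still be honored, so the direct store only applies while the prototypes are intact.
```js
Object.defineProperty(Array.prototype, "0", {
    configurable: true,
    set(value) {
        console.log("prototype setter saw", value);
    },
});
const a = [];
a[0] = 1; // must invoke the setter above, so the direct store cannot be used
```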
Fixes a bug that reproduces with the following steps:
1. Create an object with a getter for property "a" in its prototype,
where the getter adds an "a" property to the object itself.
2. Call the "a" getter in a loop for the first time. This triggers
caching of metadata indicating that the "a" property is located in
the prototype chain.
3. Call the "a" getter in a loop for the second time. Oops, the cache
says the getter is in the prototype chain, but the object now
also has its own "a" property that was added by the first getter
call.
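A sketch of that reproduction:
```js
const proto = {
    get a() {
        // Step 1: the getter installs an own "a" on the receiver itself.
        Object.defineProperty(this, "a", { value: 123 });
        return 123;
    },
};
const obj = Object.create(proto);

// Step 2: the first loop caches "a" as living in the prototype chain.
for (let i = 0; i < 10; i++) obj.a;

// Step 3: the second loop must see the own "a" added by the getter,
// not blindly trust the cached metadata.
for (let i = 0; i < 10; i++) obj.a;
```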
81b6a11 regressed correctness by always bypassing the `next()` method
resolution for built-in iterators, causing incorrect behavior when
`next()` was redefined on built-in prototypes. This change fixes the
issue by storing a flag on built-in prototypes indicating whether
`next()` has ever been redefined.
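For example, redefining `next()` on a built-in iterator prototype has to disable the fast path:
```js
const arrayIteratorPrototype = Object.getPrototypeOf([][Symbol.iterator]());
arrayIteratorPrototype.next = function () {
    // Once this assignment happens, the flag on the prototype is set and
    // for-of / spread must call this function instead of the built-in next().
    return { value: "patched", done: true };
};
console.log([...[1, 2, 3]]); // [] because the redefined next() reports done immediately
```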
Instead of creating a second ExecutionContext in BoundFunction.[[Call]],
we now implement BoundFunction::get_stack_frame_size() and combine
information from the target + the bound arguments list.
This allows BoundFunction.[[Call]] to reuse the already-established
ExecutionContext for the callee.
1.20x speedup on MicroBench/bound-call-04-args.js
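A sketch of the kind of workload that benchmark exercises (assumed shape; the actual MicroBench file may differ):
```js
function target(a, b, c, d) {
    return a + b + c + d;
}
const bound = target.bind(null, 1, 2);
let result = 0;
for (let i = 0; i < 1_000_000; i++) {
    // [[Call]] on the BoundFunction now runs in the callee's
    // already-established ExecutionContext instead of creating a second one.
    result += bound(3, 4);
}
```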
We were previously unable to use simdutf for base64 decoding operations
other than "loose". Upstream has added support for the "strict" and
"stop-before-partial" operations, so let's make use of them!
We cached the length identifier for GetLength, but not
GetLengthWithThis. This caused a `has_value()` verification failure
when accessing super.length. Found by Fuzzilli.
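A minimal sketch of the failing shape:
```js
class C {
    m() {
        return super.length; // emits GetLengthWithThis; previously tripped the verification
    }
}
new C().m();
```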
This prevents variables declared inside a class static initializer
from escaping to the nearest containing function, which caused all
sorts of memory corruption.
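A sketch of the intended scoping:
```js
(function () {
    class C {
        static {
            var leaked = 1; // must stay scoped to the static initializer
        }
    }
    console.log(typeof leaked); // "undefined": the declaration must not escape
})();
```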
Before, if the cache was empty, we would try to evict non-existent
entries and crash. The fix is to make sure that we don't saturate
the cache with a single parse result.
This works because at the end of the finally chunk, a
ContinuePendingUnwind is generated, which copies the saved return value
register into the return value register. In cases where
ContinuePendingUnwind is not generated, such as when there is a break
statement in the finally block, the function will return undefined,
which is consistent with V8 and SpiderMonkey.
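A sketch of the break-in-finally case described above:
```js
function f() {
    outer: {
        try {
            return 1; // this pending return is abandoned by the break below
        } finally {
            break outer; // no ContinuePendingUnwind is generated here
        }
    }
    // control resumes here and falls off the end of the function
}
console.log(f()); // undefined, matching V8 and SpiderMonkey
```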
Before this change, we would enumerate all the keys with
[[OwnPropertyKeys]], and then do [[GetOwnPropertyDescriptor]] twice for
each key as we went through them.
We now only do one [[GetOwnPropertyDescriptor]] per key, which
drastically reduces the number of proxy traps when those are involved.
The new trap sequence matches what you get with V8, so I don't think
anyone will be unpleasantly surprised here.
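One way to observe the trap sequence is with a counting proxy; for-in is used here as one enumeration that goes through these internal methods:
```js
let descriptorTraps = 0;
const proxy = new Proxy({ a: 1, b: 2, c: 3 }, {
    getOwnPropertyDescriptor(target, key) {
        descriptorTraps++;
        return Reflect.getOwnPropertyDescriptor(target, key);
    },
});
for (const key in proxy) {
    // enumeration drives [[OwnPropertyKeys]] plus [[GetOwnPropertyDescriptor]]
}
console.log(descriptorTraps); // expected: one trap per key after this change
```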
Currently, we create `this_argument` with
`ordinary_create_from_constructor`, then we use `arguments_list` to
build the callee_context.
The issue is that we don't properly model the side effects of
`ordinary_create_from_constructor`: if `new_target` is a proxy object,
then arbitrary JavaScript can run when we `get` the prototype.
This JavaScript could perform a function call with enough arguments to
reallocate the interpreter's m_argument_values_buffer vector. This is
dangerous and leads to a use-after-free, as our stack frame maintains a
pointer into m_argument_values_buffer (`arguments_list`).
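A hypothetical sketch of the hazardous shape (the `spray` helper and the argument count are illustrative only):
```js
class Base {}
function spray(...args) {
    // Called with many arguments to force the interpreter's argument
    // buffer to grow and reallocate while the construct is still in flight.
}
const newTarget = new Proxy(Base, {
    get(target, key, receiver) {
        if (key === "prototype") {
            // Runs while ordinary_create_from_constructor fetches
            // new_target.prototype, i.e. before the callee context is built.
            spray(...Array(10_000).fill(0));
        }
        return Reflect.get(target, key, receiver);
    },
});
Reflect.construct(Base, [], newTarget);
```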
This is a normative change in the ECMA-402 spec. See:
7508197
In our implementation, we don't have the affected AOs directly, as we
delegate to ICU. So instead, we must ensure we provide ICU a locale with
the relevant extension keys present.
If the regex always matches the input, even when the position is past
the end, then we need to stop executing the regex once it is past the
end. This corresponds to step 13.a and prevents an infinite loop.
Reduced from: d98672060f/packages/react-i18n/src/utilities/money.ts (L10-L14)
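For illustration, the general shape of the hazard is a pattern that can match the empty string at every position, including past the end of the input:
```js
// Each empty match advances the position by one; once it is past the end,
// the match must fail (step 13.a) so this loop can terminate.
for (const match of "1,234.56".matchAll(/\d*/g)) {
    // visits each position once, then stops
}
```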
The new test case crashes during bytecode generation due to
`emit_super_reference` not correctly generating the reference record
for the property access.
The promise job's fulfillment / rejection handlers may push an execution
context onto the VM, which will throw an internal error if our ad-hoc
call stack size limit has been reached. Thus, we cannot blindly VERIFY
that the result of invoking these handlers is non-abrupt.
This patch propagates any internal error forward, and retains the
verification that no other error type is thrown.
The "with" statement is its own token (TokenType::With), and thus would
fail to parse as an identifier. We've already asserted that the token
we are parsing is "with" or "assert", so just consume it.
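The relevant syntax, where `with` must be accepted even though it is tokenized as TokenType::With:
```js
import config from "./config.json" with { type: "json" };
```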