Before this change, each built-in iterator object had a boolean
`m_next_method_was_redefined`. If user code later changed the iterator's
prototype (e.g. via `Object.setPrototypeOf()`), we still believed the
built-in fast path was safe and skipped the user-supplied override,
producing wrong results.
With this change,
`BuiltinIterator::as_builtin_iterator_if_next_is_not_redefined()` looks
up the current `next` property and verifies that it is still the
built-in native function.
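A minimal repro sketch (the driver code here is illustrative, not taken
from the patch):
```js
const iterator = [1, 2, 3][Symbol.iterator]();
let calls = 0;
// Swap the built-in prototype for one with a user-supplied next().
Object.setPrototypeOf(iterator, {
    next: () => ({ value: 42, done: calls++ > 0 }),
});
// The override must be honored; the old boolean flag never noticed the
// prototype swap, so the fast path kept yielding 1, 2, 3.
for (const x of { [Symbol.iterator]: () => iterator })
    console.log(x); // 42
```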
This mirrors the existing caching logic for int32 constants.
Avoids duplication of string constants in `m_constants`, which could
result in stack overflows for large scripts with a lot of similar
strings.
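A hedged sketch of the kind of script that benefits; with the cache,
repeated literals map to a single constant-table entry:
```js
// Generated or data-heavy scripts often repeat the same literal many times.
const a = "status";
const b = "status";
const c = "status"; // all three can share one string constant after this change
```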
This commit adds the minimal export macros needed to run js.exe on
Windows. A follow-up commit is planned to move to explicit exports
entirely.
A static_assert for the size of a struct is also ifdef'd out, as the
semantics around object layout and inheritance are different in the MSVC
ABI, and the struct IteratorRecord ends up being 40 bytes instead of 32.
Fixes a bug that reproduces with the following steps:
1. Create an object with a getter for property "a" in its prototype,
where the getter adds an "a" property to the object itself.
2. Call the "a" getter in a loop for the first time. This triggers
caching of metadata indicating that the "a" property is located in
the prototype chain.
3. Call the "a" getter in a loop for the second time. Oops, the cache
says the getter is in the prototype chain, but the object now
also has its own "a" property that was added by the first getter
call.
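A minimal JS repro sketch of those steps (the names are illustrative):
```js
const prototype = {
    get a() {
        // Step 1: the getter installs an own "a" property on the receiver.
        Object.defineProperty(this, "a", { value: "own" });
        return "from prototype";
    },
};
const object = Object.create(prototype);
const seen = [];
for (let i = 0; i < 2; i++)
    seen.push(object.a);
// Step 2 caches "a" as living in the prototype chain; step 3 must still
// see the new own property: ["from prototype", "own"], not two prototype hits.
console.log(seen);
```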
- Avoids unnecessary conversions between StringOrSymbol and PropertyKey
on the hot path of property access.
- Simplifies the code by removing StringOrSymbol and using PropertyKey
directly. There was no reason to have a separate StringOrSymbol type
representing the same data as PropertyKey, just with the index key
stored as a string.
PropertyKey has been updated to use a tagged pointer instead of a
Variant, so it still occupies 8 bytes, same as StringOrSymbol.
12% improvement on JetStream/gcc-loops.cpp.js
12% improvement on MicroBench/object-assign.js
7% improvement on MicroBench/object-keys.js
By doing that we avoid lots of `PropertyKey` -> `Value` -> `PropertyKey`
round-trips, which are quite expensive because of the underlying
`FlyString` -> `PrimitiveString` -> `FlyString` conversions.
10% improvement on MicroBench/object-keys.js
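A sketch in the spirit of the benchmark this speeds up (the actual
MicroBench source may differ):
```js
const object = { alpha: 1, beta: 2, gamma: 3 };
let keys;
for (let i = 0; i < 1_000_000; i++)
    keys = Object.keys(object); // each key crosses the PropertyKey/Value boundary
```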
This commit adds a fast path for putting values into a TypedArray of an
integer type, when the value being put in is a double. This leads to a
6% speedup on JetStream/gcc-loops.js.
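A sketch of the pattern the fast path targets (illustrative only):
```js
const ints = new Int32Array(1024);
for (let i = 0; i < ints.length; i++)
    ints[i] = i * 0.5; // double on the right-hand side, integer storage on the left
```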
The definitions of this function in `SwitchStatement` and
`LabelledStatement` don't override anything. Also, we can make
`IterationStatement` abstract; there is no need for a fallback
error-generating stub implementation of this method.
81b6a11 regressed correctness by always bypassing the `next()` method
resolution for built-in iterators, causing incorrect behavior when
`next()` was redefined on built-in prototypes. This change fixes the
issue by storing a flag on built-in prototypes indicating whether
`next()` has ever been redefined.
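A sketch of the behavior that regressed (illustrative code, not from the
test suite):
```js
const arrayIteratorPrototype = Object.getPrototypeOf([][Symbol.iterator]());
const originalNext = arrayIteratorPrototype.next;
// Redefine next() on the built-in prototype.
arrayIteratorPrototype.next = function () {
    const result = originalNext.call(this);
    if (!result.done)
        result.value *= 2;
    return result;
};
for (const x of [1, 2, 3])
    console.log(x); // must print 2, 4, 6; the redefined next() has to run
```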
https://tc39.es/ecma262/#sec-jobs specifies that we should only be
running queued promise jobs and host-defined cleanup when the
execution context stack is empty. It is asserted to _not_ be empty on
the line above, so remove it.
No impact on test262 or our test suites; Interpreter::run_executable
is already (incorrectly) performing this unconditionally.
run_promise_jobs also happens to do nothing when LibJS is embedded
into LibWeb.
Instead of monomorphic (1 shape), GetById inline caches are now
polymorphic (4 shapes).
This improves inline cache hit rates greatly on most web JavaScript.
For example, Speedometer 2.1 sees 88% -> 97% cache hit rate improvement.
1.71x speedup on MicroBench/pic-get-own.js
1.82x speedup on MicroBench/pic-get-pchain.js
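A sketch of an access site that benefits; with four shapes in play, a
monomorphic cache keeps missing while a 4-entry polymorphic cache hits
every time:
```js
function getX(o) {
    return o.x; // one GetById site, fed objects of four different shapes
}
const objects = [{ x: 1 }, { a: 0, x: 2 }, { b: 0, x: 3 }, { c: 0, x: 4 }];
let sum = 0;
for (let i = 0; i < 1_000_000; i++)
    sum += getX(objects[i % 4]);
```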
This is *extremely* common on the web, but barely shows up at all in
JavaScript benchmarks.
A typical example is setting Element.innerHTML on an HTMLDivElement.
HTMLDivElement doesn't have innerHTML, so the engine has to travel up
the prototype chain until it finds it.
Before this change, we didn't cache this at all, so we had to walk the
prototype chain every time a setter like this was used.
We now use the same mechanism we already had for GetById and cache
PutById setter accesses in the prototype chain as well.
1.74x speedup on MicroBench/setter-in-prototype-chain.js
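A pure-JS sketch of the same shape of workload (mirroring the innerHTML
case, not taken from the benchmark):
```js
class Element {
    set innerHTML(value) {
        this._html = value; // the setter lives on the prototype, not the receiver
    }
}
class HTMLDivElement extends Element {}
const div = new HTMLDivElement();
for (let i = 0; i < 1_000_000; i++)
    div.innerHTML = "<b>hi</b>"; // PutById must walk up to Element's setter
```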
`var` bindings are never in the temporal dead zone (TDZ), and so we
know accessing them will not throw.
We now take advantage of this by having a specialized environment
binding value getter that doesn't check for exceptional cases.
1.08x speedup on JetStream.
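A sketch of why the check is unnecessary for `var` (illustrative):
```js
function f() {
    console.log(v); // a var read can never hit the TDZ: prints undefined
    var v = 1;

    // A let binding still needs the check; uncommenting the next line
    // would throw a ReferenceError because `l` is in its TDZ here.
    // console.log(l);
    let l = 2;
}
f();
```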
Before this change, setting a global would end up as SetLexicalBinding.
That instruction always failed to cache the access if the global was a
property of the global object.
1.14x speedup on Octane/earley-boyer.js
2.04x speedup on MicroBench/for-of.js
Note that MicroBench/for-of.js was more of a "set global" benchmark
before this. After this change, it's actually a for..of benchmark. :^)
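A sketch of the pattern that previously hit the uncached path
(illustrative):
```js
// `total` ends up as a property of the global object.
var total = 0;
function accumulate() {
    for (let i = 0; i < 1_000_000; i++)
        total += i; // setting a global; now a cacheable access
}
accumulate();
```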
We were spending a lot of time removing each property name from the
iterator's underlying HashMap while iterating over it. This wasn't
actually necessary, so let's stop doing it and instead just iterate
over the property names with a stored HashTable iterator.
1.10x speedup on MicroBench/for-in-indexed-properties.js
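A sketch in the spirit of the for-in benchmark these commits keep
improving (the actual MicroBench source may differ):
```js
const object = {};
for (let i = 0; i < 100_000; i++)
    object[i] = i; // indexed properties
let count = 0;
for (const key in object)
    count++;
```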
Before this change, we would call [[OwnPropertyKeys]] on the target
objects, then convert the returned keys from Value into PropertyKey.
Then, when actually iterating, we'd convert them back into Value again.
This was particularly costly for numeric property keys, since we had
to go through string-from-number construction.
Now, we simply keep the original values returned by [[OwnPropertyKeys]]
around and use them for the enumeration.
1.09x speedup on MicroBench/for-in-indexed-properties.js
1.01x speedup on MicroBench/for-in-named-properties.js
I was investigating an optimization in this area, and while it
didn't seem to have a noticeable improvement, it still seems
useful to apply this change.
Even though this code was already optimized to re-use a single result
object, returning { value, done } directly in output parameters still
provides a substantial speedup.
1.21x speedup on MicroBench/for-in-indexed-properties.js
Apply a little ensure_capacity() to avoid excessive rehashing of the
property key table when enumerating a large number of properties.
1.23x speedup on MicroBench/for-in-indexed-properties.js
...by avoiding `{ value, done }` iterator result value allocation. This
change applies the same optimization 81b6a11 added for `for..in` and
`for..of`.
Makes the following micro-benchmark go 22% faster on my computer:
```js
function f() {
    const arr = [];
    for (let i = 0; i < 10_000_000; i++) {
        arr.push([i]);
    }
    let sum = 0;
    for (let [i] of arr) {
        sum += i;
    }
}
f();
```
Introduce a special instruction for `for..of` and `for..in` loops that
skips the `{ value, done }` result object allocation if the iterator is
built-in (array, map, set, string). This reduces GC pressure
significantly and avoids extracting the `value` and `done` properties.
This change makes this micro benchmark 48% faster on my computer:
```js
const arr = new Array(10_000_000);
let counter = 0;
for (let _ of arr) {
    counter++;
}
```
This reverts commit 36bb2824a6.
Although this was faster on my M3 MacBook Pro, other Apple machines
disagree, including our benchmark runner. So let's revert it.
This is a simple trick to generate better native code for access to
registers, locals, and constants. Before this change, each access had
to first dereference the member pointer in Interpreter, and then get to
the values. Now we always have a pointer directly to the values on hand.
Here's how it looks:
```cpp
class StackFrame {
public:
    Value get(Operand) const;
    void set(Operand, Value);

private:
    Value m_values[];
};
```
And we just place one of these as a window on top of the execution
context's array of values (registers, locals, and constants).
Getting the running_execution_context() already verifies that the
execution context stack is non-empty; we don't need to check that
separately here as well.
The old accumulator register is really only used to pass the end
completion to the caller of run_bytecode() nowadays. As such, we don't
need to cache a pointer to it for fast access. One less thing to do
on run_bytecode() entry.
This way it's always automatically correct, and we don't have to
manually flush it in push_execution_context().
~7% speedup on the MicroBench/call* tests :^)
Instead of letting every [[Call]] implementation allocate an
ExecutionContext, we now make that a responsibility of the caller.
The main point of this exercise is to allow the Call instruction
to write function arguments directly into the callee ExecutionContext
instead of copying them later.
This makes function calls significantly faster:
- 10-20% faster on micro-benchmarks (depending on argument count)
- 4% speedup on Kraken
- 2% speedup on Octane
- 5% speedup on JetStream
This allows us to get rid of instructions that move arguments to locals,
and to allocate a smaller JS::Value vector in ExecutionContext by
reusing slots that were already allocated for arguments.
With this change, for the following function:
```js
function f(x, y) {
    return x + y;
}
```
we now produce the following bytecode:
```
[ 0] 0: Add dst:reg6, lhs:arg0, rhs:arg1
[ 10] Return value:reg6
```
instead of:
```
[ 0] 0: GetArgument 0, dst:x~1
[ 10] GetArgument 1, dst:y~0
[ 20] Add dst:reg6, lhs:x~1, rhs:y~0
[ 30] Return value:reg6
```
By doing that we avoid a separate allocation for each such vector,
which was really expensive on JS-heavy websites. For example, this
change helps get EC allocation down from ~17% to ~2% on Google Maps.
This comes at the cost of adding extra complexity to the custom
execution context allocator, because an EC no longer has a fixed size
and we need to maintain a list of buckets.