81b6a11 regressed correctness by always bypassing the `next()` method
resolution for built-in iterators, causing incorrect behavior when
`next()` was redefined on built-in prototypes. This change fixes the
issue by storing a flag on built-in prototypes indicating whether
`next()` has ever been redefined.
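For example (a minimal illustration, not taken from the change itself), an override like this must be observed:

```js
// Redefine next() on %ArrayIteratorPrototype%; iteration must now call
// the override instead of taking the built-in fast path.
const arrayIteratorProto = Object.getPrototypeOf([][Symbol.iterator]());
const originalNext = arrayIteratorProto.next;
arrayIteratorProto.next = function () {
    const result = originalNext.call(this);
    if (!result.done)
        result.value *= 2;
    return result;
};
console.log([...[1, 2, 3]]); // [2, 4, 6], not [1, 2, 3]
```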
https://tc39.es/ecma262/#sec-jobs specifies that we should only be
running queued promise jobs and host-defined cleanup when the
execution context stack is empty. The stack is asserted to _not_ be
empty on the line above, so remove the call.
No impact on test262 or our test suites; Interpreter::run_executable
is already (incorrectly) performing this unconditionally.
run_promise_jobs also happens to do nothing when LibJS is embedded
into LibWeb.
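For reference, the ordering the spec mandates (a minimal example):

```js
// Promise jobs must only run once the execution context stack is
// empty, i.e. after all currently-running script code has returned.
Promise.resolve().then(() => console.log("job"));
console.log("script");
// Output: "script", then "job"
```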
Instead of monomorphic (1 shape), GetById inline caches are now
polymorphic (4 shapes).
This greatly improves inline cache hit rates on most web JavaScript.
For example, Speedometer 2.1 sees its cache hit rate improve from 88% to 97%.
1.71x speedup on MicroBench/pic-get-own.js
1.82x speedup on MicroBench/pic-get-pchain.js
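To illustrate (hypothetical example), a single access site that sees several shapes can now stay cached:

```js
// One GetById cache site ("o.x"), four different object shapes.
// All four now fit in the polymorphic cache; previously only the
// first shape seen would hit.
function getX(o) {
    return o.x;
}
getX({ x: 1 });
getX({ x: 1, y: 2 });
getX({ a: 0, x: 1 });
getX({ a: 0, b: 0, x: 1 });
```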
Setter accesses through the prototype chain are *extremely* common on
the web, but barely show up at all in JavaScript benchmarks.
A typical example is setting Element.innerHTML on an HTMLDivElement.
HTMLDivElement doesn't have innerHTML as an own property, so the lookup
has to travel up the prototype chain until it finds the setter.
Before this change, we didn't cache this at all, so we had to travel
the prototype chain every time a setter like this was used.
We now use the same mechanism we already had for GetById and cache
PutById setter accesses in the prototype chain as well.
1.74x speedup on MicroBench/setter-in-prototype-chain.js
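The cached pattern, reduced to a sketch (hypothetical example along the lines of the innerHTML case):

```js
// The setter is not an own property of the receiver, so PutById has
// to walk the prototype chain to find it; that walk is now cached.
class Base {
    set value(v) {
        this._v = v;
    }
}
class Derived extends Base {}
const d = new Derived();
d.value = 42; // setter found on Base.prototype
```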
`var` bindings are never in the temporal dead zone (TDZ), and so we
know accessing them will not throw.
We now take advantage of this by having a specialized environment
binding value getter that doesn't check for exceptional cases.
1.08x speedup on JetStream.
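For comparison (minimal example), `let`/`const` reads can throw inside the TDZ, while `var` reads never can:

```js
try {
    tdz; // `let` binding read before initialization: throws
    let tdz = 1;
} catch (e) {
    console.log(e instanceof ReferenceError); // true
}

hoisted; // `var` binding: initialized to undefined up front, never throws
var hoisted = 1;
```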
Before this change, setting a global would end up as SetLexicalBinding.
That instruction always failed to cache the access if the global was a
property of the global object.
1.14x speedup on Octane/earley-boyer.js
2.04x speedup on MicroBench/for-of.js
Note that MicroBench/for-of.js was more of a "set global" benchmark
before this. After this change, it's actually a for..of benchmark. :^)
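The pattern in question is an ordinary store to a global that lives as a property on the global object (hypothetical example):

```js
// `count` is a property of the global object; stores like the one in
// bump() previously took the uncached SetLexicalBinding path.
var count = 0;
function bump() {
    count += 1;
}
for (let i = 0; i < 1_000; i++)
    bump();
```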
We were spending a lot of time removing each property name from the
iterator's underlying HashMap while iterating over it. This wasn't
actually necessary, so let's stop doing it and instead just iterate
over the property names with a stored HashTable iterator.
1.10x speedup on MicroBench/for-in-indexed-properties.js
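For reference, the observable behavior the iterator still has to uphold (a minimal example): properties deleted before being visited must not be visited, and that doesn't require mutating the set of names.

```js
const o = { a: 1, b: 2, c: 3 };
const visited = [];
for (const k in o) {
    visited.push(k);
    delete o.c; // not yet visited, so it must be skipped
}
console.log(visited); // ["a", "b"]
```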
Before this change, we would call [[OwnPropertyKeys]] on the target
objects, then convert the returned keys from Value into PropertyKey.
Then, when actually iterating, we'd convert them back into Value again.
This was particularly costly for numeric property keys, since we had
to go through string-from-number construction.
Now, we simply keep the original values returned by [[OwnPropertyKeys]]
around and use them for the enumeration.
1.09x speedup on MicroBench/for-in-indexed-properties.js
1.01x speedup on MicroBench/for-in-named-properties.js
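For context, for..in keys are always strings, which is why indexed properties previously paid for string-from-number construction (minimal example):

```js
const arr = [10, 20, 30];
for (const k in arr) {
    console.log(typeof k, k); // "string" "0", then "1", then "2"
}
```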
I was investigating an optimization in this area, and while it
didn't seem to have a noticeable improvement, this change still
seems useful to apply on its own.
Even though this code was already optimized to re-use a single result
object, returning { value, done } directly in output parameters still
provides a substantial speedup.
1.21x speedup on MicroBench/for-in-indexed-properties.js
Apply a little ensure_capacity() to avoid excessive rehashing of the
property key table when enumerating a large number of properties.
1.23x speedup on MicroBench/for-in-indexed-properties.js
...by avoiding `{ value, done }` iterator result value allocation. This
change applies the same optimization 81b6a11 added for `for..in` and
`for..of`.
Makes the following micro-benchmark go 22% faster on my computer:
```js
function f() {
    const arr = [];
    for (let i = 0; i < 10_000_000; i++) {
        arr.push([i]);
    }
    let sum = 0;
    for (let [i] of arr) {
        sum += i;
    }
}
f();
```
Introduce a special instruction for `for..of` and `for..in` loops that
skips the `{ value, done }` result object allocation if the iterator is
builtin (array, map, set, string). This reduces GC pressure
significantly and avoids extracting the `value` and `done` properties.
This change makes this micro benchmark 48% faster on my computer:
```js
const arr = new Array(10_000_000);
let counter = 0;
for (let _ of arr) {
    counter++;
}
```
This reverts commit 36bb2824a6.
Although this was faster on my M3 MacBook Pro, other Apple machines
disagree, including our benchmark runner. So let's revert it.
This is a simple trick to generate better native code for access to
registers, locals, and constants. Before this change, each access had
to first dereference the member pointer in Interpreter, and then get to
the values. Now we always have a pointer directly to the values on hand.
Here's how it looks:
```cpp
class StackFrame {
public:
    Value get(Operand) const;
    void set(Operand, Value);

private:
    Value m_values[];
};
```
And we just place one of these as a window on top of the execution
context's array of values (registers, locals, and constants).
Getting the running_execution_context() already verifies that the
execution context stack is non-empty, so we don't need to do it
separately here as well.
The old accumulator register is really only used to pass the end
completion to the caller of run_bytecode() nowadays. As such, we don't
need to cache a pointer to it for fast access. One less thing to do
on run_bytecode() entry.
This way it's always automatically correct, and we don't have to
manually flush it in push_execution_context().
~7% speedup on the MicroBench/call* tests :^)
Instead of letting every [[Call]] implementation allocate an
ExecutionContext, we now make that a responsibility of the caller.
The main point of this exercise is to allow the Call instruction
to write function arguments directly into the callee ExecutionContext
instead of copying them later.
This makes function calls significantly faster:
- 10-20% faster on micro-benchmarks (depending on argument count)
- 4% speedup on Kraken
- 2% speedup on Octane
- 5% speedup on JetStream
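The kind of call-heavy code that benefits (hypothetical micro-benchmark, not the actual test source):

```js
// Previously, arguments were copied into the callee ExecutionContext
// after [[Call]] allocated it; the Call instruction now writes them
// there directly.
function add3(a, b, c) {
    return a + b + c;
}
let sum = 0;
for (let i = 0; i < 10_000_000; i++)
    sum += add3(i, i, i);
```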
By doing that, we avoid a separate allocation for each such vector,
which was really expensive on JS-heavy websites. For example, this
change helps bring EC allocation down from ~17% to ~2% on Google Maps.
This comes at the cost of adding extra complexity to the custom
execution context allocator, because ECs no longer have a fixed size
and we need to maintain a list of buckets.
This is better because:
- Better data locality
- The vector for registers+constants+locals+arguments is allocated in
  one go instead of as two separate vectors
This allows us to remove the BoundFunction::m_name field, which we
were initializing with a formatted FlyString on every function binding,
despite never using it for anything.
Before this change, we were inlining this function after every
handler for instructions that could throw.
Forcing it out-of-line shrinks the main bytecode interpreter by 15%
and yields a decent 2.5% speedup on JetStream/gcc-loops.cpp.js
We can use the index's invalid state to signal an empty optional.
This makes Optional<StringTableIndex> 4 bytes instead of 8,
shrinking every bytecode instruction that uses these.
The special empty value (that we use for array holes, Optional<Value>
when empty, and a few other placeholder/sentinel tasks) still
exists, but you now create one via JS::js_special_empty_value() and
check for it with Value::is_special_empty_value().
The main idea here is to make it very unlikely to accidentally create an
unexpected special empty value.