...of Array. If the array has simple storage, which implies that the
attributes of all indexed properties are default, and the newly added
property also has default attributes, we can take a fast path and skip
lots of checks that happen in `Object::internal_define_own_property()`.
If the array has packed indexed property storage without holes, we can
check whether an indexed property is present simply by checking whether
its index is less than the array's length.
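For illustration, a sketch of the kinds of arrays these conditions
distinguish (based on the description above, not taken from the change
itself):
```js
const packed = [1, 2, 3]; // packed storage, no holes: eligible for the fast path
const sparse = [1, , 3];  // hole at index 1: storage is not packed
const tricky = [1, 2, 3];
// Non-default attributes on an indexed property rule out simple storage:
Object.defineProperty(tricky, 0, { value: 1, writable: false });
```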
Makes the following program go 1.1x faster:
```js
function f() {
    let array = [];
    for (let i = 0; i < 3_000; i++) {
        array.push(i);
    }
    for (let i = 0; i < 10_000; i++) {
        array.map(x => x * 2);
    }
}
f();
```
This is required to cover the case where new empty elements are
introduced by assigning to an element with index > length, like:
```js
var x = [];
x[0] = 1;
x[2] = 2;
```
...by avoiding `CreateListFromArrayLike` in cases where we can directly
use the elements of the underlying object's indexed property storage.
Makes this program go 2.1x faster:
```js
function target(a, b, c) {
    return a + b + c;
}
const args = [1, 2, 3];
let result = 0;
(function() {
    for (let i = 0; i < 10_000_000; i++) {
        result += target.apply(null, args);
    }
})();
```
Before this change, each built-in iterator object had a boolean
`m_next_method_was_redefined`. If user code later changed the iterator's
prototype (e.g. via `Object.setPrototypeOf()`), we still believed the
built-in fast path was safe and skipped the user-supplied override,
producing wrong results.
With this change
`BuiltinIterator::as_builtin_iterator_if_next_is_not_redefined()` looks
up the current `next` property and verifies that it is still the
built-in native function.
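A minimal sketch (hypothetical, not taken from the commit) of the kind
of case this catches:
```js
const iterator = [1, 2, 3][Symbol.iterator]();
// Swap the prototype so `next` resolves to a user-supplied override.
Object.setPrototypeOf(iterator, {
    next() {
        return { value: undefined, done: true };
    },
});
// Engine-driven iteration must honor the override, not the built-in fast path:
console.log([...{ [Symbol.iterator]: () => iterator }]); // []
```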
This commit adds the minimal export macros needed to run js.exe on
Windows. A follow-up commit is planned to move to explicit export
entirely.
A `static_assert` for the size of a struct is also `#ifdef`'ed out, as
the semantics around object layout and inheritance are different in the
MSVC ABI, and the struct IteratorRecord ends up being 40 bytes instead
of 32.
...when Array.prototype and Object.prototype are intact.
If `internal_set()` is called on an array exotic object with a numeric
PropertyKey, and:
- the prototype chain has not been modified (i.e., there are no getters
or setters for indexed properties), and
- the array is not the target of a Proxy object,
then we can directly store the value in the receiver's indexed
properties, without checking whether it already exists somewhere in the
prototype chain.
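As a hypothetical illustration of when the fast path must be skipped:
an accessor for an indexed property anywhere on the prototype chain has
to be consulted.
```js
// Installing a setter for index 0 on Array.prototype modifies the
// prototype chain, so `a[0] = 1` can no longer be stored directly.
Object.defineProperty(Array.prototype, 0, {
    set(value) {
        console.log("intercepted", value);
    },
});
const a = [];
a[0] = 1; // logs "intercepted 1" instead of creating an own property
```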
1.7x improvement on the following program:
```js
function f() {
    let a = [];
    let i = 0;
    while (i < 10_000_000) {
        a.push(i);
        i++;
    }
}
f();
```
Replace the implementation of the math in `UnsignedBigInteger`
and `SignedBigInteger` with LibTomMath. This gives us benefits in terms
of less code to maintain, correctness, and speed.
These changes also remove now-unused methods and improve the error
propagation for functions allocating lots of memory. Additionally, the
new implementation is always trimmed and won't have dangling zeros when
exported.
Having it as a method instead of a free function is necessary for the
next commits and generally allows for optimizations that require deeper
access into `UnsignedBigInteger` / `SignedBigInteger`.
Also restrict the exponent to 32 bits to avoid huge memory allocations.
This is functionally the same, since caller_context is the topmost
execution context on the stack, but it makes it clearer that we are
directly inheriting the strict mode from the caller context when
pushing the next context onto the stack.
- Avoids unnecessary conversions between StringOrSymbol and PropertyKey
on the hot path of property access.
- Simplifies the code by removing StringOrSymbol and using PropertyKey
directly. There was no reason to have a separate StringOrSymbol type
representing the same data as PropertyKey, just with the index key
stored as a string.
PropertyKey has been updated to use a tagged pointer instead of a
Variant, so it still occupies 8 bytes, same as StringOrSymbol.
12% improvement on JetStream/gcc-loops.cpp.js
12% improvement on MicroBench/object-assign.js
7% improvement on MicroBench/object-keys.js
By doing that, we avoid lots of `PropertyKey` -> `Value` -> `PropertyKey`
transforms, which are quite expensive because of the underlying
`FlyString` -> `PrimitiveString` -> `FlyString` conversions.
10% improvement on MicroBench/object-keys.js
This commit adds a convenience method to secure random for initializing
single types. It changes the random number generator behind
`Math.random()` to use newer constants from its author, and initializes
it with a higher-quality seed.
We already have a fast path for built-in iterators that skips the
`next()` lookup and the iteration result object allocation, applied for
`for..of` and `for..in` loops. This change extends it to
`iterator_step()` to cover `Array.from()`, `[...arr]`, and many other
cases.
Makes the following function go 2.35x faster on my computer:
```js
(function f() {
    let arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
    for (let i = 0; i < 1000000; i++) {
        let [a, ...rest] = arr;
    }
})();
```
81b6a11 regressed correctness by always bypassing the `next()` method
resolution for built-in iterators, causing incorrect behavior when
`next()` was redefined on built-in prototypes. This change fixes the
issue by storing a flag on built-in prototypes indicating whether
`next()` has ever been redefined.
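For example (illustrative, not from the commit), a redefined `next()`
on the array iterator prototype has to be respected:
```js
const ArrayIteratorPrototype = Object.getPrototypeOf([][Symbol.iterator]());
const originalNext = ArrayIteratorPrototype.next;
ArrayIteratorPrototype.next = function () {
    const result = originalNext.call(this);
    if (!result.done) result.value *= 10;
    return result;
};
console.log([...[1, 2, 3]]); // [10, 20, 30], not [1, 2, 3]
```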
By following the spec to the letter, our mapped arguments objects ended
up with many extra GC allocations:
- 1 extra Object for the internal [[ParameterMap]].
- 2 extra NativeFunctions for each mapped parameter accessor.
- 1 extra Accessor to hold the aforementioned NativeFunctions.
This patch removes all those allocations and lets ArgumentsObject model
the desired behavior in custom C++ instead of using script primitives.
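To recap the behavior being modeled (a standard example, not specific
to this patch): in sloppy-mode functions with simple parameter lists,
the arguments object is mapped, so writes through it alias the
parameters.
```js
function f(a) {
    arguments[0] = 99; // a write through the mapped arguments object...
    return a;          // ...is visible through the parameter binding
}
console.log(f(1)); // 99
```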
1.06x speedup on Speedometer's TodoMVC-jQuery.
Saves us two NativeFunction allocations per `await`.
With this change, the following function goes 80% faster on my computer:
```js
(async () => {
    const resolved = Promise.resolve();
    for (let i = 0; i < 5_000_000; i++) {
        await resolved;
    }
})();
```
- Create less GC pressure by making each `await` in an async function
  skip the iteration result object allocation.
- Skip uncached `Object::get()` calls to extract `value` and `done` from
  the iteration result object.
With this change, the following function goes 30% faster on my computer:
```js
(async () => {
    const resolved = Promise.resolve();
    for (let i = 0; i < 5_000_000; i++) {
        await resolved;
    }
})();
```
Instead of creating a second ExecutionContext in BoundFunction.[[Call]],
we now implement BoundFunction::get_stack_frame_size() and combine
information from the target + the bound arguments list.
This allows BoundFunction.[[Call]] to reuse the already-established
ExecutionContext for the callee.
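A sketch of the call shape this affects (hypothetical, mirroring the
benchmark's name):
```js
function target(a, b, c, d) {
    return a + b + c + d;
}
// Two arguments bound up front, two supplied at the call site; the
// callee's frame is sized from the target plus the bound arguments list.
const bound = target.bind(null, 1, 2);
bound(3, 4); // 10
```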
1.20x speedup on MicroBench/bound-call-04-args.js
This is *extremely* common on the web, but barely shows up at all in
JavaScript benchmarks.
A typical example is setting Element.innerHTML on an HTMLDivElement.
HTMLDivElement doesn't have innerHTML, so it has to travel up the
prototype chain until it finds it.
Before this change, we didn't cache this at all, so we had to travel
the prototype chain every time a setter like this was used.
We now use the same mechanism we already had for GetById and cache
PutById setter accesses in the prototype chain as well.
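A pure-JavaScript sketch of the same shape (hypothetical; the real
motivation is DOM setters like `innerHTML`):
```js
class Base {
    set title(value) {
        this._title = value;
    }
}
class Derived extends Base {}
const d = new Derived();
for (let i = 0; i < 3; i++) {
    // The setter lives on Base.prototype, one level up the chain; the
    // PutById cache now remembers where it was found instead of
    // re-walking the prototype chain on every store.
    d.title = `hello ${i}`;
}
```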
1.74x speedup on MicroBench/setter-in-prototype-chain.js
`var` bindings are never in the temporal dead zone (TDZ), and so we
know accessing them will not throw.
We now take advantage of this by having a specialized environment
binding value getter that doesn't check for exceptional cases.
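For reference, the difference this exploits:
```js
function f() {
    console.log(v); // undefined: `var` bindings are hoisted and initialized, never in the TDZ
    var v = 1;
    // console.log(l); // would throw ReferenceError: `l` is in the TDZ until its declaration
    let l = 2;
    return v + l;
}
f();
```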
1.08x speedup on JetStream.
We were previously unable to use simdutf for base64 decoding operations
other than "loose". Upstream has added support for the "strict" and
"stop-before-partial" operations, so let's make use of them!
I was investigating an optimization in this area, and while it
didn't seem to have a noticeable improvement, it still seems
useful to apply this change.