When the cursor was positioned at the end of text,
attempting to move it left (using the left arrow key)
would fail because align_boundary() was rejecting
the end-of-text position as a valid boundary.
Our existing implementation of stream piping was extremely ad-hoc. It
did nothing to handle closed/errored streams, and did not read from or
write to streams in the way required by the spec.
This new implementation uses a custom JS::Cell to drive the read/write
loop.
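For illustration only, here is a minimal sketch of the read/write pump such a
cell drives. The Source/Sink interfaces and the synchronous loop are
hypothetical stand-ins; the real implementation is spec-compliant,
GC-allocated, and driven by promise reactions rather than a blocking loop.

#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical reader/writer interfaces standing in for the stream machinery.
struct Source {
    bool is_errored() const;
    std::optional<std::vector<uint8_t>> read_chunk(); // empty optional == end of stream
};
struct Sink {
    bool is_closed() const;
    bool is_errored() const;
    void write_chunk(std::vector<uint8_t> const&);
};

// Pump chunks from source to sink, stopping on closure or error instead of
// ignoring those states the way the old ad-hoc piping code did.
void pipe(Source& source, Sink& sink)
{
    for (;;) {
        if (source.is_errored() || sink.is_errored() || sink.is_closed())
            break;
        auto chunk = source.read_chunk();
        if (!chunk.has_value())
            break;
        sink.write_chunk(*chunk);
    }
}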
This will be needed by Streams. To support this, we now store callbacks
in a hash table, keyed by an ID. Callers may use that ID to remove the
callback at a later point.
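A minimal sketch of the pattern, with illustrative names and std types rather
than the actual classes involved: each callback gets a monotonically
increasing ID on registration, and that ID doubles as the removal token.

#include <cstdint>
#include <functional>
#include <unordered_map>

class CallbackRegistry {
public:
    using Callback = std::function<void()>;

    // Store the callback and hand back an ID the caller can use to remove it.
    uint64_t add(Callback callback)
    {
        auto id = m_next_id++;
        m_callbacks.emplace(id, std::move(callback));
        return id;
    }

    // Remove a previously registered callback by its ID.
    void remove(uint64_t id) { m_callbacks.erase(id); }

    void invoke_all()
    {
        for (auto& [id, callback] : m_callbacks)
            callback();
    }

private:
    uint64_t m_next_id { 1 };
    std::unordered_map<uint64_t, Callback> m_callbacks;
};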
While debugging a spec-compliant implementation of ReadableStreamPipeTo,
I spent a lot of time inspecting promise internals. This is much less
noisy if we halve the number of temporary promises.
The high water mark (HWM) is received from user JS as a double and is only used as a double in
all subsequent calculations. This bug would cause UBSAN errors in an
upcoming imported WPT test, which passes Infinity as the HWM.
Note there is an equivalent HWM for ReadableStream, which already stores
the value as a double.
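A standalone illustration of the failure mode, assuming the HWM was previously
narrowed to an integer type somewhere along the way: converting an out-of-range
double such as Infinity to an integral type is undefined behavior in C++, which
is what UBSAN reports; keeping the value as a double sidesteps the conversion.

#include <cstddef>
#include <limits>

int main()
{
    // Infinity can arrive from user JS, e.g. { highWaterMark: Infinity }.
    double hwm = std::numeric_limits<double>::infinity();

    // Undefined behavior (flagged by UBSAN): Infinity is not representable
    // as any integer type, so this narrowing must never happen.
    // size_t narrowed = static_cast<size_t>(hwm);

    // Fine: keep the HWM as a double for all subsequent calculations.
    double stored_hwm = hwm;
    return stored_hwm > 0.0 ? 0 : 1;
}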
When a message is posted to multiple ports at once, the order in which
the callbacks for these messages are invoked is non-deterministic.
To account for this, the test has been rewritten to accumulate logs
for each port separately, and then print them grouped by port.
This fixes a really nasty EventLoop bug which I debugged for 2 weeks.
The spin_until([&]{return completed_tasks == total_tasks;}) in
TraversableNavigable::check_if_unloading_is_canceled spins forever.
Cause of the bug:
check_if_unloading_is_canceled is called deferred (via deferred_invoke; see below)
check_if_unloading_is_canceled creates a task:
queue_global_task(..., [&] {
    ...
    completed_tasks++;
}));
This task is never executed.
queue_global_task calls TaskQueue::add:

void TaskQueue::add(task)
{
    m_tasks.append(task);
    m_event_loop->schedule();
}

void HTML::EventLoop::schedule()
{
    if (!m_system_event_loop_timer)
        m_system_event_loop_timer = Timer::create_single_shot(
            0, // delay
            [&] { process(); });

    if (!m_system_event_loop_timer->is_active())
        m_system_event_loop_timer->restart();
}
EventLoop::process executes one task from the task queue and calls
schedule again if there are more tasks.
So task processing relies on one single-shot zero-delay timer,
m_system_event_loop_timer.
Timers and other notification events are handled by Core::EventLoop
and Core::ThreadEventQueue; these are different from the HTML::EventLoop
and HTML::TaskQueue mentioned above.
check_if_unloading_is_canceled is called using the deferred_invoke
mechanism, which is different from m_system_event_loop_timer;
see Navigable::navigate and Core::EventLoop::deferred_invoke.
The core of the problem is that Core::EventLoop::pump is called again
(from spin_until) after the timer has fired but before its handler is executed.
In ThreadEventQueue::process, events are moved into a local variable before
being executed. The first of those events is check_if_unloading_is_canceled.
One of the remaining events is Web::HTML::EventLoop::process, scheduled in
EventLoop::schedule using m_system_event_loop_timer.
When check_if_unloading_is_canceled calls queue_global_task, its
m_system_event_loop_timer is still active because Timer::timer_event
has not yet been called, so the timer is not restarted.
But Timer::timer_event (and hence EventLoop::process) will never execute
because check_if_unloading_is_canceled calls spin_until after
queue_global_task, and EventLoop::process is no longer in
event_queue.m_private->queued_events.
By making the single-shot timer manual-reset, we allow it to fire
several times. So when spin_until is executed, m_system_event_loop_timer
fires again. Not an ideal solution, but this is the best I could
come up with. This commit makes the behavior match EventLoopImplementationUnix,
in which a single-shot timer can also fire several times.
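For context, a minimal Win32 sketch of the distinction (illustrative only, not
the actual EventLoopManagerWindows code): a manual-reset waitable timer stays
signaled once it has fired, so every subsequent wait sees it again until the
timer is re-armed, whereas an auto-reset timer is consumed by the first
successful wait.

#include <windows.h>

int main()
{
    // TRUE = manual-reset: the timer stays signaled after it fires.
    HANDLE timer = CreateWaitableTimerW(nullptr, TRUE, nullptr);
    if (!timer)
        return 1;

    LARGE_INTEGER due_time {};
    due_time.QuadPart = -1; // relative time in 100ns units: fire almost immediately
    SetWaitableTimer(timer, &due_time, 0, nullptr, nullptr, FALSE);

    // Both waits succeed: the manual-reset timer remains signaled, so a
    // second pump of the event loop still observes it as "fired".
    WaitForSingleObject(timer, INFINITE);
    WaitForSingleObject(timer, INFINITE);

    CloseHandle(timer);
    return 0;
}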
Adding event_queue.process(); at the start of pump, like in
EventLoopImplementationQt, doesn't fix the problem.
Note: Timer::start calls EventReceiver::start_timer, which calls
EventLoop::register_timer with should_reload always set to true
(single-shot vs periodic are handled in Timer::timer_event instead),
so I use static_cast<Timer&>(object).is_single_shot() instead of
!should_reload.
This fixes a problem where none of the timers or notifiers get
executed if wake() is called frequently.
Note that calling WaitForMultipleObjects repeatedly until it fails
will not work because a rapidly firing timer can get all the attention.
That's why I check every event individually with WaitForSingleObject.
This behavior matches EventLoopImplementationUnix.
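A rough sketch of the probing pattern described above (handle bookkeeping and
dispatch are illustrative, not the actual implementation): after the main wait
wakes up, every handle is checked individually with a zero-timeout
WaitForSingleObject so that one constantly signaled handle cannot starve the
others.

#include <windows.h>

#include <cstddef>
#include <vector>

// Probe each handle with a zero timeout and collect the signaled ones,
// instead of trusting only the single index WaitForMultipleObjects returns.
std::vector<std::size_t> collect_signaled(std::vector<HANDLE> const& handles)
{
    std::vector<std::size_t> signaled;
    for (std::size_t i = 0; i < handles.size(); ++i) {
        if (WaitForSingleObject(handles[i], 0) == WAIT_OBJECT_0)
            signaled.push_back(i);
    }
    return signaled;
}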
and unregister_timer in EventLoopManagerWindows
Destructors for thread-local objects are called before destructors of
global (non-thread-local) objects.
This is a partial stack of the problem; thread_data is already
destroyed at this point:
>WebContent.exe!Core::ThreadData::the
WebContent.exe!Core::EventLoopManagerWindows::unregister_notifier
WebContent.exe!Core::EventLoop::unregister_notifier
WebContent.exe!Core::Notifier::set_enabled
WebContent.exe!Core::LocalSocket::~LocalSocket
WebContent.exe!Requests::RequestClient::~RequestClient
WebContent.exe!Web::`dynamic atexit destructor for 's_resource_loader'
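A minimal standalone reproduction of the ordering hazard, with simplified
stand-ins for Core::ThreadData::the() and the s_resource_loader global from the
stack above: the main thread's thread_local is destroyed first, so a
static-duration destructor that reaches back into it touches a dead object.

#include <cstdio>

struct ThreadData {
    ~ThreadData() { std::puts("~ThreadData (thread_local, destroyed first)"); }
};

// Function-local thread_local, mirroring a ThreadData::the()-style accessor.
ThreadData& thread_data()
{
    thread_local ThreadData data;
    return data;
}

struct GlobalClient {
    ~GlobalClient()
    {
        // Runs after the main thread's thread_local destructors. If this
        // destructor called thread_data() here (as ~LocalSocket ends up
        // doing via unregister_notifier), it would touch an object that
        // has already been destroyed.
        std::puts("~GlobalClient (static storage duration, destroyed last)");
    }
};

GlobalClient s_client;

int main()
{
    (void)thread_data(); // construct the thread_local so its destructor runs
    return 0;
}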
Bring back d6080d1fdc with the previously missing check for whether the
underlying socket is closed before accessing `fd()`, which is optional
and empty when the socket is closed.
This allows us to remove the BoundFunction::m_name field, which we
were initializing with a formatted FlyString on every function binding,
despite never using it for anything.
With this change TransportSocket becomes capable of sending large
messages without relying on workarounds, such as sending the message as
a shared memory file descriptor when it can't fully fit into the socket
buffer.
It's implemented by combining all enqueued messages into two buffers,
one for bytes and another for fds, and repeatedly attempting to write
them in smaller chunks, waiting for the socket to become writable again
if the receiver needs time to consume the data.
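A rough sketch of the byte-buffer half of that loop, using plain POSIX
write/poll for illustration; the real code also carries fds (via sendmsg) and
integrates with the event loop instead of blocking:

#include <cerrno>
#include <poll.h>
#include <unistd.h>
#include <vector>

// Write as much of `buffer` as the socket will take, waiting for POLLOUT
// whenever the receiver needs time to drain its end. Returns false on error.
bool write_all(int fd, std::vector<unsigned char>& buffer)
{
    size_t offset = 0;
    while (offset < buffer.size()) {
        ssize_t nwritten = ::write(fd, buffer.data() + offset, buffer.size() - offset);
        if (nwritten > 0) {
            offset += static_cast<size_t>(nwritten);
            continue;
        }
        if (nwritten < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            // Socket buffer is full; wait until it becomes writable again
            // instead of dropping the remaining data.
            struct pollfd pfd = { fd, POLLOUT, 0 };
            if (::poll(&pfd, 1, -1) < 0)
                return false;
            continue;
        }
        if (nwritten < 0 && errno == EINTR)
            continue;
        return false;
    }
    // Everything was sent; clear the bytes we wrote from the send buffer.
    buffer.erase(buffer.begin(), buffer.begin() + offset);
    return true;
}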
Another significant improvement brought by this change is that we no
longer drop messages queued for sending if the socket doesn't become
writable after a 100ms timeout. Instead, we return the message to the
send buffer and continue waiting for the socket to become writable.
FJCVTZS (Floating-point Javascript Convert to Signed fixed-point,
rounding toward Zero) does exactly what we need for ToInt32 in the
JavaScript specification.
This isn't world-changing, but it does give a ~2% boost on compute-
heavy benchmarks like JetStream, so we should obviously use it.
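For reference, a hedged sketch of how the instruction can be reached from C++
on AArch64 targets that support it (inline assembly shown for illustration; the
actual code generator presumably emits the opcode directly). FJCVTZS converts a
double to a signed 32-bit integer with the round-toward-zero, wrap-modulo-2^32
semantics that ECMA-262 ToInt32 requires, whereas a plain FCVTZS saturates
out-of-range values.

#include <cmath>
#include <cstdint>

// Illustrative sketch only: the surrounding integration is assumed.
static int32_t js_to_int32(double value)
{
#if defined(__aarch64__) && defined(__ARM_FEATURE_JCVT)
    int32_t result;
    // FJCVTZS Wd, Dn: double -> int32 with ECMA-262 ToInt32 semantics
    // (truncate toward zero, wrap modulo 2^32, NaN/Infinity -> 0).
    asm("fjcvtzs %w0, %d1" : "=r"(result) : "w"(value) : "cc");
    return result;
#else
    // Simplified portable fallback with the same wrap-around semantics.
    if (!std::isfinite(value))
        return 0;
    double wrapped = std::fmod(std::trunc(value), 4294967296.0);
    if (wrapped < 0)
        wrapped += 4294967296.0;
    return static_cast<int32_t>(static_cast<uint32_t>(wrapped));
#endif
}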