We currently have a single IPC to set clipboard data. We will also need
an IPC to retrieve that data from the UI. This defines system clipboard
data in LibWeb to handle this transfer, and adds the IPC to provide it.
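A rough sketch of the shape this data could take; the type and function names below are placeholders rather than the actual LibWeb declarations:

    #include <cstdint>
    #include <optional>
    #include <string>
    #include <vector>

    // Illustrative only; the real LibWeb types and IPC endpoint are named differently.
    struct SystemClipboardRepresentation {
        std::string mime_type;          // e.g. "text/plain"
        std::vector<std::uint8_t> data; // raw payload for that MIME type
    };

    // A single clipboard item may carry several representations of the same content.
    struct SystemClipboardItem {
        std::vector<SystemClipboardRepresentation> representations;
    };

    // The new IPC lets WebContent ask the UI process for the current clipboard
    // contents; an empty optional means the clipboard has nothing usable.
    std::optional<SystemClipboardItem> request_clipboard_item_from_ui();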
This adds a basic settings page to manage persistent Ladybird settings.
As a first pass, this exposes settings for the new tab page URL and the
default search engine.
Once search is enabled, the user must explicitly choose a default search
engine; we do not apply one automatically, and search remains disabled
until they do.
There are a couple of improvements that we should make here:
* Settings changes are not broadcast to all open about:settings pages.
  So if two instances are open and the user changes the search engine
  in one, the other instance will show a stale UI.
* Adding an IPC per setting is going to get annoying. It would be nice
  to come up with a smaller set of IPCs that send only the settings
  that actually changed, as sketched below.
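As a loose illustration of that second point, a single generic notification could replace the per-setting IPCs (the names and the JSON encoding here are hypothetical):

    #include <functional>
    #include <string>

    // Hypothetical consolidation: instead of one IPC endpoint per setting, send a
    // single "setting changed" notification carrying only the changed key and its
    // new value (JSON-encoded here purely for illustration).
    struct SettingChange {
        std::string key;        // e.g. "new_tab_page_url" or "search_engine"
        std::string json_value; // the new value for that key
    };

    // Each open about:settings page would observe these and patch only the
    // affected part of its UI, which also addresses the stale-UI problem above.
    void register_setting_change_observer(std::function<void(SettingChange const&)> observer);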
When we inspect a DOM node, we currently serialize many properties for
that node, including its layout, computed style, used fonts, etc. Now
that we aren't piggybacking on the Inspector interface, we can instead
serialize only the specific information required by DevTools.
The intent is that this will replace the separate Task Manager window.
This will allow us to more easily add features such as actual process
management, better rendering of the process table, etc. Included in this
page is the ability to sort table rows.
This also lays the groundwork for more internal `about` pages, such as
about:config.
Site isolation is a common technique to reduce the chance that malicious
sites can access data from other sites. When the user navigates, we now
check if the target site is the same as the current site. If not, we
instruct the UI to perform the navigation in a new WebContent process.
The phrase "site" here is defined as the public suffix of the URL plus
one level. This means that navigating from "www.example.com" to
"sub.example.com" remains in the same process.
There's plenty of room for optimization here. For example, we could
create a spare WebContent process ahead of time and hot-swap it in for
the target site. We could also adopt a policy of keeping the
navigated-from process around, in case the user quickly navigates back.
The "on_received_console_message" and "on_received_console_messages"
were indistinguishable in purpose based on their name. This renames them
to:
on_console_message_available - WebContent has output a console message
and it is available for the client to retrieve.
on_received_styled_console_messages - WebContent has replied to a
request for the available console messages.
The "styled" qualifier is used here to indicate that the messages have
been styled with CSS for display in a WebView. This is to prepare for
an upcoming patch where DevToolsConsoleClient will not stylize the
output; DevTools will want the raw JS values.
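Roughly, the two hooks now look like this (illustrative signatures, not the exact client API):

    #include <functional>
    #include <string>
    #include <vector>

    // Illustrative shapes only; the real client hooks differ.
    struct ConsoleClientHooks {
        // WebContent has output a console message; the client may now request it.
        std::function<void(int message_index)> on_console_message_available;

        // WebContent has replied to such a request; the messages arrive already
        // styled with CSS for display in a WebView.
        std::function<void(int start_index, std::vector<std::string> const& messages)>
            on_received_styled_console_messages;
    };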
The `cursor` property accepts a list of possible cursors that behaves
as a fallback chain: we use the first cursor in the list that is
available. This is a little complicated because remote images have not
loaded initially, so we need to start with a standard fallback cursor
and then switch to the image cursor once it loads.
So, ComputedValues stores a Vector of cursors, and then in EventHandler
we scan down that list until we find a cursor that's ready for use.
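The scan is roughly the following, with simplified stand-ins for the actual LibWeb types:

    #include <variant>
    #include <vector>

    // Simplified stand-ins for the LibWeb types involved.
    struct ImageCursor {
        bool is_loaded { false }; // becomes true once the remote image has loaded
        // bitmap, hotspot, ...
    };
    enum class StandardCursor { Default, Pointer, Text };
    using CursorValue = std::variant<ImageCursor, StandardCursor>;

    // Scan the fallback list: an image cursor only wins once its image has
    // loaded; if nothing in the list is usable yet, use the default cursor.
    std::variant<ImageCursor, StandardCursor> pick_cursor(std::vector<CursorValue> const& cursors)
    {
        for (auto const& cursor : cursors) {
            if (auto const* image = std::get_if<ImageCursor>(&cursor); image && image->is_loaded)
                return *image;
            if (auto const* standard = std::get_if<StandardCursor>(&cursor))
                return *standard;
        }
        return StandardCursor::Default;
    }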
The spec defines cursors as `<url>`, but allows `<image>` instead,
which includes functions like `linear-gradient()`.
This commit implements image cursors in the Qt UI, but not AppKit.
This supports evaluating the script and replying with the result. We
currently serialize JS objects to a string, but we will need to support
dynamic interaction with the objects over IPC. This does not yet support
sending console messages to DevTools.
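In outline, the current behavior looks something like this (helper names are placeholders for the real LibJS plumbing):

    #include <string>

    // Stand-ins for the real LibJS plumbing; names here are placeholders.
    struct JSValue { /* opaque handle to a JS value */ };
    JSValue evaluate_in_top_level_script_context(std::string const& source);
    std::string serialize_js_value_to_string(JSValue const&);

    // Current behavior: evaluate the script, then flatten the result into a
    // string for the DevTools reply. Dynamic object inspection over IPC comes later.
    std::string evaluate_script_for_devtools(std::string const& source)
    {
        return serialize_js_value_to_string(evaluate_in_top_level_script_context(source));
    }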
Some tests take longer than others, and may want to set a custom
timeout so that they pass without increasing the timeout for all other
tests. This is done in WPT, for example.
Add an `internals.setTestTimeout(milliseconds)` method that overrides
the test runner's default timeout for the currently running test.
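One way to picture the override (a sketch only, not the actual Internals implementation):

    #include <chrono>
    #include <optional>

    // Sketch of the idea; the real Internals object differs.
    struct Internals {
        std::optional<std::chrono::milliseconds> test_timeout_override;

        // Exposed to test JS as internals.setTestTimeout(milliseconds); the test
        // runner consults the override instead of its default timeout.
        void set_test_timeout(double milliseconds)
        {
            test_timeout_override = std::chrono::milliseconds(static_cast<long long>(milliseconds));
        }
    };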
This isn't entirely symmetrical with on_load_start as it will also fire
on reloads and back/forward navigations. However, it's good enough for
some basic use cases, and we can do more sophisticated notifications
later on when we need them.
The WebContentView widget now inherits from GUI::ScrollableWidget and
passes its scroll offset over to the WebContent process as it changes.
This is not super efficient and can get a bit laggy, but it will do
fine as an initial scrolling implementation.
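In rough terms, the widget just forwards the offset whenever it changes (the IPC call name is a placeholder):

    // Placeholder types; the real widget talks to WebContent through an IPC client.
    struct IntPoint {
        int x { 0 };
        int y { 0 };
    };

    struct WebContentClient {
        void async_set_scroll_offset(IntPoint); // assumed async IPC call
    };

    // Called by the scrollable widget whenever its scroll offset changes, so the
    // WebContent process can take the new offset into account when painting.
    void did_change_scroll_offset(WebContentClient& client, IntPoint new_offset)
    {
        client.async_set_scroll_offset(new_offset);
    }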
Just like the single-process Web::PageView widget, WebContentView also
paints scrolled content by translating the Gfx::Painter.
After layout, we may want to repaint the page, so we now listen for the
PageClient::page_did_invalidate() notification and use it to drive a
client-side repaint.
Note that an invalidation request from LibWeb makes a full roundtrip
to the WebContent client and back since the client drives painting.
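The roundtrip, seen from the client side (names are illustrative, not the real IPC endpoints):

    // Client-side sketch; names are illustrative, not the real IPC endpoints.
    struct Rect {
        int x { 0 }, y { 0 }, width { 0 }, height { 0 };
    };

    struct WebContentView {
        // WebContent -> client: LibWeb invalidated (part of) the page.
        void notify_server_did_invalidate_content_rect(Rect)
        {
            schedule_repaint();
        }

        // The client schedules a widget repaint; when the widget actually paints,
        // it asks WebContent to render the viewport and blits the result, which
        // is the second half of the roundtrip.
        void schedule_repaint();
    };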
The "WebContent" service provides a very restricted instance of LibWeb
running as an unprivileged user account. This will be used to implement
process separation in Browser, among other things.
This first cut of the service only spawns a single WebContent process
when someone connects to /tmp/portal/webcontent. We will soon switch
this over to spawning a new process for each connection.
Since this feature is very immature, we'll be bringing it up inside
Demos/WebView as a separate demo program. Eventually this will become
a reusable widget that anyone can embed to easily get out-of-process
web content in their GUI.
This is pretty, pretty cool! :^)