There's a bit of a UTF-8 assumption with this change. But nearly every
caller of these methods was immediately creating a String from the
resulting ByteString anyway.
Our handling of 'optional' return values was previously incorrect: we
would call 'create_data_property' for every single member of the
returned dictionary, even if that member did not have a value (by
falling back to JS::js_null).
This resulted in a massive number of test failures for URL pattern,
which expected 'undefined' as the member value instead of 'null'.
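In sketch form, the fix in the generated bindings looks something like
this (the member and helper names here are illustrative, not the actual
generator output):

    // Only reflect dictionary members that actually have a value, so
    // absent members read back as 'undefined' rather than 'null'.
    if (member.has_value())
        TRY(object->create_data_property(member_name, wrap_for_js(*member)));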
This is a homegrown implementation that wasn't actually used in
dependent classes. If this is needed in the future, using OpenSSL would
probably be a better option.
This change ensures that instead of immediately deallocating the message
buffer after sending, we retain it in an acknowledgement wait queue
until an acknowledgement is received from the peer. This is necessary
to handle a behavior of the macOS kernel, which may prematurely
garbage-collect file descriptors contained within the message buffer
before the peer receives them.
The acknowledgement mechanism assumes messages are received in the same
order they were sent, so each acknowledgement message simply indicates
the count of successfully received messages, specifying how many entries
can safely be removed from the acknowledgement wait queue.
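A minimal sketch of that bookkeeping, assuming a queue of owned message
buffers (the real TransportSocket code differs in naming and detail):

    Queue<MessageBuffer> m_ack_wait_queue;

    void send_message(MessageBuffer buffer)
    {
        write_to_socket(buffer);
        // Keep the buffer (and any file descriptors inside it) alive
        // until the peer confirms receipt.
        m_ack_wait_queue.enqueue(move(buffer));
    }

    void handle_acknowledgement(size_t received_count)
    {
        // Delivery is in-order, so the first 'received_count' entries
        // are now safe to release.
        for (size_t i = 0; i < received_count; ++i)
            (void)m_ack_wait_queue.dequeue();
    }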
Similar to the existing macros for compile options and link options,
this macro wraps the command line definitions for swiftc in a way that
avoids warnings about conditional compilation flags not having values.
This commit removes the -Wno-unused-private-field flag, thus
re-enabling the warning. Unused fields were either removed or, where we
were unsure, marked [[maybe_unused]].
The goal is to do something a bit smarter with the parsing here
than we do for properties. Instead of the JSON saying "here are the
values, and here are the keywords, and we can have up to 3", here we
place the syntax in the JSON directly (though currently broken up as
one string per option) and then we attempt to parse each one in
sequence. It's something we'll need eventually for `@property` among
other things.
...However, in this first pass, I've gone with the simplest option of
hard-coding the types instead of figuring them out properly. So there's
a PositivePercentage type and a UnicodeRangeTokens type, instead of
properly implementing the grammar for those in a generic way.
Add a new JSON file describing at-rule descriptors, and then use it to
generate a DescriptorID enum, along with code to check whether a given
descriptor is accepted in a given at-rule.
Turns out we need to pass this directory to the -frontend command
for creating the interop header, so refactor the whole find module
to find it on each platform.
Add the proper annotations for the Cell and Cell::Visitor classes to be
visible in Swift. This lets us remove some OpaquePointer shenanigans in
the Swift bindings.
This patch adds a workaround for a Swift issue where boolean bitfields
with getters and setters in SWIFT_UNSAFE_REFERENCE types are improperly
imported, causing an ICE.
It turned out that some web applications want to send fairly large
messages to a WebWorker over IPC (for example, MapLibre GL sends
~1200KiB), which led to failures (at least on macOS) because the buffer
size of TransportSocket is limited to 128KiB. This change solves the
problem by wrapping messages that exceed the socket buffer size in
another message that holds the wrapped message's content in shared
memory.
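Roughly, the wrapping looks like this (an illustrative sketch; the
helper names and the exact use of Core::AnonymousBuffer are
assumptions):

    if (message_data.size() > maximum_socket_buffer_size) {
        // Move the payload out-of-line into anonymous shared memory,
        // and send only a small wrapper message (carrying the fd)
        // over the socket.
        auto shared_buffer = TRY(Core::AnonymousBuffer::create_with_size(message_data.size()));
        memcpy(shared_buffer.data<u8>(), message_data.data(), message_data.size());
        transfer(make_large_message_wrapper(shared_buffer));
    } else {
        transfer(message_data);
    }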
Co-Authored-By: Luke Wilde <luke@ladybird.org>
In CMake 4.0, having a minimum policy version set to less than 3.5 is
a hard error at configure time. Add an override until the issue can be
resolved in vcpkg itself.
This removes the old autoplay allowlist file in favor of the new site
setting. We still support the command-line flag to enable autoplay
globally, as this is needed for WPT.
When we build internal pages (e.g. about:settings), there is currently
quite a lot of boilerplate needed to communicate between the browser and
the page. This includes creating IDL for the page and the IPC for every
message sent between the processes.
These internal pages are also special in that they have privileged
access to and control over the browser process.
The framework introduced here serves to ease the setup of new internal
pages and to reduce the access that WebContent processes have to the
browser process. WebUI pages can send requests to the browser process
via a `ladybird.sendMessage` API. Responses from the browser are passed
through a WebUIMessage event. So, for example, an internal page may:
    ladybird.sendMessage("getDataFor", { id: 123 });

    document.addEventListener("WebUIMessage", event => {
        if (event.name === "gotData") {
            console.assert(event.data.id === 123);
        }
    });
To handle these messages, we set up a new IPC connection between the
browser and WebContent processes. This connection is torn down when
the user navigates away from the internal page.
This stops the frame-ancestors WPT tests from crashing when an iframe
is blocked from loading. The tests would get an undefined location.href
from the cross-origin iframe, which caused a crash in code expecting it
to be there.
This involves yeeting the 'invalid' union member, as it was not really
checked against properly anyway; now the 'invalid' state is simply
StringData*{nullptr}, a state that was previously assumed not to exist.
Note that this still accesses inactive union members, but promises to
the compiler that they're fine where they are. (The provided debug
macro AK_STRINGBASE_VERIFY_LAUNDER_DEBUG makes the
would-be-UB-if-not-for-launder ops verify that the operation is
correct.)
Should fix the GCC build.
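A rough sketch of the laundering pattern being described, simplified
and with made-up member names:

    union Impl {
        StringData const* data;   // nullptr is the 'invalid' state
        ShortString short_string; // inline small-string representation
    } m_impl;

    StringData const* raw_data() const
    {
        // Formally still an inactive-member read when short_string is
        // active, but laundered so the compiler accepts the bytes as a
        // valid StringData const*.
        return *__builtin_launder(&m_impl.data);
    }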
These will be used by a similar generator for CSS at-rule descriptors.
For `get_snake_case_function_name_for_css_property_name()`, I've rolled
its behaviour into `snake_casify()` with an optional ability to trim
leading underscores.
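Hypothetically, the combined helper looks something like this (the real
signature and trim handling may differ):

    String snake_casify(StringView name, bool trim_leading_underscores = false)
    {
        StringBuilder builder;
        for (auto ch : name)
            builder.append(ch == '-' ? '_' : ch);
        auto result = MUST(builder.to_string());
        if (trim_leading_underscores) {
            // e.g. "-webkit-foo" -> "_webkit_foo" -> "webkit_foo"
            return MUST(result.trim("_"sv, TrimMode::Left));
        }
        return result;
    }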
The busybox version of the sort utility doesn't support the
--version-sort and --check CLI args, only their short alternatives
(-V and -c, respectively).
Use the short alternatives to gain busybox compatibility.
On some Macs, the default maximum number of file descriptors is 256.
This quickly makes the WPT runner run out of descriptors, so let's check
the active value and increase the soft limit if necessary.
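One way to express that check, sketched in C++ with POSIX
getrlimit/setrlimit (the runner may do the equivalent differently,
e.g. via ulimit):

    #include <sys/resource.h>

    static void ensure_file_descriptor_limit(rlim_t wanted)
    {
        rlimit limit {};
        if (getrlimit(RLIMIT_NOFILE, &limit) != 0)
            return;
        if (limit.rlim_cur >= wanted)
            return;
        // Raise the soft limit, but never beyond the hard limit.
        limit.rlim_cur = wanted < limit.rlim_max ? wanted : limit.rlim_max;
        setrlimit(RLIMIT_NOFILE, &limit);
    }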
"Functional" as in "it's a function token" and not "it works", because
the behaviour for these is unimplemented. :^)
This is modeled after the pseudo-class parsing, but with some changes
based on things I don't like about that implementation. I've
implemented the `<pt-name-selector>` parameter used by view-transitions
for now, but nothing else.
Pseudo-elements have specific rules about which CSS properties can be
applied to them. This is a first step to supporting that.
- If a property whitelist isn't present, all properties are allowed.
- Properties are named as in CSS.
- Names of property groups are prefixed with `#`, which makes this match
the spec more clearly. These groups are implemented directly in the
code generator for now.
- Any property name beginning with "FIXME:" is ignored, so we can mark
properties we don't implement yet.
We previously supported a few -webkit vendor-prefixed pseudo-elements.
This patch adds those back, along with -moz equivalents, by aliasing
them to standard ones. They behave identically, except for serializing
with their original name, just like for unrecognized -webkit
pseudo-elements.
It's likely to be a while before the forms spec settles and authors
start using the new pseudo-elements, so until then, we can still make
use of styles they've written for the non-standard ones.
There are two changes happening here: a correctness fix, and an
optimization. In theory they are unrelated, but the optimization
actually paves the way for the correctness fix.
Before this commit, the HTML tokenizer would attempt to look for named
character references by checking from after the `&` until the end of
m_decoded_input, which meant that it was unable to recognize things like
named character references that are inserted via `document.write` one
byte at a time. For example, if `&notin;` was written one-byte-at-a-time
with `document.write`, then the tokenizer would only check against `n`
since that's all that would exist at the time of the check and therefore
erroneously conclude that it was an invalid named character reference.
This commit modifies the approach taken for named character reference
matching by using a trie-like structure (specifically, a deterministic
acyclic finite state automaton or DAFSA), which allows for efficiently
matching one character at a time, and it is therefore able to pick up
matching where it left off after each code point is consumed.
Note: Because it's possible for a partial match to not actually develop
into a full match (e.g. `&notindo` which could lead to `&notindot;`),
some backtracking is performed after-the-fact in order to only consume
the code points within the longest match found (e.g. `&notindo` would
backtrack back to `&not`).
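In sketch form, the per-code-point loop looks something like this (the
function names are illustrative; see NamedCharacterReferenceMatcher for
the real API):

    while (auto code_point = peek_next_code_point()) {
        // Stop as soon as no named reference can start with the
        // characters consumed so far plus this code point.
        if (!matcher.try_consume_code_point(*code_point))
            break;
        consume_next_code_point();
    }
    // Keep only the code points of the longest full match found,
    // e.g. `&notindo` backtracks to `&not`.
    backtrack_to_end_of_longest_match(matcher);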
With this new approach, `document.write` being called one-byte-at-a-time
is handled correctly, which allows for passing more WPT tests, with the
most directly relevant tests being
`/html/syntax/parsing/html5lib_entities01.html`
and
`/html/syntax/parsing/html5lib_entities02.html`
when run with `?run_type=write_single`. Additionally, the implementation
now better conforms to the language of the spec (and resolves a FIXME)
because exactly the matched characters are consumed and nothing more, so
SWITCH_TO is able to be used as the spec says instead of RECONSUME_IN.
The new approach is also an optimization:
- Instead of a linear search using `starts_with`, the usage of a DAFSA
means that it is always aware of which characters can lead to a match
at any given point, and will bail out whenever a match is no longer
possible.
- The DAFSA is able to take advantage of the note in the section
`13.5 Named character references` that says "This list is static and
will not be expanded or changed in the future." and tailor its Node
struct accordingly to tightly pack each node's data into 32-bits.
Together with the inherent DAFSA property of redundant node
deduplication, the amount of data stored for named character reference
matching is minimized.
In my testing:
- A benchmark tokenizing an arbitrary set of HTML test files was about
1.23x faster (2070ms to 1682ms).
- A benchmark tokenizing a file with tens of thousands of named
character references mixed in with truncated named character
references and arbitrary ASCII characters/ampersands runs about 8x
faster (758ms to 93ms).
- The size of `liblagom-web.so` was reduced by 94.96KiB.
Some technical details:
A DAFSA (deterministic acyclic finite state automaton) is essentially a
trie flattened into an array, but it also uses techniques to minimize
redundant nodes. This provides fast lookups while minimizing the
required data size, but normally does not allow for associating data
related to each word. However, by adding a count of the number of
possible words from each node, it becomes possible to also use it to
achieve minimal perfect hashing for the set of words (which allows going
from word -> unique index as well as unique index -> word). This allows
us to store a second array of data so that the DAFSA can be used as a
lookup for e.g. the associated code points.
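For example, a 32-bit packed node could look like this (field names and
bit widths here are illustrative, not the actual layout):

    struct Node {
        u32 character : 7;          // ASCII character on this transition
        u32 ends_word : 1;          // a full reference name ends here
        u32 is_last_sibling : 1;    // end of this node's sibling list
        u32 number : 8;             // words reachable via this node
        u32 first_child_index : 15; // index of the first child node
    };
    static_assert(sizeof(Node) == 4);

    // Minimal perfect hashing: while matching a word, add 'number' for
    // each earlier sibling skipped and 1 for each end-of-word passed;
    // the running total is a unique index into a parallel array of
    // mapped code points.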
For the Swift implementation, the new NamedCharacterReferenceMatcher
was used to satisfy the previous API and the tokenizer was left alone
otherwise. In the future, the Swift implementation should be updated to
use the same implementation for its NamedCharacterReference state as
the updated C++ implementation.