Calc simplification (which I'm working towards) involves repeatedly
deriving a new calculation tree from an existing one, and in many
cases, either the whole result or a portion of it will be identical to
that of the original. Using RefPtr lets us avoid making unnecessary
copies. As a bonus it will also make it easier to return either `this`
or a new node.
In future we could also cache commonly-used nodes, similar to how we do
so for 1px and 0px LengthStyleValues and various keywords.
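A rough sketch of the copy-avoiding pattern, using std::shared_ptr to
stand in for AK's RefPtr and an invented CalcNode type (the real
calculation nodes and their simplification API differ):

```cpp
#include <memory>
#include <utility>
#include <vector>

// Hypothetical node type; the real tree uses ref-counted nodes and
// AK's RefPtr/NonnullRefPtr rather than std::shared_ptr.
struct CalcNode : std::enable_shared_from_this<CalcNode> {
    std::vector<std::shared_ptr<CalcNode>> children;

    std::shared_ptr<CalcNode> simplified()
    {
        bool any_child_changed = false;
        std::vector<std::shared_ptr<CalcNode>> new_children;
        for (auto const& child : children) {
            auto new_child = child->simplified();
            any_child_changed |= new_child != child;
            new_children.push_back(std::move(new_child));
        }
        // Nothing changed below us: hand back `this` instead of
        // copying the subtree.
        if (!any_child_changed)
            return shared_from_this();
        auto copy = std::make_shared<CalcNode>();
        copy->children = std::move(new_children);
        return copy;
    }
};
```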
To be properly compatible with calc(), the resolved() methods all need:
- A length resolution context
- To return an Optional, as the calculation might not be resolvable
A bonus of this is that we can get rid of the overloads of `resolved()`
as they now all behave the same way.
A downside is a scattering of `value_or()` wherever these are used. It
might be the case that all unresolvable calculations have been rejected
before this point, but I'm not confident, and so I'll leave it like
this for now.
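Roughly what this looks like at the call sites, sketched with
std::optional in place of AK's Optional and invented type names:

```cpp
#include <optional>

// Illustrative stand-ins; the real types and members differ.
struct Length { double px { 0 }; };
struct CalculationResolutionContext { /* length resolution data, percentage basis, ... */ };

struct LengthOrCalculated {
    std::optional<Length> fixed_length; // empty when this value is a calc()

    // Before: several overloads of resolved(), some of which could not fail.
    // After: one signature, which may fail if the calculation is unresolvable.
    std::optional<Length> resolved(CalculationResolutionContext const& context) const
    {
        if (fixed_length.has_value())
            return fixed_length;
        (void)context;
        // A calc() that needs data missing from the context resolves to nothing.
        return std::nullopt;
    }
};

// ...which is where the scattering of value_or() comes from:
// auto length = value.resolved(context).value_or(Length { 0 });
```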
Initially I added this to the existing CalculationContext, but in
reality, we have some data at parse-time and different data at
resolve-time, so it made more sense to keep those separate.
Instead of needing a variety of methods for resolving a Foo, depending
on whether we have a Layout::Node available, or a percentage basis, or
a length resolution context... put those in a
CalculationResolutionContext, and just pass that one thing to these
methods. This also removes the need for separate resolve_*_percentage()
methods, because we can just pass the percentage basis into the
regular resolve_foo() method.
This also corrects the issue that *any* calculation may need to resolve
lengths, but we previously only passed a length resolution context to
specific types in some situations. Now, they can all have one available,
though it's up to the caller to provide it.
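Conceptually, the context ends up looking something like this (member
names and types here are illustrative only):

```cpp
#include <optional>

// Data needed to resolve lengths (font metrics, viewport size, ...).
struct LengthResolutionContext {
    double font_size_px { 16 };
    double viewport_width_px { 0 };
    double viewport_height_px { 0 };
};

// One bag of optional data that every resolve_*() method accepts,
// instead of a separate overload per combination of Layout::Node /
// percentage basis / length resolution context.
struct CalculationResolutionContext {
    // What 100% corresponds to, when a percentage basis is known.
    std::optional<double> percentage_basis;
    // Present only when the caller actually has length-resolution data.
    std::optional<LengthResolutionContext> length_resolution_context;
};

// resolve_length(value, context), resolve_angle(value, context), ... all
// take the same context, so resolve_*_percentage() variants go away.
```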
Prior to this change, we invalidated all elements in the document if it
used any selectors with :has(). This change aims to improve that by
applying a combination of techniques:
- Collect metadata for each element that was matched against a selector
with :has() in the subject position. This is needed to invalidate all
elements that could be affected by selectors like `div:has(.a:empty)`
because they are not covered by the invalidation sets.
- Use invalidation sets to invalidate elements that are affected by
selectors with :has() in a non-subject position.
Selectors like `.a:has(.b) + .c` still cause whole-document invalidation
because invalidation sets cover only descendants, not siblings. As a
result, there is no performance improvement on github.com due to this
limitation. However, youtube.com and discord.com benefit from this
change.
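Very roughly, the combined approach looks like this (hypothetical
names; the real invalidation logic is considerably more involved):

```cpp
#include <vector>

struct Element {
    // Metadata recorded when this element was matched against a selector
    // with :has() in the subject position, e.g. div:has(.a:empty).
    bool matched_by_has_in_subject_position { false };
    void set_needs_style_update() { /* mark for style recalculation */ }
};

void invalidate_after_mutation(std::vector<Element*> const& all_elements)
{
    // Subject-position :has() is not covered by invalidation sets, so
    // every element carrying the metadata flag is invalidated directly.
    for (auto* element : all_elements) {
        if (element->matched_by_has_in_subject_position)
            element->set_needs_style_update();
    }
    // For :has() in a non-subject position, the regular invalidation
    // sets are consulted instead (omitted here).
}
```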
By using ancestor filters, some selectors can be rejected early,
skipping the selector engine invocation entirely. According to my
measurements, this applies to 30-80% of hover selectors, depending on
the website.
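A toy version of the idea (the real ancestor filter is a counting
Bloom filter maintained while traversing the tree; the names below are
made up):

```cpp
#include <bitset>
#include <functional>
#include <string>
#include <vector>

struct AncestorFilter {
    std::bitset<16384> bits;

    void add(std::string const& ancestor_feature) // class, id, or tag name
    {
        bits.set(std::hash<std::string> {}(ancestor_feature) % bits.size());
    }

    bool may_contain(std::string const& ancestor_feature) const
    {
        return bits.test(std::hash<std::string> {}(ancestor_feature) % bits.size());
    }
};

// For ".sidebar:hover .item", the element can only match if some ancestor
// has class "sidebar"; if the filter says no such ancestor exists, we can
// skip the selector engine entirely.
bool can_reject_early(AncestorFilter const& filter, std::vector<std::string> const& required_ancestor_features)
{
    for (auto const& feature : required_ancestor_features) {
        if (!filter.may_contain(feature))
            return true; // definitely cannot match
    }
    return false; // might match; run the full selector engine
}
```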
We have an optimization that allows us to invalidate only the style of
the element itself and mark descendants for an inherited-properties
update when the "style" attribute changes (unless there are CSS rules
that use the "style" attribute, in which case we also invalidate all
descendants that might be affected by those rules). This optimization
was not taking into
account that when the inline style has custom properties, we also need
to invalidate all descendants whose style might be affected by them.
This change fixes this bug by saving a flag in Element that indicates
whether its style depends on any custom properties and then invalidating
all descendants with this flag set when the "style" attribute changes.
Unlike font-relative length invalidation, for elements that depend on
custom properties we need to recompute the whole style rather than
individual properties, because the values that still contain
unexpanded custom properties are gone after cascading, so the cascade
has to be done again.
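Sketch of the mechanism, with invented names (the real flag and update
paths live in Element and the style invalidation code):

```cpp
#include <vector>

struct Element {
    // Set during style computation if any of this element's values used var().
    bool style_uses_custom_properties { false };
    std::vector<Element*> descendants;

    void set_needs_full_style_update() { /* re-run the cascade for this element */ }
    void set_needs_inherited_properties_update() { /* cheaper partial update */ }

    void inline_style_attribute_changed(bool inline_style_has_custom_properties)
    {
        set_needs_full_style_update();
        for (auto* descendant : descendants) {
            // Descendants that used var() must redo the whole cascade, since
            // the values with unexpanded custom properties are not kept around.
            if (inline_style_has_custom_properties && descendant->style_uses_custom_properties)
                descendant->set_needs_full_style_update();
            else
                descendant->set_needs_inherited_properties_update();
        }
    }
};
```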
The test added for this change is a version of an existing test,
restructured so that it doesn't trigger the aggressive style
invalidation caused by DOM structural changes until the last moment,
when the test results are printed.
This means we only need to consider rules from the document and the
current shadow root, instead of the document and *every* shadow root.
Dramatically reduces the amount of rules processed on many pages.
Shaves 2.5 seconds of load time off of https://wpt.fyi/ :^)
For all invalidation properties nested inside an :nth-child()
argument list, we need to invalidate the whole subtree to make sure
the style of sibling elements is recalculated.
Instead of creating and passing around Vector<MatchingRule> inside
StyleComputer (internally, not exposed in the API), we now use
vectors of pointers/references.
Note that we use pointers where we want to quick_sort() the vectors.
Knocks 4 seconds of loading time off of https://wpt.fyi/
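Illustratively (std::vector and std::sort standing in for AK's Vector
and quick_sort(); MatchingRule here is a placeholder):

```cpp
#include <algorithm>
#include <vector>

struct MatchingRule {
    int specificity { 0 };
    // ... several more members, making copies non-trivial ...
};

// Before: Vector<MatchingRule> was built and passed around, copying the
// structs. Now we collect pointers into the existing storage and sort those.
void sort_matching_rules(std::vector<MatchingRule const*>& rules)
{
    std::sort(rules.begin(), rules.end(), [](auto const* a, auto const* b) {
        return a->specificity < b->specificity;
    });
}
```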
Instead, change the APIs from "has :foo" to "may have :foo" and return
true if we don't have a valid rule cache at the moment.
This allows us to defer rebuilding the rule cache until a later
time, at the cost of a wider invalidation right now.
Do note that if our rule cache is invalid, the whole document has
invalid style anyway! So this is actually always less work. :^)
Knocks ~1 second of loading time off of https://wpt.fyi/
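The API shape, roughly (invented names):

```cpp
struct RuleCache {
    bool has_hover_rules { false };
};

struct StyleScope {
    RuleCache const* rule_cache { nullptr }; // null while invalidated

    // Conservative: if the cache is stale, answer "maybe" instead of
    // rebuilding it on the spot. Callers then invalidate more broadly,
    // which is fine, since an invalid rule cache already implies the
    // whole document has invalid style.
    bool may_have_hover_rules() const
    {
        if (!rule_cache)
            return true;
        return rule_cache->has_hover_rules;
    }
};
```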
Implements idea described in
https://docs.google.com/document/d/1vEW86DaeVs4uQzNFI5R-_xS9TcS1Cs_EUsHRSgCHGu8
Invalidation sets are used to reduce the number of elements marked for
style recalculation by collecting metadata from style rules about the
dependencies between properties that could affect an element’s style.
Currently, this optimization is only applied to style invalidation
triggered by class list mutations on an element.
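The core idea, in a very reduced form (illustrative structure only):

```cpp
#include <string>
#include <unordered_map>
#include <unordered_set>

// For a rule like ".a .b { ... }", changing class "a" on an element only
// requires restyling its descendants that have class "b", not the whole
// document. The metadata below is collected once per style rule.
using InvalidationSets = std::unordered_map<
    std::string /* changed class */,
    std::unordered_set<std::string> /* descendant classes to restyle */>;

void collect_descendant_invalidation(InvalidationSets& sets)
{
    // ".a .b {}" contributes: a change to "a" may affect descendants with "b".
    sets["a"].insert("b");
}
```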
Same again, although rotation is more complicated: `rotate`
is "equivalent to" multiple different transform function depending on
its arguments. So we can parse as one of those instead of the full
`rotate3d()`, but then need to handle this when serializing.
The only ways this varies from the `scale()` function are parsing
and serialization. Parsing stays separate, and serialization is done by
telling `TransformationStyleValue` which property it is, and overriding
its normal `to_string()` code for properties other than `transform`.
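A sketch of the serialization split (names and stored members are
invented; only the property-dependent branching is the point):

```cpp
#include <string>

enum class PropertyID { Transform, Scale };

// Illustrative: the style value remembers which property it was parsed for,
// and serializes e.g. `scale: 2 3` differently from `transform: scale(2, 3)`.
struct TransformationStyleValue {
    PropertyID property { PropertyID::Transform };
    std::string x { "1" };
    std::string y { "1" };

    std::string to_string() const
    {
        if (property == PropertyID::Scale)
            return x + " " + y;               // the `scale` property's own syntax
        return "scale(" + x + ", " + y + ")"; // the function form, inside `transform`
    }
};
```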
Invalidation for adopted style sheets was broken because we assumed
that an "active" style sheet is always attached to a StyleSheetList,
which is not true for adopted style sheets. This change
addresses that by keeping track of all documents/shadow roots that own
a style sheet and notifying them about invalidation instead of going
through the StyleSheetList.
`current_property_id()` is insufficient to determine if a quirk is
allowed. For example, unitless lengths are allowed in certain
properties, but NOT if they are inside a calc() or other function. It's
also incorrect when we are parsing a longhand inside a shorthand. So
instead, replace that with a stack of value-parsing contexts. For now,
this is either properties or CSS functions, but in future can be
expanded to include media features and other places.
This lets us disallow quirks inside functions, like we're supposed to.
It also lays the groundwork for being able to more easily determine
what type a percentage inside a calculation should become, as this is
based on the same stack of contexts.
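A sketch of the stack-based quirk check (names are invented; this only
illustrates the unitless-length case):

```cpp
#include <variant>
#include <vector>

enum class PropertyID { FontSize, Width /* ... */ };
enum class FunctionContext { Calc, Min, Max /* ... */ };
using ValueParsingContext = std::variant<PropertyID, FunctionContext>;

struct Parser {
    // Pushed/popped as we enter and leave properties and functions,
    // replacing the single current_property_id().
    std::vector<ValueParsingContext> context_stack;

    bool unitless_length_quirk_allowed() const
    {
        // Walk from the innermost context outwards: hitting a function
        // before a property means we are inside calc()/etc., so no quirk.
        for (auto it = context_stack.rbegin(); it != context_stack.rend(); ++it) {
            if (std::holds_alternative<FunctionContext>(*it))
                return false;
            if (auto const* property = std::get_if<PropertyID>(&*it))
                return property_allows_unitless_length_quirk(*property);
        }
        return false;
    }

    static bool property_allows_unitless_length_quirk(PropertyID) { return true; } // placeholder
};
```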
URL::basic_parse has a subtle bug where the resulting URL is not set
to valid when a StateOverride is provided and the URL parser early
returns a valid URL.
This has not surfaced as a problem so far, as the only users of the
state override API provide an already valid URL buffer and also ignore
the result of basic parsing with a state override.
However, this bug surfaces when implementing the URL pattern spec,
which, as part of URL canonicalization:
* Provides a dummy URL record
* Basic URL parses that URL with state override
* Checks the result of the URL parser to validate the URL
While we could set URL validity on every early return of the URL parser
during state override, there has been a long-standing FIXME around
the code to try and remove the awkward validity state of the URL
class. So this
commit makes the first stage of this change by migrating the basic
parser API to return Optional, which also happens to make this subtle
issue not a problem any more.
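The new shape of the API, sketched with std::optional in place of
AK's Optional and a trivial stand-in URL type:

```cpp
#include <optional>
#include <string>

struct URL { std::string serialized; }; // stand-in; no is_valid() flag to forget

std::optional<URL> basic_parse(std::string const& input)
{
    if (input.empty())
        return std::nullopt; // failure is no longer representable as a "valid-looking" URL
    return URL { input };
}

// Callers are forced to handle the failure case:
// if (auto url = basic_parse(input); url.has_value())
//     use(*url);
```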
Rather than partly-converting number, dimension, and ident tokens at the
start of parsing a calculation, and then later finishing it off, we can
just do the whole step in convert_to_calculation_node(). This is a
little less code, but mainly means we are left with only a single use
of the Dimension type in the codebase, so that can be removed soon.
Various places in the spec allow for `<number> | <percentage>`, but this
is either/or, and they are not allowed to be combined like dimensions
and percentages are. (For example, `calc(12 + 50%)` is never valid.)
User code generally doesn't need to care about this distinction, but
it does now need to check separately whether a calculation resolves
to a number or to a percentage, instead of making a single call.
The existing parse_number_percentage[_value]() methods have been kept
for simplicity, but updated to check for number/percentage separately.
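Illustratively (invented names; std::optional and std::variant stand
in for AK's types):

```cpp
#include <optional>
#include <variant>

struct Percentage { double value { 0 }; };

// A calculation now resolves to *either* a number or a percentage, never a
// combined value, so callers ask two separate questions.
struct Calculation {
    std::variant<double, Percentage> result;

    std::optional<double> resolve_number() const
    {
        if (auto const* number = std::get_if<double>(&result))
            return *number;
        return std::nullopt;
    }

    std::optional<Percentage> resolve_percentage() const
    {
        if (auto const* percentage = std::get_if<Percentage>(&result))
            return *percentage;
        return std::nullopt;
    }
};

// parse_number_percentage(): try resolve_number(), then resolve_percentage().
```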
An upcoming change requires that we can determine which property we are
parsing before we parse the value. That's the opposite of what this
code previously did, which was to parse a generic dimension or calc()
and then figure out what property would accept it.