
Add WP_XML_Tag_Processor and WP_XML_Processor #6713

Open · wants to merge 62 commits into trunk

Conversation

@adamziel adamziel commented Jun 3, 2024

Trac Ticket: Core-61365

What

Proposes an XML Tag Processor and an XML Processor to complement the HTML Tag Processor and the HTML Processor.

The XML API implements a subset of the XML 1.0 specification and supports documents with the following characteristics:

  • XML 1.0
  • Well-formed
  • UTF-8 encoded
  • Not standalone (so can use external entities)
  • No DTD, DOCTYPE, ATTLIST, ENTITY, or conditional sections

The API and ideas closely follow the HTML API implementation. The parser is streaming in nature, has a minimal memory footprint, and leaves unmodified markup as it was originally found.

It also supports streaming large XML documents; see adamziel#43 for a proof of concept.

Design decisions

Ampersand handling in text and attribute values

XML attributes cannot contain the characters < or &.

Enforcing < is fast and easy. Enforcing & is slow and complex because ampersands are allowed when they start a valid entity reference. This means we'd have to decode all entities as we scan through the document – it doesn't seem to be worth it.

Right now, the WP_XML_Tag_Processor will only bail out when attempting to explicitly access an attribute value or text containing an invalid entity reference.
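As a rough illustration of that bail-out condition, a hypothetical helper (not part of this PR; the name is illustrative) that checks whether every ampersand in a span of text begins a well-formed reference might look like:

```php
// Hypothetical sketch, not WP_XML_Tag_Processor's actual code: return
// whether every "&" in $text starts a well-formed entity reference
// (one of the five named references or a numeric reference).
function xml_text_has_valid_references( string $text ): bool {
	$at = 0;
	while ( false !== ( $at = strpos( $text, '&', $at ) ) ) {
		// The "A" modifier anchors the match at the given offset.
		if ( 1 !== preg_match(
			'/&(#[0-9]+|#x[0-9a-fA-F]+|amp|lt|gt|quot|apos);/A',
			$text,
			$match,
			0,
			$at
		) ) {
			return false;
		}
		$at += strlen( $match[0] );
	}
	return true;
}
```

An unescaped value such as `AT&T` fails this kind of check, which is the case where the processor reports invalid markup.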

Accepting all byte sequences up to < as text data

XML spec defines text data as follows:

[2] Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF] /* any Unicode character, excluding the surrogate blocks, FFFE, and FFFF. */

Currently, WP_XML_Tag_Processor does not attempt to reject bytes outside of these ranges. It will treat everything between valid elements as text data. On the upside, we avoid an expensive preg_match() call for processing text. On the downside, we won't bail out early when processing a malformed, truncated, or non-UTF-8 document.

I think it's a good trade-off to take, but I'm also happy to discuss.
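For reference, the rejected alternative amounts to running every text span through a pattern built from the Char production above; a hedged sketch (helper name is illustrative):

```php
// Sketch of the validation the processor deliberately skips: match text
// data against the Char production. The /u modifier also makes invalid
// UTF-8 input fail the match.
function is_valid_xml_char_data( string $text ): bool {
	return 1 === preg_match(
		'/^[\x{9}\x{A}\x{D}\x{20}-\x{D7FF}\x{E000}-\x{FFFD}\x{10000}-\x{10FFFF}]*$/u',
		$text
	);
}
```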

Entity decoding

This API only decodes the following entities:

  • Five character references mandated by the XML specification, that is &amp;, &lt;, &gt;, &quot;, &apos;
  • Numeric character references, e.g. &#123; or &#x1A;

Other entities trigger a parse error. This API could use the full HTML decoder, but it would be wrong and slow here. If lack of support for other entities ever becomes a problem, html_entity_decode() would be a good replacement for the custom implementation. It's worth noting that in XML1 mode it doesn't decode many HTML entities:

> html_entity_decode( '&oacute;' );
string(2) "ó"
> html_entity_decode( '&oacute;', ENT_XML1, 'UTF-8' );
string(8) "&oacute;"
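For completeness, ENT_XML1 does cover the five mandated named references and numeric character references, which is what makes html_entity_decode() a plausible fallback:

```php
// ENT_XML1 decodes the five XML-mandated named references and numeric
// character references, while leaving HTML-only entities untouched.
echo html_entity_decode( '&lt;b&gt; &#65; &amp; &oacute;', ENT_XML1, 'UTF-8' );
// <b> A & &oacute;
```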

Follow-up work

  • Usage examples
  • Document the WP_XML_Processor class as thoroughly as the WP_HTML_Processor class is documented today.
  • Add a ton more unit tests for other compliance and failure modes.

Out of scope for the foreseeable future

cc @dmsnell @sirreal


github-actions bot commented Jun 3, 2024

Trac Ticket Missing

This pull request is missing a link to a Trac ticket. For a contribution to be considered, there must be a corresponding ticket in Trac.

To attach a pull request to a Trac ticket, please include the ticket's full URL in your pull request description. More information about contributing to WordPress on GitHub can be found in the Core Handbook.


github-actions bot commented Jun 3, 2024

The following accounts have interacted with this PR and/or linked issues. I will continue to update these lists as activity occurs. You can also manually ask me to refresh this list by adding the props-bot label.

Core Committers: Use this line as a base for the props when committing in SVN:

Props zieladam, dmsnell, jonsurrell.

To understand the WordPress project's expectations around crediting contributors, please review the Contributor Attribution page in the Core Handbook.


github-actions bot commented Jun 3, 2024

Test using WordPress Playground

The changes in this pull request can be previewed and tested using a WordPress Playground instance.

WordPress Playground is an experimental project that creates a full WordPress instance entirely within the browser.

Some things to be aware of

  • The Plugin and Theme Directories cannot be accessed within Playground.
  • All changes will be lost when closing a tab with a Playground instance.
  • All changes will be lost when refreshing the page.
  • A fresh instance is created each time the link below is clicked.
  • Every time this pull request is updated, a new ZIP file containing all changes is created. If changes are not reflected in the Playground instance,
    it's possible that the most recent build failed, or has not completed. Check the list of workflow runs to be sure.

For more details about these limitations and more, check out the Limitations page in the WordPress Playground documentation.

Test this pull request with WordPress Playground.

* | ----------------|---------------------------------------------------------------------|
* | *Prolog* | The parser is parsing the prolog. |
* | *Element* | The parser is parsing the root element. |
* | *Misc* | The parser is parsing miscellaneous content. |
Member

It might be helpful to start without an XML Tag Processor and go straight to the XML Processor, following the design of the HTML Processor with its step() and step_in_body() following the different contexts of the HTML API spec.

Prolog, for example, sounds an awful lot like the IN PROLOG context, where the rules can be contained and indicate a state transition rather than embedding the rules for staging in the lower-level tag processing.

Contributor Author

Yup, good call

Member

Shipping only an XML Processor was quite tedious

out of curiosity, what was tedious about it? I have a hard time understanding the need for an XML tag processor, since the nesting rules are straightforward when compared with HTML, where the complicated semantic rules forced a split

Contributor Author

I found it difficult to:

  • Think about two different levels of parsing (tokenization, context) in a single class
  • Avoid intertwining them
  • Come up with names for all the methods

That being said, there's nothing inherently wrong with the approach and I can see how it could work.

@adamziel
Contributor Author

adamziel commented Jun 4, 2024

There's only one actual test failure on PHP 7:

14) Tests_XmlApi_WpXmlTagProcessor::test_token_bookmark_span with data set "DIV end tag with attributes" ('</wp:content wp:post-type="x"..."yes">', 1, '</wp:content wp:post-type="x"..."yes">')
Bookmarked wrong span of text for full matched token.
Failed asserting that two strings are identical.
--- Expected
+++ Actual
@@ @@
-'</wp:content wp:post-type="x" disabled="yes">'
+''

Edit: That was an invalid test – XML has no attributes in tag closers.

The rest of it is the PHPUnit setup complaining about _doing_it_wrong messages in tests exercising the different failure modes. I tried resolving that with expectedIncorrectUsage and $this->setExpectedIncorrectUsage, but I can't figure it out.

@adamziel
Contributor Author

adamziel commented Jun 4, 2024

I'm on the fence about get_inner_text(). It feels wrong, but it's quite useful for parsing WXR:

I ended up removing get_inner_text() – it made streaming difficult for little added benefit. Extracting inner text is easy – here's an example with streaming:

$wxr = file_get_contents(__DIR__ . '/test.wxr');
$xml_stream = stream_next_xml_token(chunk_text($wxr));
foreach($xml_stream as $processor) {
    if (
        // We could also match on #text nodes
        $processor->get_token_type() === '#cdata-section' && 
        $processor->matches_breadcrumbs(array('content:encoded'))
    ) {
        echo "\n " . dump_token($processor);
    }
}

* We limit this scan to 30 characters, which allows twenty zeros at the front.
*/
30
);
Member

in this case I don't think we want a limit on these. for one, XML requires the trailing semicolon, and for two, we don't want to add rules to the spec and accidentally let something slip in, for example, by prepending a long reference in the front.

I can clean this up here, but I think it's important we not mush together all of the characters. something like this…

if ( '#' === $text[ $next_character_reference_at + 1 ] ) {
	$is_hex = 'x' === $text[ $next_character_reference_at + 2 ] || 'X' === $text[ … ];
	$zeros_start_at  = $next_character_reference_at + 3 + ( $is_hex ? 1 : 0 );
	$zeros_length    = strspn( $text, '0', $zeros_start_at );
	$digits_start_at = $zeros_start_at + $zeros_length;
	$digit_chars     = $is_hex ? '0123456789abcdefABCDEF' : '0123456789';
	$digits_length   = strspn( $text, $digit_chars, $digits_start_at );
	$semicolon_at    = $digits_start_at + $digits_length;

	// Must be followed by a semicolon.
	if ( ';' !== $text[ $semicolon_at ] ) {
		return false;
	}

	// Must contain at least one digit.
	if ( 0 === $digits_length ) {
		return false;
	}

	/*
	 * Must encode a valid Unicode code point.
	 * (Avoid parsing more than is necessary).
	 */
	$max_digits = $is_hex ? 6 : 7;
	if ( $digits_length > $max_digits ) {
		return false;
	}

	$base       = $is_hex ? 16 : 10;
	$code_point = intval( substr( $text, $digits_start_at, $digits_length ), $base );
	if ( is_allowable_code_point( $code_point ) ) {
		$decoded .= WP_HTML_Decoder::code_point_to_utf8_bytes( $code_point );
		continue;
	}

	return false;
}

// Must be a named character reference.
$name_starts_at = $next_character_reference_at + 1;

$standard_entities = array(
	'amp;'  => '&',
	'apos;' => "'",
	'gt;'   => '>',
	'lt;'   => '<',
	'quot;' => '"',
);

foreach ( $standard_entities as $name => $replacement ) {
	if ( 0 === substr_compare( $text, $name, $name_starts_at, strlen( $name ) ) ) {
		$decoded .= $replacement;
		continue 2; // Resume the outer decoding loop.
	}
}

granted, we want to perform all length checks to avoid crashing, but I think we have to scan the entire thing to avoid mis-parsing and security issues

Contributor Author

@adamziel adamziel Jun 4, 2024

Oh, the decoder won't just skip over the 30 bytes when it stumbles upon &#x000000000000000000000000000000000000000000000000000000000000000000000123. If there's no valid reference within those 30 bytes, that's a parse failure and we halt. Still, I think you're right and it wouldn't be that bad. If someone loaded 1GB of data into memory, they must be expecting the parser will scan through it eventually.

@sirreal
Member

sirreal commented Jun 5, 2024

Have you seen the Extensible Markup Language (XML) Conformance Test Suites? It may be a helpful resource to find a lot of test cases, although after a quick scan many cases seem to violate "No DTD, DOCTYPE, ATTLIST, ENTITY, or conditional sections"

… the MISC context where whitespace text nodes are always complete and non-whitespace ones are syntax errors
@adamziel
Contributor Author

adamziel commented Jun 6, 2024

Have you seen the Extensible Markup Language (XML) Conformance Test Suites? It may be a helpful resource to find a lot of test cases, although after a quick scan many cases seem to violate "No DTD, DOCTYPE, ATTLIST, ENTITY, or conditional sections"

I haven't seen that one, thank you for sharing! With everything that's going on in Playground I may not be able to iron out all the remaining nuances here – would you like to take over at one point once XML parsing becomes a priority?

*
* @var bool
*/
protected $is_incomplete_text_node = false;
Member

this name confused me. I think something like $inside_document_trailing_whitespace could be clearer, but I could still be confused

@dmsnell
Member

dmsnell commented Jun 6, 2024

Remember too that text appears in XML outside of CDATA sections. Any text content between tags is still part of the inner text. CDATA is there to allow raw content without escaping it. For example, it's possible to embed HTML as-is inside CDATA whereas outside of it, all of the angle brackets and ampersands would need to be escaped.

In effect, CDATA could be viewed transparently in the decoder as part of any other text segment, where the only difference is that CDATA decoding is different than normal text decoding.

Probably need to examine the comment parser because XML cannot contain -- inside a comment. What @sirreal recommended about the test suite is a good idea - it would uncover all sorts of differences with HTML. Altogether though I don't think WordPress needs or wants a fully-spec-compliant parser. We want one that addresses our needs, which is a common type of incorrectly-generated XML. Maybe we could have a parser mode to reject all errors, but leave it off by default.
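On the comment rule specifically, the extra constraint beyond HTML is small; a hypothetical validity check (helper name is illustrative, not from this PR) could be:

```php
// XML comment text may not contain "--" and may not end with "-",
// since either would collide with the "-->" closing delimiter.
function is_valid_xml_comment_text( string $text ): bool {
	return false === strpos( $text, '--' ) && '-' !== substr( $text, -1 );
}
```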

@adamziel
Contributor Author

adamziel commented Jun 6, 2024

@dmsnell Text is supported, or at least should be – is there a specific scenario that this PR doesn't handle today?

@dmsnell
Member

dmsnell commented Jun 6, 2024

@adamziel it was specifically your get_inner_text() replacement above

@adamziel
Contributor Author

adamziel commented Jun 6, 2024

Oh that replacement is just overly selective, it can target text nodes, too.

By the way, there's no easy way of setting text in a tag without any text nodes. I'm noodling on emitting an empty text node, but haven't yet found a good way of doing it. Another idea would be to couple the tag token with the text that follows it, but that sounds unintuitive for the consumer of this API.

@dmsnell
Member

dmsnell commented Jun 7, 2024

By the way, there's no easy way of setting text in a tag without any text nodes.

You're way too fast for me to keep up on this, but it shouldn't be that hard, because we will know where the start and end of a tag is, or if it's an empty tag, we know the full token.

We'll just need to create a function like set_inner_text() which matches the different cases and replaces the appropriate token or tokens to make it happen.

@dmsnell
Member

dmsnell commented Jun 8, 2024

@adamziel I pushed some changes to the decoder to avoid mixing the HTML and XML parsing rules. Sadly, while I thought that PHP's html_entity_decode( $text, ENT_XML1 ) might be sufficient, it allows capital X in a hexadecimal numeric character reference, which is a divergence from the spec.

In mucking around I became aware of how much more the role of errors is going to have to play in an XML API. I don't have any idea what's best. Character encoding failures I would assume are going to be fairly benign as long as we treat those failures as plaintext instead of actually decoding them, but that's a point to ponder.

This is going to get interesting with documents mixing HTML and XML, such as WXR. We're going to need to ensure that the tag and text parsing rules are properly separated. I'm still not sure what that means for us when we find something like a WXR without proper escaping of the content inside.

@adamziel
Contributor Author

You're way too fast for me to keep up on this, but it shouldn't be that hard, because we will know where the start and end of a tag is, or if it's an empty tag, we know the full token.

Streaming makes it a bit more difficult, e.g. we may not have the closer yet, or we may not have the opener anymore. Perhaps pausing on incomplete input before yielding the tag opener would be useful here.

@adamziel
Contributor Author

@adamziel I pushed some changes to the decoder to avoid mixing the HTML and XML parsing rules. Sadly, while I thought that PHP's html_entity_decode( $text, ENT_XML1 ) might be sufficient, it allows capital X in a hexadecimal numeric character reference, which is a divergence from the spec.

Oh dang it! Too bad.

In mucking around I became aware of how much more the role of errors is going to have to play in an XML API. I don't have any idea what's best. Character encoding failures I would assume are going to be fairly benign as long as we treat those failures as plaintext instead of actually decoding them, but that's a point to ponder.

Yes, that struck me too. At minimum, we'll need to communicate on which byte offset the error has occurred. Ideally, we'd show the context of the document, highlight the relevant part, and give a highly informative error message.

This is going to get interesting with documents mixing HTML and XML, such as WXR. We're going to need to ensure that the tag and text parsing rules are properly separated. I'm still not sure what that means for us when we find something like a WXR without proper escaping of the content inside.

I'm not sure I follow. By escaping of the content do you mean, say, a missing <![CDATA[ opener, or having an HTML CDATA-lookalike comment inside of an XML CDATA section? There isn't much we can do, other than marking specific tags as PCDATA and stripping the initial CDATA opener and final CDATA closer.

@dmsnell
Member

dmsnell commented Jun 10, 2024

Streaming makes it a bit more difficult, e.g. we may not have the closer yet, or we may not have the opener anymore. Perhaps pausing on incomplete input before yielding the tag opened would be useful here.

My plan with the HTML API is to allow a "soft limit" on memory use. If we need to we can add an additional "hard limit" where it will fail. Should content come in and we're still inside a token, we just keep ballooning past the soft limit until we run out of memory, hit the hard limit, or parse the token.

So in this way I don't see streaming as a problem. The goal is to enable low-latency and low-overhead processing, but if we have to balloon in order to handle more extreme documents we can break the rules as long as it's not too wild.

I prototyped this with the 🔔 Notifications on WordPress.com though in a slightly different way. The Notifications Processor has a runtime duration limit and a length limit, counting code points in the text nodes of the processed document. If it hits the runtime duration limit, it stops processing formatting and instead rapidly joins the remaining text nodes as unformatted plaintext. If it hits the length limit it stops processing altogether.

I believe that this Processor framework opens up new avenues for constraints and graceful degradation beyond those limits.

Yes, that struck me too. At minimum, we'll need to communicate on which byte offset the error has occurred. Ideally, we'd show the context of the document, highlight the relevant part, and give a highly informative error message.

This could be an interesting operating mode: when bailing, produce an Elm/Rust-quality error message. The character reference errors make me think we also have some notion of recoverable and non-recoverable errors. The character reference error does not cause syntactic issues, so we can zoom past them if we want, collecting them along the way into a list of errors. Errors for things like invalid tag names, attribute names, etc… are similar.

Missing or unexpected tags though I could see as being more severe since they have ambiguous impact on the rest of the document.

I'm not sure I follow. By escaping of the content do you mean, say, missing <<CDATA[ opener, or having an HTML CDATA-lookalike comment inside of an XML CDATA section? There isn't much we can do, other than marking specific tags as PCDATA and stripping the initial CDATA opener and final CDATA closer.

Some WXRs I've seen will have something like <content:encoded><![CDATA[<p>This is a post</p>]]></content:encoded>. Others, instead of relying on CDATA, directly encode the HTML (Blogger does this) and it looks like <content:encoded>&lt;p&gt;This is a post&lt;/p&gt;</content:encoded>. The former is more easily recognizable.

I believe that I've seen WXRs that are broken by not encoding the HTML either way, and these are the ones that scare me: <content:encoded><p>This is a post</p></content:encoded>. Or maybe it has been (non-WXR) RSS feeds where I've seen this.

Because the embedded document isn't encoded there's no boundary to detect it. I think this implies that for WordPress, our XML parser could have reason to be itself a blend of XML and HTML. For example:

  • If a tag is known to be an HTML tag interpret it as part of the surrounding text content.
  • Like you brought up, list known WXR or XML tags and treat them differently.
  • Directly encode rules for HTML-containing tags that we see in practice. We could have a list, even a list of breadcrumbs where they may be found.

This exploration is quite helpful because I think it's starting to highlight how the shape of WordPress' XML parsing needs differ from those of HTML.

@dmsnell
Member

dmsnell commented Jun 10, 2024

💡 This will generally not be in the most critical performance hot path. We can probably relax some of the excessive optimizations, for example by relying on more functions to separate concepts like parse_tag_name(), parse_attribute_name(), and the like. This modularity would probably aid in comprehension, particularly since XML's rules are more constrained than HTML's.

Update: You already had this idea 😆

adamziel added a commit to adamziel/wxr-normalize that referenced this pull request Jul 15, 2024
Brings together a few explorations to stream-rewrite site URLs in a WXR file coming
from a remote server. All of that with no curl, DOMDocument, or other
PHP dependencies. It's just a few small libraries built with WordPress
core in mind:

* [AsyncHttp\Client](WordPress/blueprints#52)
* [WP_XML_Processor](WordPress/wordpress-develop#6713)
* [WP_Block_Markup_Url_Processor](https://github.com/adamziel/site-transfer-protocol)
* [WP_HTML_Tag_Processor](https://developer.wordpress.org/reference/classes/wp_html_tag_processor/)

Here's what the rewriter looks like:

```php
$wxr_url = "https://github.com/raw/WordPress/blueprints/normalize-wxr-assets/blueprints/stylish-press-clone/woo-products.wxr";
$xml_processor = new WP_XML_Processor('', [], WP_XML_Processor::IN_PROLOG_CONTEXT);
foreach( stream_remote_file( $wxr_url ) as $chunk ) {
    $xml_processor->stream_append_xml($chunk);
    foreach ( xml_next_content_node_for_rewriting( $xml_processor ) as $text ) {
        $string_new_site_url           = 'https://mynew.site/';
        $parsed_new_site_url           = WP_URL::parse( $string_new_site_url );

        $current_site_url              = 'https://github.com/raw/wordpress/blueprints/normalize-wxr-assets/blueprints/stylish-press-clone/wxr-assets/';
        $parsed_current_site_url       = WP_URL::parse( $current_site_url );

        $base_url = 'https://playground.internal';
        $url_processor = new WP_Block_Markup_Url_Processor( $text, $base_url );

        foreach ( html_next_url( $url_processor, $current_site_url ) as $parsed_matched_url ) {
            $updated_raw_url = rewrite_url(
                $url_processor->get_raw_url(),
                $parsed_matched_url,
                $parsed_current_site_url,
                $parsed_new_site_url
            );
            $url_processor->set_raw_url( $updated_raw_url );
        }

        $updated_text = $url_processor->get_updated_html();
        if ($updated_text !== $text) {
            $xml_processor->set_modifiable_text($updated_text);
        }
    }
    echo $xml_processor->get_processed_xml();
}
echo $xml_processor->get_unprocessed_xml();
```