add integration test & fixtures setup for more confidence #1
Conversation
I don't think we need that much new test setup to do E2E tests. Prism already has something similar, e.g. this markup test exercises a plugin included in markup that adds an attribute to all entities. So we can do E2E testing already, but the test format is really inconvenient. My idea would be to add a "new" test format. The "new" format would be exactly the same as the current one, … This would also work for you, right?
@RunDevelopment agreed... I think, however, that we're missing an opportunity here to stop re-inventing this wheel and move away from Prism's hand-rolled format. I wonder if the extra maintenance work on the custom runner is worth it when we could move to the new community standard that is easy to work with. When I first saw the … Additionally:
I also like the idea of keeping the existing tokenizer tests (and only adding new e2e tests as needed), so staying in the …
I get what you mean. Snapshots are pretty much an industry standard nowadays and make it easy to write tests. However, our format has a few advantages over snapshots, specifically in our domain:
How would snapshots solve this?
Sorry, that's not what I was referring to. I meant the "don't have to worry about line endings" part. I don't see how any snapshot solution would let us choose the line endings if the code is stored in JS files. Also, I think that the problem with GitHub Desktop is something GitHub Desktop should fix. GitHub and VSCode (the tools I use) have no problem with this.
This whole test suite was designed so we don't have to do steps 1 and 2 for every test... Also, we want most tests to only consider the token stream and not the HTML string. The token stream is a lot more compact and readable. Humans (e.g. me) have to be able to understand the token stream, and that's not easy when going through a bunch of HTML.
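(For context, a hedged sketch of what such a token-stream test can look like, reconstructed from memory of Prism's hand-rolled language-test format; the exact separator and simplification rules may differ. A test file holds the input code, a separator line, and the expected simplified token stream:)

```
var foo = 42;

----------------------------------------------------

[
	["keyword", "var"],
	" foo ",
	["operator", "="],
	["number", "42"],
	["punctuation", ";"]
]
```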
I agree; like I said here, I think keeping the tokenizer tests around for quickly verifying the basic token stream is a totally valid use case. My point is that this style of e2e test makes it easy to contribute/verify expected behaviour for the entire system, and you would only add these tests as needed, when you want to verify that the entire produced HTML string is what you expect and are willing to check it in its entirety. This will of course be the exception, but I think it really helps those who are unfamiliar with the codebase see how everything comes together, and makes it easy to contribute without worrying about breaking things for others who have put in the time to write these kinds of tests. EDIT:
Just to state it explicitly: the produced HTML code is not stored in JS files, if that's what you were referring to? :)
Doesn't …
Ah sorry, I missed that part. I thought you wanted to replace all language tests with snapshots. But this opens another question: do you intend this to replace the current …? If possible, I would like to replace the current …
No, I was talking about the input code. Escaping is annoying, but there's also a more pressing issue: we have to be able to get the input code for the analyzer that detects exponential backtracking. The input code in these test files might be the only one covering some code path that uses a regex with exponential backtracking. If we miss even one of them, it's another CVE. Being able to access the input code is also important for another PR that hasn't been merged yet. It will essentially do code coverage based on our language tests. (Sadly, Istanbul doesn't really work with Prism. Istanbul doesn't know which source files are being executed because of ….) Could there be an easy way for other tests to get the input code?
Oof, you're right. Can't fault GitHub Desktop for that one.
(force-pushed from bbf83bc to 078491a)
(force-pushed from e580a33 to 6fbafae)
clean up test setup with helpers
add more tests to show more aliases and plugin work
add input type def test
@RunDevelopment I would happily update all the other .js tests to this format in a separate PR (or PRs) if we are good to move forward with this -- I really think it's so much easier to work with and so much less setup than the existing … To address the issue of being able to access the input code (that's a great consideration, thank you for pointing it out, as I wasn't aware): I think we could use fixture files which export arrays of examples and generate the tests that way. We can then share the fixtures with the other instrumentation that needs it. I'll put up another commit right here to show what I mean.
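(To make the fixture idea concrete, a minimal sketch; the file names, paths, and the snap-shot-it helper are assumptions, not what's in this branch. The fixture module exports the raw input code so other tooling, such as the exponential-backtracking analyzer, can import the same examples; the test file just loops over them and snapshots the highlighted HTML:)

```js
// tests/fixtures/graphql.js (hypothetical path)
// Plain data module: the raw input code stays importable by other tools
// (e.g. the backtracking analyzer or coverage instrumentation).
module.exports = [
	{
		name: 'basic query',
		language: 'graphql',
		code: 'query { hero { name } }',
	},
	{
		name: 'mutation with variables',
		language: 'graphql',
		code: 'mutation ($id: ID!) { deleteHero(id: $id) }',
	},
];
```

```js
// tests/integration/graphql.test.js (hypothetical path)
const snapshot = require('snap-shot-it');
const Prism = require('prismjs'); // assumes the published package; in-repo it would require the local build
const loadLanguages = require('prismjs/components/');
const fixtures = require('../fixtures/graphql');

loadLanguages(['graphql']);

describe('graphql integration', function () {
	for (const { name, language, code } of fixtures) {
		it(name, function () {
			// Snapshot the full highlighted HTML string, not just the token stream.
			snapshot(Prism.highlight(code, Prism.languages[language], language));
		});
	}
});
```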
@RunDevelopment I've pushed a couple of commits here, one with the proposed shareable fixtures and a couple more to fix up the tokens and the tokenizer hook. I think the tip of this branch will be what I'll be going forward with. In addition to the reasoning provided in this comment, I think the most valuable thing about snapshot testing here is the diffable HTML formatting, which makes parsing the snapshots while working on a failing set of cases much easier. I found it very difficult to parse the formatting in the existing … I think the use of template strings in the shareable fixtures addresses concerns 1 and 2 mentioned here. I think the other points do come down to personal preference, but again, since snapshot adoption is so widespread, I'd say the majority would be in favour of snapshots over the custom format. Since I am suggesting this format for tests on an as-needed basis only, e.g. to help with more complex cases like this one, I think the lower maintenance overhead is a big win going forward.
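(On the "diffable" point: the idea is that the snapshot stores the highlighted HTML pretty-printed, roughly one token per line, so a failing diff points at the exact token that changed. A hedged illustration of what such a formatted snapshot fragment could look like; the exact markup depends on the language and the formatter used:)

```html
<span class="token keyword">var</span> foo
<span class="token operator">=</span>
<span class="token number">42</span>
<span class="token punctuation">;</span>
```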
A huge plus for the diffable HTML from me.
Agreed.
Snapshots are popular because they are a very simple solution to a problem: storing the expected result of a test. It doesn't matter how you do it as long as it's convenient and simple for developers to use. I was thinking about this yesterday and I want to hear your thoughts. First of all, I think a category of E2E tests makes sense for Prism, but:
Secondly, since the test data is now in fixture files, wouldn't it make sense to simply change our current …?
That's fair. I'm not too hung up on what they're called; it was just important to me to get all the components I'm using in this example wired up in an easy-to-work-with way (fast feedback with mocha's …
The reason for this is [somewhat]:
I would be interested in a way to test all registered hooks... so that it is truly "end to end". In a separate PR, though.
Are we? https://prismjs.com/examples/prism-graphql.html doesn't render the code properly with any theme.
Yup, that's a problem. Ideally, we could have a file structure like this:
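(The file tree from the original comment didn't survive; purely as a hypothetical reconstruction of the kind of layout being discussed, with snapshots living next to the tests that produce them:)

```
tests/
├── languages/               # existing token-stream tests, unchanged
└── integration/
    ├── fixtures/
    │   └── graphql.js       # shared input code
    ├── graphql.test.js
    └── __snapshots__/
        └── graphql.test.js.snap
```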
Is there really no way for us to control where the snap file lives?
But this organization also divides our tests into two kinds even though it's not strictly necessary. We will also have to create a new file for each fixture to run it, right? Also, these HTML tests are not exclusively used to test "the integration between components"; they are also used to test single components.
All files in the … I was talking about our plugin pages. Each plugin has a page with examples. These examples typically showcase and test all features. Example. Languages aren't E2E-tested because it's not typically necessary. As I said, they don't use the DOM, only some even use hooks, and the ones that do use hooks typically only use the hooks within …
Unfortunately, no, there is no way I've found to determine where the snap file lives or to create a new snap file for each assertion. I think the tradeoff is worth it, even though the tests have to be "divided". I think it simplifies the original test suite, being able to just run the … I think the rest of your comments are something we can discuss in follow-up PRs. I've renamed the e2e misnomer now :) -- do you think this is an acceptable pattern & tradeoff for ease of use?
Could you elaborate on this? Do you mean the other …
But that means that to add a single test, I now have to create 2 files. Maybe an example: let's say I have a language like JavaScript. I fix a bug, add one test using your system, and make a PR (PR1). Then, while PR1 is still open (review takes a while), I add a new feature to JS, add a test for that feature using your system, and make a PR (PR2).
These tests aren't supposed to test the tokenizer but the languages. We have tests specifically for the tokenization process as well.
But this is also true for the current system, is it not?
Yes, we also have …
That's what I'm suggesting can be fixed (replaced) with this. Now it's easy to have a test that's straightforward to work with. The effort of adding 2 files per test file (or combination, say …
I don't think it's as easy to use as this, especially if you want to test the "final" (…) … Resolving merge conflicts in snapshot files is usually (>95% of the time) easy. The process is handled for you by simply rebasing PR1 on PR2 (or PR2 on PR1 (master) if PR1 is merged first) and running …
I opened PrismJS#2939 so that we can discuss these changes separately from the language changes underneath. :)
@RunDevelopment I initially thought about keeping the tests I had written in PrismJS#2927 passing in their current form, without removing the workaround code I had for the plugins, but then I thought I could take it a step further and have "end to end" tests for the entire Prism lifecycle...
Curious what you think of this and whether it would be worth exploring separately? I would love to have something like this for full confidence on top of all the great unit tests Prism already has; however, this rabbit hole is kind of endless in terms of combinations of plugins and languages, so it could get out of hand...
I think it would be reasonable to test one language at a time like this, with a folder structure in the e2e folder that makes it clear which plugins and other components you wanted to test for full confidence. What do you think?