
[Question/Proposal] Adding tests for invalid JSON schemas #623

Closed
DrDeano opened this issue Nov 30, 2022 · 6 comments

@DrDeano

DrDeano commented Nov 30, 2022

Currently, all test files contain valid schemas that are within the specification. For some keywords, the specification imposes MUST constraints; for example, the value of minItems must be a non-negative integer.

Could there be tests for invalid schemas to ensure these constraints are enforced, for example {"minItems": -1}, checking that the implementation recognizes the schema as invalid?

Proposed starting format for discussion, adding a valid_schema key:

[
    {
        "description": "minItems validation",
        "schema": {"minItems": 1},
        "valid_schema": true,
        "tests": [
            {
                "description": "longer is valid",
                "data": [1, 2],
                "valid": true
            }
        ]
    },
    {
        "description": "minItems invalid schema",
        "schema": {"minItems": -1},
        "valid_schema": false,
        "tests": []
    }
]

I'm trying not to change the existing format too much so as not to break existing tests.

@gregsdennis
Member

We do have some tests that use the meta-schema to validate the schema (e.g. 2020-12's $defs tests).

This is actually the preferred way to do this because some implementations can't deserialize invalid-schema JSON into their schema models (because they're invalid).

I think these would be a welcome addition to the test suite, but before you go about writing a bunch of tests, let some other core members chime in.
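
For illustration, that style of test expresses the check by validating the schema, as data, against the meta-schema itself -- roughly like this (a sketch paraphrasing the existing 2020-12 $defs tests, not quoting them exactly):

[
    {
        "description": "validate definition against metaschema",
        "schema": {"$ref": "https://json-schema.org/draft/2020-12/schema"},
        "tests": [
            {
                "description": "valid definition schema",
                "data": {"$defs": {"foo": {"type": "integer"}}},
                "valid": true
            },
            {
                "description": "invalid definition schema",
                "data": {"$defs": {"foo": {"type": 1}}},
                "valid": false
            }
        ]
    }
]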

@Julian
Member

Julian commented Dec 1, 2022

Yeah, there are two classes of issues here -- validating a schema against the metaschema is something we already do in many places (and there's an open PR with more, #354, but it needs extracting into smaller PRs -- help welcome).

Testing schemas which are entirely invalid, or invalid in ways not covered by the metaschema, is a second category (#581 is relevant there, as is #589). The issue isn't really coming up with test schemas for these kinds of tests -- doing so is fairly easy. Rather, introducing them alongside the existing tests is a big backwards-incompatible change, so the real "issue", if there is one, is doing this the same way the current output test PR (#619) does -- i.e. creating a side-by-side folder with tests and adding these there.

I support it though, and there are plenty we could add. My concrete recommendation would be to start by helping break up #354, which covers tests that already fit the current scope, but if you want to head straight for the other kind, that would clearly also be helpful.

@DrDeano
Author

DrDeano commented Dec 1, 2022

Thank you for the clarification. 🚀 I would be happy to help out on #354. Do you have a vision for breaking this up? Would it be a PR for each keyword? Could I suggest the meta-schema tests be added in a separate file rather than as part of the main keyword test files, as I haven't implemented $ref yet? For example, adding a meta/ folder or meta-*.json files, as discussed in the #354 PR.

@awwright
Member

awwright commented Dec 1, 2022

Yeah, I think we definitely need tests for "parsing JSON Schema documents" and not just "validating instances against a schema". There are at least a few that we can add right now, like the examples that you gave, and other recognized keyword names with undefined values/arguments.
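
Both of those cases could, for instance, be expressed in the meta-schema style mentioned above -- a sketch, with illustrative descriptions rather than text from any existing test file:

[
    {
        "description": "keyword values constrained by the metaschema",
        "schema": {"$ref": "https://json-schema.org/draft/2020-12/schema"},
        "tests": [
            {
                "description": "negative minItems is not a valid schema",
                "data": {"minItems": -1},
                "valid": false
            },
            {
                "description": "numeric type value is not a valid schema",
                "data": {"type": 12},
                "valid": false
            }
        ]
    }
]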

In theory, an implementation that passes all of the validity tests would also pass schema valid/invalid tests, but in practice implementations often handle schemas and instances differently; and meta-schemas only cover special cases of validating schemas -- they can't really test this generally.

Finally, there are some schemas that authors MUST NOT write, but this doesn't always mean the validator must produce an error. And there are some schemas that ought to be invalid, but erroring isn't actually required. What qualifies as a valid or invalid schema is not completely defined, so I am paying a lot of attention to this and trying to figure out what is needed from the spec itself.

@Julian
Member

Julian commented Dec 1, 2022

Do you have a vision for breaking this up? Would it be a PR for each keyword?

I think the only "major" issue we had in the PR was its size, so as long as you have some way of making them small, anything's fine with me. Per keyword is fine.

Could I suggest the meta-schema tests be added in a separate file rather than as part of the main keyword test files, as I haven't implemented $ref yet?

To be clear, we already have tests of this type, and they're not separated -- they're mixed in with the keyword they're testing (it's just that they test the metaschema's treatment of the keyword).

Adding new subfolders like meta/ within the tests/ folder is a breaking change: it requires every downstream implementation using the suite to notice the change and figure out how to modify their test runners to deal with the new folder. So I'm usually not very keen on that idea unless it has a lot of justification -- my suspicion is always that this is a key reason implementations don't keep up with the suite.

I'm personally more open to adding meta-*.json files, but since we already have a bunch of these tests and they sit alongside the existing ones, I don't think we should mix styles in how we write them -- we should either stick to what we have and decide afterward whether to move every meta-schema-related test to a separate file, or else move all the ones we have first, merge that, and then proceed with adding new tests.

Either way, though, I'm not sure how you're going to be able to run them (even the ones that already exist) if you haven't implemented $ref support. You should probably skip those tests in your runner until you do, I guess.

@Julian
Member

Julian commented Dec 12, 2022

Going to close this as effectively duplicating #589 and/or #354 / #244, but if that's wrong, or you have other thoughts (OP or anyone), obviously follow up!

Julian closed this as not planned on Dec 12, 2022.