Discuss conformance testing plan #24

Closed
caniszczyk opened this issue Sep 13, 2018 · 25 comments

Comments

@caniszczyk
Contributor

caniszczyk commented Sep 13, 2018

https://groups.google.com/a/opencontainers.org/forum/#!topic/dev/jL5yhEIv3Ac

@caniszczyk changed the title from "Discuss conformance testing" to "Discuss conformance testing plan" on Sep 13, 2018
@dongsupark
Contributor

I have listed the HTTP endpoints to be tested for distribution certification.
It is essentially a simplified version of the current distribution spec.

For example, the test can be started like this:

make test URL=https://my-repositories.example.com

===

  1. API version check
  • A simple API version check (a Go sketch of this check appears after the list)
  • Input: GET /v2/
  • Output:
    • expected:
      • status: 200
      • response header: Docker-Distribution-API-Version: registry/2.0
    • error:
      • status: 201, 401, 404
  2. Pulling an image
  • Pulling an image manifest
  • Input: GET /v2/<name>/manifests/<reference>
  • Output:
    • expected:
      • status: 200
      • response data: defined in the spec
    • error:
      • status: 404
  • Checking existing manifests
  • Input: HEAD /v2/<name>/manifests/<reference>
  • Output:
    • expected:
      • status: 200
      • response data: defined in the spec
    • error:
      • status: 404
  • Pulling a layer
  • Input: GET /v2/<name>/blobs/<digest>
  • Output:
    • expected:
      • status: 200
      • response data: defined in the spec
    • error:
      • status: 307
  3. Pushing an image
  • Starting an upload and uploading the layer
  • Input: POST /v2/<name>/blobs/uploads/
  • Output:
    • expected:
      • status: 202
      • response data: defined in the spec
  • Checking existing layers
  • Input: HEAD /v2/<name>/blobs/<digest>
  • Output:
    • expected:
      • status: 200
  • Checking upload progress
  • Input: GET /v2/<name>/blobs/uploads/<uuid>
  • Output:
    • expected:
      • status: 200
      • response data: defined in the spec
    • error:
      • status: 204
  • Monolithic upload
  • Input: PUT /v2/<name>/blobs/uploads/<uuid>?digest=<digest>
  • Output:
    • expected:
      • status: 202
    • error:
      • status: 404
  • Chunked upload
  • Input: PATCH /v2/<name>/blobs/uploads/<uuid>
  • Output:
    • expected:
      • status: 202
    • error:
      • status: 416
  • Completed upload
  • Input: PUT /v2/<name>/blobs/uploads/<uuid>?digest=<digest>
  • Output:
    • expected:
      • status: 201
  • Canceling an upload
  • Input: DELETE /v2/<name>/blobs/uploads/<uuid>
  • Output:
    • expected:
      • status:
  • Cross-repository blob mount
  • Input: POST /v2/<name>/blobs/uploads/?mount=<digest>&from=<repository name>
  • Output:
    • expected:
      • status: 201
    • error:
      • status: 202, fall back to the standard upload
  • Deleting a layer
  • Input: DELETE /v2/<name>/blobs/<digest>
  • Output:
    • expected:
      • status: 202
    • error:
      • status: 404
  • Pushing an image manifest
  • Input: PUT /v2/<name>/manifests/<reference>
  • Output:
    • expected:
      • status:
    • error:
      • status: 4xx
  4. Listing repositories
  • A simple listing
  • Input: GET /v2/_catalog
  • Output:
    • expected:
      • status: 200
  • Listing image tags
  • Input: GET /v2/<name>/tags/list
  • Output:
    • expected:
      • status: 200
  • Deleting an image
  • Input: DELETE /v2/<name>/manifests/<reference>
  • Output:
    • expected:
      • status: 202
    • error:
      • status: 404
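
As a rough illustration of how the first check could look in practice, here is a minimal sketch in Go. The package layout, test-harness shape, and `URL` environment variable are assumptions for illustration, not part of the proposal:

```go
// A minimal sketch of the API version check above.
package conformance

import (
	"net/http"
	"os"
	"testing"
)

func TestAPIVersionCheck(t *testing.T) {
	base := os.Getenv("URL") // e.g. https://my-repositories.example.com
	if base == "" {
		t.Skip("URL not set")
	}

	resp, err := http.Get(base + "/v2/")
	if err != nil {
		t.Fatalf("GET /v2/ failed: %v", err)
	}
	defer resp.Body.Close()

	// Expected: 200. Statuses such as 201, 401, or 404 are failures here.
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("expected 200, got %d", resp.StatusCode)
	}
	if got := resp.Header.Get("Docker-Distribution-API-Version"); got != "registry/2.0" {
		t.Errorf("unexpected API version header: %q", got)
	}
}
```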

@caniszczyk
Contributor Author

Looks like a good start to me. RFC: @opencontainers/distribution-spec-maintainers

@dmcgowan
Member

Agreed, looks like a good starting point.

@amouat
Contributor

amouat commented Oct 1, 2018 via email

@mikebrow
Member

mikebrow commented Oct 1, 2018

@amouat "The 4xx class of status code is intended for cases in which the client seems to have erred." Such as requesting an action that can't take place in the server because the blob does not exist. It's not an error for the server to return it; it's a report to the client that the action can't take place.

@dongsupark
Contributor

Recently I tried to create a proof-of-concept for the conformance tests. Please see https://github.com/kinvolk/ocicert/tree/dongsu/initial-poc
(Note: it's still ugly; I might have to rewrite it in the future.)

I have also tried to figure out the most neutral approach to conformance tests for the distribution-spec. At the moment, however, the only viable approach seems to be relying on real-world implementations such as Docker registry v2.

@dmcgowan I have some questions about your initial comment.

As for the implementation of these tests, we can discuss that after we have a list of what to test. The most registry-specific part will be related to authentication. A simple way to handle this could just be to have the test harness shell out to a binary given the endpoint and 401 authenticate header, and return the authorization header to use. We could have an implementation of this which could read the credentials from a docker config to handle 99% of the registries which require auth.

Can you please give me some examples of that?
What I have been considering is a simple authentication server like https://github.com/cesanta/docker_auth, plus a testing tool that connects to the auth server and to an actual distribution endpoint, like https://github.com/kinvolk/ocicert/tree/dongsu/initial-poc.
Is that what you mean?
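
For what it's worth, the handshake @dmcgowan describes boils down to the well-known Bearer-token flow: the registry answers 401 with a WWW-Authenticate challenge, the client fetches a token from the named realm, and retries with an Authorization header. A simplified sketch follows; the helper name is hypothetical, and the parsing ignores quoting and URL-escaping edge cases:

```go
// A simplified sketch of the Bearer-token handshake used by Docker
// registry v2 style registries.
package conformance

import (
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

// fetchBearerToken takes the WWW-Authenticate value from a 401 response,
// e.g. Bearer realm="https://auth.example.com/token",service="registry",scope="repository:foo:pull",
// requests a token from the realm, and returns an Authorization header value.
func fetchBearerToken(challenge string) (string, error) {
	params := map[string]string{}
	for _, kv := range strings.Split(strings.TrimPrefix(challenge, "Bearer "), ",") {
		if parts := strings.SplitN(kv, "=", 2); len(parts) == 2 {
			params[strings.TrimSpace(parts[0])] = strings.Trim(parts[1], `"`)
		}
	}

	// Credentials, if required, could be attached here via basic auth.
	url := fmt.Sprintf("%s?service=%s&scope=%s", params["realm"], params["service"], params["scope"])
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var body struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return "", err
	}
	return "Bearer " + body.Token, nil
}
```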

@jzelinskie
Member

I think we should agree on or write a reference implementation of a distribution library that then gets used to test these endpoints, rather than implementing something from scratch for testing.

There are a bunch of different places we could start:

After we have a process for this, it would be neat to have a list of compliant registries that get validated via a CI pipeline on this repository.

@vbatts
Member

vbatts commented Oct 11, 2018

Completely agree with jimmyZ

@caniszczyk
Contributor Author

caniszczyk commented Oct 11, 2018 via email

@dongsupark
Contributor

dongsupark commented Oct 12, 2018

@jzelinskie Thanks for the suggestion.
I completely agree that we need a common distribution library.

oci-fetch looks like a tool that does what I wanted to do.
However, it appears quite outdated: I tested it against docker.io and quay.io, and it didn't work as expected. There was also a past discussion about rewriting it based on containers/image, but apparently no one has done that yet.

Instead, I would rather create a new tool that depends on containers/image to run the tests.

@vbatts
Member

vbatts commented Oct 15, 2018

@dmcgowan looks like a question to you here #24 (comment)

@vbatts
Member

vbatts commented Oct 15, 2018

@dongsupark that seems like a fine approach. It is intended to be common code.

@dongsupark
Contributor

Short update:

Recently I created simple tests based on containers/image. The tests use the API provided by the containers/image library; for that, I created a PR to expose part of its internal library functions.

However, some folks were not excited about exposing internal functions, or about using the internal API of containers/image for the distribution-spec tests. One idea is to communicate directly with the HTTP layer; in that case, containers/image would be used only for authentication.

Is that OK?

@runcom
Member

runcom commented Nov 1, 2018

I believe we need to state clearly what we're testing and how.

  1. If we are testing a registry's endpoints for conformance, any HTTP client (e.g. curl, wget) would be enough.
  2. If we use containers/image or docker/distribution, then the conformance verification should just use the APIs those libraries define.

The difference may seem small, but these are two completely different approaches.

For conformance testing of the spec itself, I believe a test suite that just uses tools like curl would be enough.
If we are instead testing the interoperability of said libraries, then we need a test matrix that exercises them using only their public APIs (that is, the public Go APIs the libraries expose, not the raw HTTP clients they use internally).
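
To make the first option concrete, here is a hedged sketch of a plain net/http test for the blob upload flow from the endpoint list earlier in the thread. The registry URL, repository name, and blob are placeholders, and auth is omitted:

```go
// A sketch of option 1: exercising the blob upload flow with nothing
// but net/http.
package conformance

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"net/http"
	"strings"
	"testing"
)

func TestMonolithicBlobUpload(t *testing.T) {
	base := "https://my-repositories.example.com" // placeholder
	blob := []byte("hello world")
	digest := fmt.Sprintf("sha256:%x", sha256.Sum256(blob))

	// Start an upload: POST /v2/<name>/blobs/uploads/ should answer 202
	// with a Location header for the upload session.
	resp, err := http.Post(base+"/v2/myrepo/blobs/uploads/", "", nil)
	if err != nil {
		t.Fatal(err)
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusAccepted {
		t.Fatalf("expected 202, got %d", resp.StatusCode)
	}

	// Complete the upload: PUT <location>?digest=<digest> should answer 201.
	// (A real harness would also resolve relative Location values.)
	loc := resp.Header.Get("Location")
	sep := "?"
	if strings.Contains(loc, "?") {
		sep = "&"
	}
	req, err := http.NewRequest(http.MethodPut, loc+sep+"digest="+digest, bytes.NewReader(blob))
	if err != nil {
		t.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/octet-stream")
	put, err := http.DefaultClient.Do(req)
	if err != nil {
		t.Fatal(err)
	}
	put.Body.Close()
	if put.StatusCode != http.StatusCreated {
		t.Fatalf("expected 201, got %d", put.StatusCode)
	}
}
```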

@runcom
Member

runcom commented Nov 1, 2018

I guess what I'm trying to say is:

  1. If we are testing that a registry is conformant, then curl or any HTTP library (like Go's) works fine; you could set up something like the image validation in image-tools, or use Ginkgo the way the cri-tools project does to validate that a given runtime conforms to the CRI spec (a skeletal example appears below).
  2. If we are making sure that a registry implementing the distribution spec can be used with known libraries, then we need a matrix of those libraries and, more importantly, we need to use their public APIs, not their internal HTTP clients/implementations.

I remember being asked, during the runtime-spec conformance work, to create something to check for conformance. It didn't go through runc: runc was what was being validated, and runtime-tools was the way to check it. (I don't remember how that ended up, though; I didn't end up working on it.)
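
Here is a skeletal example of the Ginkgo style mentioned in point 1, assuming Ginkgo v1 and Gomega; the registry URL is a placeholder and only the version check is shown:

```go
// A skeletal Ginkgo/Gomega suite in the cri-tools style.
package conformance

import (
	"net/http"
	"testing"

	"github.com/onsi/ginkgo"
	"github.com/onsi/gomega"
)

// TestConformance wires the Ginkgo suite into `go test`.
func TestConformance(t *testing.T) {
	gomega.RegisterFailHandler(ginkgo.Fail)
	ginkgo.RunSpecs(t, "distribution-spec conformance")
}

var _ = ginkgo.Describe("API version check", func() {
	ginkgo.It("answers GET /v2/ with 200", func() {
		resp, err := http.Get("https://my-repositories.example.com/v2/") // placeholder
		gomega.Expect(err).NotTo(gomega.HaveOccurred())
		defer resp.Body.Close()
		gomega.Expect(resp.StatusCode).To(gomega.Equal(http.StatusOK))
	})
})
```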

@runcom
Member

runcom commented Nov 1, 2018

I also think the second point is out of scope here, but it's certainly something those libraries could and should check.

@dongsupark
Contributor

@runcom
The first one, testing a registry's endpoints for conformance, is what we wanted in the first place. In the middle of the conversation, though, some folks suggested reusing libraries like containers/image; that's why I started on the second one as well, to cover both options.

If you think the second point is out of scope, then I'm fine with going with option 1. We would then need to open a separate issue for the second point.

@runcom
Member

runcom commented Nov 1, 2018

Well, I'm not saying we have to go with 1, but I suspect that's what we need for this issue; as you said, 2 could follow.

@vbatts wdyt?

@vbatts
Member

vbatts commented Nov 8, 2018

1 and 2 are good distinctions. Validating a server's HTTP API is the first thing to check, and it would be worth considering a very minimal PoC server for validating client libraries against.

I agree that containers/image and many others have ways of talking to a registry, though for a conformance test they might be overkill. In the meantime, curl or even Python would be fine. Go too. Just a test suite to run against that HTTP endpoint. The challenge I see is authentication: many of these services may be proprietary and need some auth to access all these functions. So, whatever people's opinions on how auth should be handled, it must be handled for a conformance test.
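
On the auth point, one low-friction approach mentioned earlier in the thread is reading credentials from the user's Docker config. A minimal sketch, assuming the usual ~/.docker/config.json layout (base64 "user:pass" under auths.<registry>.auth); the helper name is hypothetical:

```go
// A sketch of reading registry credentials from ~/.docker/config.json.
package conformance

import (
	"encoding/base64"
	"encoding/json"
	"os"
	"path/filepath"
	"strings"
)

type dockerConfig struct {
	Auths map[string]struct {
		Auth string `json:"auth"`
	} `json:"auths"`
}

// credentialsFor returns the username and password for a registry host,
// or empty strings if no entry exists.
func credentialsFor(registry string) (string, string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return "", "", err
	}
	raw, err := os.ReadFile(filepath.Join(home, ".docker", "config.json"))
	if err != nil {
		return "", "", err
	}
	var cfg dockerConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		return "", "", err
	}
	entry, ok := cfg.Auths[registry]
	if !ok || entry.Auth == "" {
		return "", "", nil
	}
	decoded, err := base64.StdEncoding.DecodeString(entry.Auth)
	if err != nil {
		return "", "", err
	}
	if parts := strings.SplitN(string(decoded), ":", 2); len(parts) == 2 {
		return parts[0], parts[1], nil
	}
	return "", "", nil
}
```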

@dongsupark
Contributor

Following suggestions from @runcom and @vbatts, I took a different approach for the tests. They no longer depend on libraries such as containers/image; they talk directly to the HTTP endpoint. I wrote an independent part for authentication as well.

Please see kinvolk-archives/ocicert#1 for details, though it may still be incomplete.

@vbatts
Member

vbatts commented Dec 5, 2018

Discussed in the meeting today. We should set up dedicated time to discuss this.

@rchincha
Contributor

Hi, I thought it might be useful to bring up zot [1]; we have added a compliance suite [3] as per the comments in [2]. Please note that the dist-spec itself is not finalized yet.

[1] https://github.com/anuvu/zot
[2] https://hackmd.io/El8Dd2xrTlCaCG59ns5cwg#October-2-2019
[3] https://github.com/anuvu/zot#compliance-checks

@vbatts
Member

vbatts commented Dec 16, 2019

I have merged the last of @dongsupark's tests.
Also, @jdolitsky is working on this now.

pmengelbert added a commit to bloodorangeio/distribution-spec that referenced this issue Jan 16, 2020
Added new conformance directory in the project root, with a number of
test files written in Go. Tests can be compiled by running `go test -c`
in the conformance directory and executing the created conformance.test
file.

In order for the tests to run, registry providers will need to set
certain environment variables with the root URL, the namespace of a
repository, and authentication information. Additionally, the OCI_DEBUG
variable can be set to "true" for more detailed output.

The tests create two report files: report.html and junit.xml. The HTML
report is expandable if more detailed information is needed on failures.

Related to opencontainers#24
@jdolitsky
Member

I think it's fair to close this: conformance tests have been introduced in master, and discussions are ongoing outside this issue.

@dmcgowan
Member

dmcgowan commented Mar 4, 2020

Thanks @jdolitsky!

@dmcgowan dmcgowan closed this as completed Mar 4, 2020