
Add an API concepts document and describe terminology and API chunking #6540

Merged (2 commits into kubernetes:master, Jan 25, 2018)

Conversation

Contributor

@smarterclayton smarterclayton commented Dec 2, 2017

Create a new API reference page that covers some high level topics

  • terminology, paths, verbs, watching resources, chunking, and content type negotiation.

Needed a place to define chunking, and we don't have a doc for it.

@kubernetes/sig-api-machinery-api-reviews

Covers kubernetes/enhancements#365



@k8s-ci-robot k8s-ci-robot added sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Dec 2, 2017
@smarterclayton
Contributor Author

@kubernetes/api-reviewers: a new high-level API doc for the public docs, so I had someplace to document chunking.

@smarterclayton smarterclayton added this to the 1.9 milestone Dec 2, 2017
@k8sio-netlify-preview-bot
Collaborator

k8sio-netlify-preview-bot commented Dec 2, 2017

Deploy preview for kubernetes-io-master-staging ready!

Built with commit 1c7d896

https://deploy-preview-6540--kubernetes-io-master-staging.netlify.com

Member

@justinsb justinsb left a comment

Minor nits / feedback, but LGTM

* A list of instances of a resource type is known as a **collection**
* A single instance of the resource type is called a **resource**

All resource types are either scoped by the cluster (`/apis/GROUP/VERSION/*`) or to a namespace (`/apis/GROUP/VERSION/NAMESPACE/*`). A namespace-scoped resource type will be deleted when its namespace is deleted and access to that resource type is controlled by authorization checks on the namespace scope. The following paths are used to retrieve collections and resources:
Member

namespaces/<namespace>/, I think?
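The path scheme under review here can be sketched as a pair of tiny helpers (hypothetical names; real clients such as client-go build these paths internally, and note the legacy core group uses the `/api/v1/...` prefix instead of `/apis/...`):

```python
def collection_path(group, version, resource_type, namespace=None):
    """Collection URL: cluster-scoped unless a namespace is given."""
    if namespace is None:
        return f"/apis/{group}/{version}/{resource_type}"
    return f"/apis/{group}/{version}/namespaces/{namespace}/{resource_type}"

def resource_path(group, version, resource_type, name, namespace=None):
    """URL for a single named instance of the resource type."""
    return f"{collection_path(group, version, resource_type, namespace)}/{name}"
```

For instance, `resource_path("apps", "v1", "deployments", "web", namespace="default")` yields `/apis/apps/v1/namespaces/default/deployments/web`.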

"items": [...] // returns pods 1001-1253
}

Note that the `resourceVersion` of the list remains constant across each request, indicating the server is showing us a consistent snapshot of the pods. Pods that are created, updated, or deleted after version `10245` would not be shown unless the user makes a list request without the `continue` token. This allows clients to break large requests into smaller chunks and t hen perform a watch operation on the full set without missing any updates.
Member

@justinsb justinsb Dec 2, 2017

Typo: extra space in then ("and t hen perform a watch operation")


### Protobuf encoding

Kubernetes uses an envelope wrapper to encode Protobuf responses, since unlike JSON a Protobuf message is not self-describing (the client cannot determine the type).
Member

The point you're trying to make here didn't immediately click with me.

Out of interest, why didn't we just require that TypeMeta always be field 1?

Contributor Author

It is by convention today, but we wanted to make it easy for a client to sniff content type as well for things on disk or in etcd. Proto doesn't require field 1 to be at the beginning of a serialized object (at least, pretty sure it doesn't) so we wouldn't have been able to sniff for it.

Contributor Author

Oh, we also wanted to support gzipping in the future.

```

Clients that receive a response in `application/vnd.kubernetes.protobuf` that does not match the expected prefix should reject the response since
future versions may need to alter the serialization format in an incompatible way, and will do so by changing the prefix.
Member

I'd hope we define a new MIME type instead!

Contributor Author

Wouldn't help on disk, unfortunately.
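For reference, the envelope being discussed starts with a four-byte magic prefix (`0x6b 0x38 0x73 0x00`, i.e. `k8s` plus a NUL byte), which is what lets a reader sniff the format on the wire, on disk, or in etcd. A minimal sketch of the client-side check (hypothetical function name):

```python
K8S_PROTOBUF_MAGIC = b"k8s\x00"  # bytes 0x6b 0x38 0x73 0x00

def unwrap_protobuf_payload(body: bytes) -> bytes:
    """Verify the envelope prefix and return the wrapped message bytes.

    An unrecognized prefix must be rejected: a future incompatible
    serialization would announce itself by changing this prefix.
    """
    if not body.startswith(K8S_PROTOBUF_MAGIC):
        raise ValueError("unrecognized application/vnd.kubernetes.protobuf prefix")
    return body[len(K8S_PROTOBUF_MAGIC):]
```

The remaining bytes are a protobuf-encoded wrapper carrying the TypeMeta ahead of the raw object, so a client can determine the type before decoding the payload; decoding that wrapper requires the Kubernetes proto schemas and is out of scope for this sketch.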

* `GET /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE` - return collection of all instances of the resource type in NAMESPACE
* `GET /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE/NAME` - return the instance of the resource type with NAME in NAMESPACE

Since a namespace is a cluster-scoped resource type, you can retrieve the list of all namespaces with `GET /api/v1/namespaces` and details about a particular namespace with `GET /api/v1/namespaces/name`.
Contributor

GET /api/v1/namespaces/name

GET /api/v1/namespaces/NAME


On large clusters, retrieving the collection of some resource types may result in very large responses that can impact the server and client. For instance, a cluster may have tens of thousands of pods, each of which is 1-2kb of encoded JSON. Retrieving all pods across all namespaces may result in a very large response (10-20MB) and consume a large amount of server resources. Starting in Kubernetes 1.9 the server supports the ability to break a single large collection request into many smaller chunks while preserving the consistency of the total request. Each chunk can be returned sequentially, which reduces the total size of each request and allows user-oriented clients to display results incrementally to improve responsiveness.

To retrieve a single list in chunks, two new parameters `limit` and `continue` are supported on collection requestns and a new field `continue` is returned from all list operations in the list `metadata` field. A client should specify the maximum results they wish to receive in each chunk with `limit` and the server will return up to `limit` resources in the result and include a `continue` value if there are more resources in the collection. The client can then pass this `continue` value to the server on the next request to instruct the server to return the next chunk of results. By continuing until the server returns an empty `continue` value the client can consume the full set of results.
Contributor

supported on collection requestns

supported on collection requests
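The loop described in this paragraph can be sketched against a fake in-memory server (a simplification: real `continue` tokens are opaque server-defined strings, not numeric offsets, and `fake_list` stands in for HTTP GETs):

```python
PODS = [f"pod-{i}" for i in range(1253)]   # matches the 1253-pod example
SNAPSHOT_RV = "10245"

def fake_list(limit, continue_token=None):
    """Simulate GET ...?limit=N&continue=TOKEN over a consistent snapshot."""
    start = int(continue_token) if continue_token else 0
    next_token = str(start + limit) if start + limit < len(PODS) else ""
    return {"metadata": {"resourceVersion": SNAPSHOT_RV, "continue": next_token},
            "items": PODS[start:start + limit]}

def list_in_chunks(limit=500):
    """Issue list requests until the server returns an empty continue value."""
    pages, token = [], None
    while True:
        page = fake_list(limit, token)
        pages.append(page)
        token = page["metadata"]["continue"]
        if not token:              # empty continue: the collection is complete
            return pages

pages = list_in_chunks()           # 3 chunks: 500 + 500 + 253 pods
```

Note that every chunk carries the same `resourceVersion`, which is the consistent-snapshot property the doc text relies on for a follow-up watch.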

Create a new API reference page that covers some high level topics
- terminology, paths, verbs, watching resources, chunking, and content type negotiation.
@smarterclayton
Contributor Author

Updated with nits fixed.

@smarterclayton
Contributor Author

Any other reviewers have comments?


## Efficient detection of changes

To enable clients to build a model of the current state of a cluster, all Kubernetes object resource types are required to support consistent lists and an incremental change notification feed called a **watch**. Every Kubernetes object has a `resourceVersion` field representing the version of that resource as stored in the underlying database. When retrieving a collection of resources (either namespace or cluster scoped), the response from the server will contain a `resourceVersion` value that can be used to initiate a watch against the server. The server will return all changes (creates, deletes, and updates) that occur after the supplied `resourceVersion`. This allows a client to fetch the current state and then watch for changes without missing any updates. If the client's watch is disconnected, they can restart a new watch from the last returned `resourceVersion`, or perform a new collection request and begin again.
Contributor

restart a new watch from the last returned resourceVersion

Ooh, is this official? 📜 👑
We've bumped into how to restart watch from same point in ruby kubeclient (ManageIQ/kubeclient#275) and I see python client too (kubernetes-client/python#124)...

So far all I read said resourceVersion is opaque and it wasn't clear they come from same timeline for different objects and their collection...
Unlike List that returns both collection resourceVersion and individual objects' versions, during watch you only see updates to individual objects' versions.

I see experimentally that the collection's resourceVersion as returned by List is de-facto same for all collections, increments on any change to object in any collection, and equals resourceVersion of that last changed object, so this would work.

So, can clients assume collection resourceVersion >= max(obj.resourceVersion for obj in collection)?
Can clients assume watching collection from max(obj.resourceVersion seen during previous watch) will yield same point watch stopped? (as long as that history is not lost...)

More things that would be great if docs explicitly allowed or disallowed:

  • Can clients take a collection version (from List) and use it on a single resource watch?
  • Can clients take version from one collection and use it on another collection?
  • Can clients do any kind of "greater than" comparisons between versions?
    Or only know that last string seen, over single watch, is semantically latest?

Member

Ooh, is this official?

Yes.

We've bumped into how to restart watch from same point

You can always attempt to restart from the last resourceVersion you received. If the api server no longer has enough history to let you start from that point, it will return a 410 error and you'll have to relist to get a new fresh resourceVersion

So far all I read said resourceVersion is opaque and it wasn't clear they come from same timeline for different objects and their collection...

That is correct. You should not make any assumptions about how two resourceVersions relate to each other.

I see experimentally that the collection's resourceVersion as returned by List is de-facto same for all collections, increments on any change to object in any collection, and equals resourceVersion of that last changed object, so this would work.

Things you can do:

  • Remember the last resourceVersion you received for a resource, and use it when re-establishing a watch for that resource
  • Compare the resourceVersion for an object retrieved at two different times to see if it is identical

Things you cannot do:

  • Assume the resourceVersion is numeric
  • Assume you can derive meaning from two different resourceVersions (e.g. "bigger number came later")
  • Compare resourceVersions between two resources, or between two resource types

So, can clients assume collection resourceVersion >= max(obj.resourceVersion for obj in collection)?

No

Can clients assume watching collection from max(obj.resourceVersion seen during previous watch) will yield same point watch stopped? (as long as that history is not lost...)

You should not take the max... that involves interpreting the resourceVersion. You should treat it as opaque and remember the most recent one received
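The "remember the most recent one received, not the max" rule can be made concrete with a small bookmark sketch (hypothetical class; client-go's Reflector keeps this state internally):

```python
class WatchBookmark:
    """Track the resume point for a watch, treating resourceVersion as opaque.

    Only equality is meaningful; never parse, order, or take max() of versions.
    """
    def __init__(self, initial):
        self.resource_version = initial   # from the initial list response

    def observe(self, event):
        # Remember the most recently *received* version, not the "largest".
        self.resource_version = event["object"]["metadata"]["resourceVersion"]

bm = WatchBookmark("10245")
for ev in [{"type": "MODIFIED", "object": {"metadata": {"resourceVersion": "10250"}}},
           {"type": "DELETED",  "object": {"metadata": {"resourceVersion": "9001"}}}]:
    bm.observe(ev)
# Resume from the last version seen ("9001" here), even though it is not the
# numeric maximum: versions are opaque tokens and need not be numeric at all.
```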

Member

@smarterclayton maybe you can incorporate some of the below, or do we have a canonical place for this to be documented to link to?

@cben, answers below.

So, can clients assume collection resourceVersion >= max(obj.resourceVersion for obj in collection)?

Per collection, this happens to be true today. However the only operation on resourceVersions that we're promising will work in the indefinite future is that of equality (==). One can imagine data storage techniques where resourceVersions aren't linear or aren't numeric. Please don't bake assumptions about resourceVersions into clients, that's not future proof.

Can clients assume watching collection from max(obj.resourceVersion seen during previous watch) will yield same point watch stopped? (as long as that history is not lost...)

Do not use max; use last seen. Then, yes, that is the way it is intended to function.

Can clients take a collection version (from List) and use it on a single resource watch?

Allowed.

Can clients take version from one collection and use it on another collection?

NO. Empirically it works... until it doesn't. The cluster administrator can choose to move resource types between different backends, which would cause this to stop being true. The default setup for Events already does this.

Can clients do any kind of "greater than" comparisons between versions?

NO.

Or only know that last string seen, over single watch, is semantically latest?

Probably not safe depending on what you want to do, in that there could be things you haven't yet seen over the watch.

Contributor

@cben cben Jan 10, 2018

Thanks! <3

Things you can do:

  • Remember the last resourceVersion you received for a resource, and use it when re-establishing a watch for that resource

Right, for watch on single resource this was always clear.

Just to emphasize, I'm asking about watch of a collection, and restarting it from the last version of an individual resource within that collection — which is all you get as you watch.

@Ladas Ladas Aug 22, 2018

@lavalamp hello, a quick question: so today, we can say for EntityA, if it has bigger resourceVersion, it is newer (due to etcd's modifiedIndex).

So for the future, will there always be some way to say which version of the EntityA is newer? Because we should have such attribute. I would guess that many applications have this need.

Member

@liggitt liggitt Aug 22, 2018

today, we can say for EntityA, if it has bigger resourceVersion, it is newer (due to etcd's modifiedIndex).

that is not guaranteed by the API

will there always be some way to say which version of the EntityA is newer?

there is creationTimestamp on the object, but other than that, no.


@liggitt would it be possible to add something like that? It's quite common e.g. for public clouds, that entity has something comparable (e.g. updated_on timestamp)

So I wonder why we don't have it here? Is there something blocking it? Or is it just because nobody had the usecase?

Our usecase is we want to process the data in parallel, without something comparable, we are forced to do everything in 1 process (which is quite bad when we have envs with 100k of containers, or other entities)

But even for single process, if we combine the data from API and watches, the data from watches can temporarily give us old data, that we want to throw away, but we can't because we don't know what is newer.

Member

if we combine the data from API and watches, the data from watches can temporarily give us old data, that we want to throw away, but we can't because we don't know what is newer.

combining data from two streams that differ in time seems problematic. Typically, controllers are driven by watch alone, and if updates to the API hit a conflict error, that means the resource was updated in the meantime, and the controller can simply wait for the next event to arrive via the watch stream

Member

we want to process the data in parallel, without something comparable, we are forced to do everything in 1 process

the workqueue used by most of the kubernetes controllers allows parallel processing of watch events... seeing how that is structured might be helpful (https://github.com/kubernetes/client-go/blob/master/examples/workqueue/main.go#L131-L133)
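For readers without Go, the key workqueue property referenced here (repeated events for one key are coalesced, so parallel workers never see the same key queued twice) can be sketched roughly like this. It is a simplification: the real client-go workqueue also tracks in-flight keys and re-queues a key that is touched while being processed.

```python
from collections import deque

class CoalescingQueue:
    """Rough sketch of client-go workqueue semantics: adding a key that is
    already queued is a no-op, so workers pulling keys in parallel each see
    a given key at most once per batch of events."""
    def __init__(self):
        self._queue = deque()
        self._dirty = set()

    def add(self, key):
        if key not in self._dirty:      # coalesce repeated events for a key
            self._dirty.add(key)
            self._queue.append(key)

    def get(self):
        key = self._queue.popleft()
        self._dirty.discard(key)
        return key

    def __len__(self):
        return len(self._queue)

q = CoalescingQueue()
for key in ["default/pod-a", "default/pod-b", "default/pod-a"]:
    q.add(key)                          # the second pod-a event is coalesced
```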


@liggitt Thank you, I'll try to check it out, although my Go skills are not great :-)

In detail what we want to do:

We fetch and save k8s data for various purposes (reporting, chargeback, etc.) into our PostgreSQL DB.

Given we have e.g. 100k pods and the app was down for a while (therefore the watch history is missing), we'd like to fetch it and start watches at the same time. So we get fresh changes immediately, while we are getting the full inventory in the background (saving it all in parallel into our DB)

Fetching 100k pods and associated objects and storing them takes quite some time, so if we need to do this sequentially, the user needs to wait to get fresh changes (in bigger envs it's like 0.5h or more of processing time). Also the sequential processing needs more orchestration and that makes it more complex.

There is no way of saving the data in parallel, if we can't compare the version of entities. So what would you advise here? I am not sure if kubernetes controllers are solving the same issue as described ^ ?


So API and watches are taking the data from different sources? So if we would put e.g. the timestamp to k8s db for pods, the watches would not see it? Only the API query? Or what is the reason we can't have a comparable attribute?

@smarterclayton
Contributor Author

I'll add some of these comments if @lavalamp promises to review afterwards so we can merge this :)

@lavalamp
Member

lavalamp commented Jan 12, 2018 via email

Contributor

@zacharysarah zacharysarah left a comment

This is so awesome. ✨ Your writing is clear throughout and represents some of the best in the repo. Minor fixes only--good work.

"items": [...]
}

2. Starting from resource version 10245, receive notifications of any creates, deletes, or updates as individual JSON objects
Contributor

Add a period at the end of the sentence.


For example:

1. List all of the pods in a given namespace
Contributor

Add a period at the end of the sentence.

}
...

A given Kubernetes server will only preserve a historical list of changes for a limited time. On older clusters using etcd2 a maximum of 1000 changes will be preserved and on newer clusters using etcd3 changes in the last 5 minutes are preserved by default. Clients must handle the case where the requested watch operations fails because the historical version of that resource is not available by recognizing the status code `410 Gone`, clearing their local cache, performing a list operation, and starting the watch from the `resourceVersion` returned by that new list operation. Most client libraries offer some form of standard tool for this logic (in Go this is called a `Reflector` and is located in the `k8s.io/client-go/cache` package).
Contributor

Replace this paragraph with:

A given Kubernetes server will only preserve a historical list of changes for a limited time. Older clusters using etcd2 preserve a maximum of 1000 changes. Newer clusters using etcd3 preserve changes in the last 5 minutes by default.  When the requested watch operations fail because the historical version of that resource is not available, clients must handle the case by recognizing the status code `410 Gone`, clearing their local cache, performing a list operation, and starting the watch from the `resourceVersion` returned by that new list operation. Most client libraries offer some form of standard tool for this logic. (In Go this is called a `Reflector` and is located in the `k8s.io/client-go/cache` package.)

So:

A given Kubernetes server will only preserve a historical list of changes for a limited time. Older clusters using etcd2 preserve a maximum of 1000 changes. Newer clusters using etcd3 preserve changes in the last 5 minutes by default. When the requested watch operations fail because the historical version of that resource is not available, clients must handle the case by recognizing the status code 410 Gone, clearing their local cache, performing a list operation, and starting the watch from the resourceVersion returned by that new list operation. Most client libraries offer some form of standard tool for this logic. (In Go this is called a Reflector and is located in the k8s.io/client-go/cache package.)
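The relist-on-410 logic described above, i.e. the core of what a Reflector does, can be sketched with a hypothetical client object (`list()` and `watch(rv)` are stand-ins for the real HTTP calls):

```python
class GoneError(Exception):
    """Stands in for an HTTP 410 Gone response."""

def sync(client):
    """One reflector-style pass: list, then apply watch events from the
    list's resourceVersion; on 410 Gone, drop the cache and relist."""
    while True:
        snapshot = client.list()
        cache = {o["metadata"]["name"]: o for o in snapshot["items"]}
        rv = snapshot["metadata"]["resourceVersion"]
        try:
            for event in client.watch(rv):
                obj = event["object"]
                if event["type"] == "DELETED":
                    cache.pop(obj["metadata"]["name"], None)
                else:  # ADDED / MODIFIED
                    cache[obj["metadata"]["name"]] = obj
                rv = obj["metadata"]["resourceVersion"]
            return cache  # watch ended; a real Reflector would re-watch from rv
        except GoneError:
            continue      # history expired: discard state and start over

class FakeClient:
    """Hypothetical API client whose first watch's history has expired."""
    def __init__(self):
        self.lists = 0
    def list(self):
        self.lists += 1
        return {"metadata": {"resourceVersion": str(10000 + self.lists)},
                "items": [{"metadata": {"name": "pod-a", "resourceVersion": "1"}}]}
    def watch(self, rv):
        if self.lists == 1:
            raise GoneError()  # as if rv predated the retained history
        yield {"type": "ADDED",
               "object": {"metadata": {"name": "pod-b", "resourceVersion": "2"}}}

client = FakeClient()
state = sync(client)  # relists once after the 410, then applies the event
```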


Like a watch operation, a `continue` token will expire after a short amount of time (by default 5 minutes) and return a `410 Gone` if more results cannot be returned. In this case, the client will need to start from the beginning or omit the `limit` parameter.
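Handling an expired `continue` token can be sketched the same way; the fake server below expires every token immediately, and `fake_list` and `GoneError` are stand-ins for the HTTP layer:

```python
ALL_PODS = [f"pod-{i}" for i in range(1253)]

class GoneError(Exception):
    """Stands in for 410 Gone returned for an expired continue token."""

def fake_list(limit=None, continue_token=None):
    """Hypothetical server whose continue tokens have already expired."""
    if continue_token is not None:
        raise GoneError()
    if limit is None:
        return {"metadata": {"continue": ""}, "items": ALL_PODS}
    return {"metadata": {"continue": "opaque-token"}, "items": ALL_PODS[:limit]}

def list_all(list_fn):
    """Chunked list that falls back to an unlimited list on token expiry."""
    items, token = [], None
    while True:
        try:
            page = list_fn(limit=500, continue_token=token)
        except GoneError:
            # Token expired mid-way: retry without a limit (the other
            # documented option is restarting the chunked list from scratch).
            return list_fn()["items"]
        items.extend(page["items"])
        token = page["metadata"]["continue"]
        if not token:
            return items

pods = list_all(fake_list)  # first chunk succeeds, second hits 410, falls back
```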

For example, if there are 1253 pods on the cluster, and the client wants to receive chunks of 500 pods at a time, they would request those chunks as follows:
Contributor

For example, if there are 1,253 pods on the cluster and the client wants to receive chunks of 500 pods at a time, they would request those chunks as follows:

So:

For example, if there are 1,253 pods on the cluster and the client wants to receive chunks of 500 pods at a time, they would request those chunks as follows:


For example, if there are 1253 pods on the cluster, and the client wants to receive chunks of 500 pods at a time, they would request those chunks as follows:

1. List all of the pods on a cluster, retrieving up to 500 pods each time
Contributor

Add a period at the end of the sentence.


## Alternate representations of resources

By default Kubernetes returns objects serialized to JSON with content type `application/json`. This is the default serialization format for the API. However, clients may request the more efficient Protobuf representation of these objects for better performance at scale. The Kubernetes API implements standard HTTP content type negotiation - passing an `Accept` header with a `GET` call will request that the server return objects in the provided content type, while sending an object in Protobuf to the server for a `PUT` or `POST` call takes the `Content-Type` header. The server will return a `Content-Type` header if the requested format is supported, or the `406 Not acceptable` error if an invalid content type is provided.
Contributor

Tiny fix:

By default Kubernetes returns objects serialized to JSON with content type `application/json`. This is the default serialization format for the API. However, clients may request the more efficient Protobuf representation of these objects for better performance at scale. The Kubernetes API implements standard HTTP content type negotiation: passing an `Accept` header with a `GET` call will request that the server return objects in the provided content type, while sending an object in Protobuf to the server for a `PUT` or `POST` call takes the `Content-Type` header. The server will return a `Content-Type` header if the requested format is supported, or the `406 Not acceptable` error if an invalid content type is provided.

So:

By default Kubernetes returns objects serialized to JSON with content type application/json. This is the default serialization format for the API. However, clients may request the more efficient Protobuf representation of these objects for better performance at scale. The Kubernetes API implements standard HTTP content type negotiation: passing an Accept header with a GET call will request that the server return objects in the provided content type, while sending an object in Protobuf to the server for a PUT or POST call takes the Content-Type header. The server will return a Content-Type header if the requested format is supported, or the 406 Not acceptable error if an invalid content type is provided.
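The negotiation rules in this paragraph amount to setting two headers; a minimal sketch (hypothetical helper name; any HTTP client can send these headers):

```python
PROTOBUF = "application/vnd.kubernetes.protobuf"

def negotiation_headers(accept=PROTOBUF, body_format=None):
    """Headers for content negotiation against the API: Accept names the
    response encoding you want; Content-Type describes the request body you
    are sending (only relevant for PUT/POST)."""
    headers = {"Accept": accept}
    if body_format is not None:
        headers["Content-Type"] = body_format
    return headers

# GET pods as Protobuf:
get_headers = negotiation_headers()
# POST a Protobuf-encoded pod but ask for the response in JSON:
post_headers = negotiation_headers(accept="application/json", body_format=PROTOBUF)
```

If the server cannot satisfy the requested `Accept` type, it responds with `406 Not acceptable`, which the client should treat as "fall back to JSON or fail".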


For example:

1. List all of the pods on a cluster in Protobuf format
Contributor

Add a period at the end of the sentence.

Content-Type: application/vnd.kubernetes.protobuf
... binary encoded PodList object

2. Create a pod by sending Protobuf encoded data to the server, but request a response in JSON
Contributor

Add a period at the end of the sentence.

}
```

Clients that receive a response in `application/vnd.kubernetes.protobuf` that does not match the expected prefix should reject the response since
Contributor

Clients that receive a response in `application/vnd.kubernetes.protobuf` that does not match the expected prefix should reject the response, as

So:

Clients that receive a response in application/vnd.kubernetes.protobuf that does not match the expected prefix should reject the response, as

```

Clients that receive a response in `application/vnd.kubernetes.protobuf` that does not match the expected prefix should reject the response since
future versions may need to alter the serialization format in an incompatible way, and will do so by changing the prefix.
Contributor

future versions may need to alter the serialization format in an incompatible way and will do so by changing the prefix.

So:

future versions may need to alter the serialization format in an incompatible way and will do so by changing the prefix.

@zacharysarah
Contributor

@smarterclayton 👋 Bump for review feedback. Alternately, if you un-check and re-check the box to allow edits from maintainers, I'm happy to make the changes for you.

@smarterclayton
Contributor Author

Allowed maintainers to edit.

Adding review feedback
@zacharysarah
Contributor

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jan 25, 2018
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: smarterclayton, zacharysarah

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS Files:

You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 25, 2018
@k8s-ci-robot k8s-ci-robot merged commit 282455f into kubernetes:master Jan 25, 2018
chenopis added a commit that referenced this pull request Jan 29, 2018
…henopis-user-journeys

* 'master' of https://github.com/kubernetes/website: (102 commits)
  Change deployment group (#7112)
  fix typos in extending doc (#7110)
  added installation via Powershell Gallery (#6086)
  Update StatefulSet API version to 1.9 for the Cassandra example (#7096)
  Modify the terms by document style (#7026)
  now that phase out in k8s/cluster/ directory, so remove relative docs (#6951)
  Update mysql-wordpress-persistent-volume.md (#7080)
  Update high-availability.md (#7086)
  Feature gates reference documentation (#6364)
  Add link to autoscaler FAQ (#7045)
  Replace regular characters with HTML entities. (#7038)
  Remove unnecessary manual node object creation (#6765)
  upper case restriction doesn't exist (#7003)
  Add an API concepts document and describe terminology and API chunking (#6540)
  Add kube-apiserver, kube-controller-manager, kube-scheduler and etcd to glossary. (#6600)
  Update what-is-kubernetes.md (#6971)
  Fixed the interacting with cluster section for the ubuntu installation (#6905)
  Update weave-network-policy.md (#6960)
  Added AWS eks (#6568)
  Update eviction strategy to include priority (#6949)
  ...

# Conflicts:
#	_data/setup.yml
#	_data/tutorials.yml
#	docs/imported/release/notes.md
bitfield pushed a commit to bitfield/website that referenced this pull request Feb 19, 2018

* Add an API concepts document and describe terminology and API chunking

Create a new API reference page that covers some high level topics
- terminology, paths, verbs, watching resources, chunking, and content type negotiation.

* Update api-concepts.md

Adding review feedback