test: create initial Integration tests #4895
Conversation
Various clarifications and change suggestions requested, as well as a few suggestions for now and for the future.
As general comments from offline discussion:
- Relying on an existing setup from the environment is fine for a first pass to show the functionality, but highly brittle for actually running integration tests.
- We should add mechanisms to isolate test cases so there aren't accidental interactions between them, e.g. leaving a model loaded.
- We should be clear about whether we expect the test to run in k8s or local Docker/Compose rather than trying to infer this too.
- Using hard-coded endpoints is also awkward for running different test cases concurrently.
- It'd be nice in the future to separate the definition of test scenarios from the runner code, if possible. This would make it easier for non-Go devs to write and understand test cases.
flags.BoolP(flagShowRequest, "r", false, "show request")
flags.BoolP(flagShowResponse, "o", true, "show response")
flags.Int64P(flagTimeout, "t", flagTimeoutDefault, "timeout seconds")
flags.BoolP(flagVerbose, "v", false, "verbose output")
flags.String(flagSchedulerHost, env.GetString(envScheduler, defaultSchedulerHost), helpSchedulerHost)
flags.String(flagAuthority, "", helpAuthority)
flags.StringP(flagWaitCondition, "w", "", "model wait condition")
💭 Just noticed this - experiment_status has wait as a Boolean, whereas here the user is expected to know what condition to apply. For pipelines I can understand that there are two underlying conditions (pipeline infra + models), but here it's slightly awkward, no?
Not directly related to this PR, but thought I'd raise anyway
Yes could change or consolidate in a future PR
Leaving unresolved to track
)

func TestSingleModelLoadInferUnload(t *testing.T) {
	g := NewGomegaWithT(t)
💭 Not a change for this PR, but I'd have a preference for using testify for assertions instead of Gomega as it's lighter, simpler, and less verbose. One to discuss offline perhaps
sapi, err := resources.NewSeldonBackendAPI()
g.Expect(err).To(BeNil())
for _, test := range tests {
	t.Run(test.name, func(t *testing.T) {
💭 This could be more nicely expressed as multiple subtests with t.Run() to capture the load, infer, and unload parts. That way, if something fails then it's immediately clear which bit failed
Maybe future PR
Sure - it's an easy and small enough change to leave. Leaving comment unresolved as a reminder
tests/integration/pkg/tests/testdata/tensorflow-tfsimple-request.json
Review fixes and also added build target for integration so just
One minor change left, but approving as this does not affect functionality if not done -- just confusing/surprising to see
Initial integration tests PoC
Notes:
- googleapis/gnostic comes in via a 3rd-party dependency while we use the more recent google/gnostic. As both define the same protos, this fails by default due to a proto clash; this can be overridden, which is done for now.
- The ability to create Seldon via Compose as part of the tests is not done; the tests assume it is already installed by some previous method.