
Implement use argument for LLM for decorators #1234

Merged · 13 commits into main · May 1, 2023

Conversation

@cw75 (Contributor) commented on Apr 24, 2023

Describe your changes and why you are making these changes

This PR implements the use argument, which allows the user to declare the intent to use our LLM library. Under the hood, we include a use_llm flag in the resource struct and use it to select Docker images that have the LLM Python library dependencies and CUDA drivers pre-installed.
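For context, a minimal sketch of what this looks like from the SDK, assuming the llm = "llm" constant added in the diff below is the value passed to use (hypothetical usage, not the final merged surface, which moved to llm_op later in the review):

```python
import aqueduct
from aqueduct import op

# Hypothetical sketch: passing use=aqueduct.llm marks this operator as needing
# the LLM library, so the backend selects a Docker image with the LLM Python
# dependencies and CUDA drivers pre-installed.
@op(use=aqueduct.llm)
def summarize(reviews):
    ...
```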

Related issue number (if any)

Loom demo (if any)

Checklist before requesting a review

  • I have created a descriptive PR title. The PR title should complete the sentence "This PR...".
  • I have performed a self-review of my code.
  • I have included a small demo of the changes. For the UI, this would be a screenshot or a Loom video.
  • If this is a new feature, I have added unit tests and integration tests.
  • I have run the integration tests locally and they are passing.
  • I have run the linter script locally (See python3 scripts/run_linters.py -h for usage).
  • All features on the UI continue to work correctly.
  • Added one of the following CI labels:
    • run_integration_test: Runs integration tests
    • skip_integration_test: Skips integration tests (Should be used when changes are ONLY documentation/UI)

@hsubbaraj-spiral (Contributor) left a comment


A few small nits, but otherwise looks great!

@@ -7,6 +7,8 @@
from aqueduct.flow import Flow
from aqueduct.schedule import DayOfMonth, DayOfWeek, Hour, Minute, daily, hourly, monthly, weekly

llm = "llm"
@hsubbaraj-spiral (Contributor) commented:
Can we declare this as a constant?

@cw75 (Author) replied on May 1, 2023:

removed this as it's not needed anymore.

@@ -47,6 +47,7 @@ type ResourceConfig struct {
MemoryMB *int `json:"memory_mb,omitempty"`
GPUResourceName *string `json:"gpu_resource_name,omitempty"`
CudaVersion *CudaVersionNumber `json:"cuda_version,omitempty"`
UseLlm *bool `json:"use_llm,omitempty"`
@hsubbaraj-spiral (Contributor) commented:
Change to UseLLM just to be consistent with our naming scheme.
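For illustration, here is what a serialized ResourceConfig might look like with the new field set, going by the json tags in the struct above (field values are made up):

```python
import json

# Illustrative only: pointer fields left unset are dropped by omitempty,
# so only the populated fields appear in the serialized config.
resource_config = {"memory_mb": 2048, "use_llm": True}
print(json.dumps(resource_config))
# {"memory_mb": 2048, "use_llm": true}
```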

@cw75 added the run_integration_test (Triggers integration tests) label on May 1, 2023
@cw75 requested a review from hsubbaraj-spiral on May 1, 2023 at 08:24
@cw75 (Author) commented on May 1, 2023

@hsubbaraj-spiral updated the implementation to add the llm_op API shown in this demo. If you can take another look at the PR (especially llm_wrapper.py and decorator.py) that'd be great.
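A rough sketch of the llm_op call pattern referenced above; the model name, engine value, and the way the prompt is passed are illustrative assumptions based on the demo, not necessarily the merged signature:

```python
from aqueduct import Client, llm_op

client = Client()

# Illustrative sketch: llm_op wraps an LLM call as an Aqueduct operator.
vicuna = llm_op(name="vicuna_7b", engine="k8s-us-east-2")

# reviews_table stands in for any upstream artifact (e.g. the output of a SQL
# query); the "{text}" placeholder is discussed in the thread below.
summaries = vicuna(reviews_table, {"prompt": "Summarize this review: {text}"})
```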

@hsubbaraj-spiral (Contributor) left a comment
Yep, the new API makes sense to me, just a quick question about the prompt.

if not isinstance(prompt, str):
    raise ValueError("The 'prompt' parameter must be a string.")

if "{text}" not in prompt:
@hsubbaraj-spiral (Contributor) commented:
Can you explain the significance of "{text}"? Is this a keyword in LLM prompts?

@cw75 (Author) replied:

"If the prompt contains {text}, we will replace {text} with the input string(s) before sending to the LLM. If the prompt does not contain {text}, we will prepend the prompt to the input string(s) before sending to the LLM."

@cw75 merged commit cc66b07 into main on May 1, 2023
@cw75 deleted the eng-2818-implement-the-use-api-end-to-end branch on May 1, 2023 at 17:50
agiron123 pushed a commit that referenced this pull request on May 9, 2023