
feat: Added AIM support for Meta Llama3 models in AWS Bedrock #2306

Merged
merged 3 commits into from
Jun 27, 2024

Conversation

amychisholm03
Contributor

  • Will now check whether the model name starts with meta-llama, rather than meta-llama2
  • Modified the unit and versioned:major tests to cover both llama2 and llama3, or just llama where applicable
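The relaxed prefix check described above can be sketched as follows. This is a hypothetical illustration only: the class shape and property names are assumptions, and the Bedrock model ids shown come from the diff later in this PR (the actual logic lives in `lib/llm-events/aws-bedrock/bedrock-command.js`).

```javascript
// Hypothetical sketch of the relaxed model-name check; names are illustrative.
class BedrockCommand {
  constructor(input) {
    this.modelId = input.modelId // e.g. 'meta.llama3-8b-instruct-v1:0'
  }

  get isLlama() {
    // Previously this matched only the llama2 prefix; matching on
    // 'meta.llama' covers both Llama 2 and Llama 3 model ids.
    return this.modelId.startsWith('meta.llama')
  }
}

const cmd = new BedrockCommand({ modelId: 'meta.llama3-8b-instruct-v1:0' })
console.log(cmd.isLlama) // true
```

Because the match is a plain prefix test, any future `meta.llama*` model id would be picked up without further changes.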

@amychisholm03 amychisholm03 self-assigned this Jun 26, 2024
@amychisholm03 amychisholm03 linked an issue Jun 26, 2024 that may be closed by this pull request
jsumners-nr
jsumners-nr previously approved these changes Jun 27, 2024
Contributor

@jsumners-nr jsumners-nr left a comment


Looks great. I just have a question for clarification.

lib/llm-events/aws-bedrock/bedrock-command.js
@bizob2828 bizob2828 changed the title chore: added llama3 support for AIM feat: Added AIM support for Meta Llama3 models in AWS Bedrock Jun 27, 2024
Member

@bizob2828 bizob2828 left a comment


A few suggestions to reduce the amount of test code we'd have to maintain. The llama2/llama3 mocks are identical and probably unnecessary; you could consolidate them and update the versioned tests. I also plan to reach out to discuss some git commit tips, as this branch has a lot of superfluous commits in it.

@@ -119,6 +119,12 @@ function handler(req, res) {
break
}

case 'meta.llama3-8b-instruct-v1:0':
case 'meta.llama3-70b-instruct-v1:0': {
response = responses.llama3.get(payload.prompt)
Member

Since the llama3 responses are identical, you don't really need new mocks; you could just return the llama2 data.
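The consolidation suggested here could look something like the sketch below. This is a minimal, hypothetical version of the mock server's handler: the `responses` shape, prompt key, and `getResponse` helper are assumptions for illustration, but the model ids match the diff above, and the llama3 cases fall through to the llama2 mock data instead of a duplicate `responses.llama3` map.

```javascript
// Hypothetical consolidation sketch: llama3 model ids reuse the llama2
// mock responses, since the payloads are identical.
const responses = {
  llama2: new Map([['Hello', { generation: 'Hi there!' }]])
}

function getResponse(modelId, payload) {
  switch (modelId) {
    case 'meta.llama2-13b-chat-v1':
    case 'meta.llama3-8b-instruct-v1:0':
    case 'meta.llama3-70b-instruct-v1:0':
      // llama3 requests fall through to the shared llama2 mock data
      return responses.llama2.get(payload.prompt)
    default:
      return undefined
  }
}

console.log(getResponse('meta.llama3-8b-instruct-v1:0', { prompt: 'Hello' }))
```

Collapsing the cases this way means the versioned tests exercise both model families against a single source of mock data, which is the maintenance win the review comment is after.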

test/versioned/aws-sdk-v3/bedrock-chat-completions.tap.js
Contributor

@jsumners-nr jsumners-nr left a comment


It does make sense to reuse the existing mocks. 👍

Contributor

@jsumners-nr jsumners-nr left a comment


🎉

Member

@bizob2828 bizob2828 left a comment


🚢 🇮🇹

Development

Successfully merging this pull request may close these issues.

Add Llama3 model support for AIM
3 participants