
feat: Added new API method withLlmCustomAttributes to run a function in a LLM context #2437

Merged 39 commits on Aug 22, 2024

Conversation

@MikeVaz (Contributor) commented Aug 2, 2024

Hi team. Here is our AIM team's humble effort to implement adding LLM custom attributes.

Description

During GA for the AIM product, transaction-level custom attributes prefixed with llm. were set up so that they would be added to all LLM events generated in that transaction from that point onward. However, there has been demand for a more granular version of this.

This would be intended to support use cases such as A/B testing individual parts of flows (for example, the first call out to the LLM in a transaction might go to flow A, and the resulting second call in the transaction might go down flow B based on output from a tool).

Solution highlight

  • We went with option 4 from the DACI, implementing a withLlmCustomAttributes() method.
  • In addition, a setLlmCustomAttributesCallback could be provided, but it has been removed from this PR for now.
  • We originally tried to use the existing context manager on the agent instance, this.agent._contextManager.runInContext, to run an LLM context.
  • However, the issue described below made us move to an isolated context manager on the transaction: transaction._llmContextManager = new AsyncLocalStorage().
  • Provided attributes are normalized and later extracted from the context to be added to the custom event.
  • A new method withLlmCustomAttributes was added to the API, and the recordEvent method was updated for the openai, bedrock, and langchain instrumentations.

How to Test

  • We have tested it in our sample application; we have a dedicated branch.
  • Changes are covered by unit and integration tests as well.

Related Issues

  • One issue is revealed by a failing Langchain versioned test; we would like to consult with the team on that. I suspect something in the langchain flow overrides the context via this.agent._contextManager.setContext({....}). A solution could be to go with a dedicated isolated context manager: transaction._llmContextManager = new AsyncLocalStorage().

Problem

Consider a flow like the one below:

└  Start Transaction
     └ withLlmCustomAttributes {llm.attribute: 1}
         │- Open AI call #1
         │   └ recordEvent with llm context {llm.attribute:1}
         └ switch context via agent._contextManager.runInContext({custom: 1})
                └ Open AI call #2
                     └ recordEvent when llm context lost     

Solution 1 (currently implemented)

We created an isolated context manager so it would never collide with the agent's one.

Solution 2

Alternatively, we can refactor the agent's context manager to store the parent context in a WeakMap:

    setParentContext(child, parent) {
      this._parentContextMap = this._parentContextMap || new WeakMap()
      this._parentContextMap.set(child, parent)
    }

    runInContext(context, callback, cbThis, args) {
      const toInvoke = cbThis ? callback.bind(cbThis) : callback

      // Something like
      this.setParentContext(context, this.getContext())

      if (args) {
        return this._asyncLocalStorage.run(context, toInvoke, ...args)
      }

      return this._asyncLocalStorage.run(context, toInvoke)
    }

Now we can traverse the chain of related contexts to aggregate all LLM context attributes attached along the way.
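That traversal could look roughly like the sketch below. The `llmAttributes` field on each context object is a hypothetical shape chosen for illustration; only the WeakMap parent-link idea comes from the snippet above:

```javascript
'use strict'
// Walk the child -> parent chain recorded by setParentContext() and merge
// every llm.* attribute found along the way (nearest context wins on conflicts).
const parentContextMap = new WeakMap()

function setParentContext(child, parent) {
  parentContextMap.set(child, parent)
}

function collectLlmAttributes(context) {
  const merged = {}
  for (let current = context; current; current = parentContextMap.get(current)) {
    for (const [key, value] of Object.entries(current.llmAttributes || {})) {
      if (!(key in merged)) merged[key] = value // closer context takes precedence
    }
  }
  return merged
}

const root = { llmAttributes: { 'llm.flow': 'A' } }
const child = { llmAttributes: { 'llm.attribute': 1 } }
setParentContext(child, root)
console.log(collectLlmAttributes(child)) // { 'llm.attribute': 1, 'llm.flow': 'A' }
```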

@CLAassistant commented Aug 2, 2024

CLA assistant check
All committers have signed the CLA.


@bizob2828 bizob2828 changed the title feat: withLlmCustomAttributes feat: Added new API method withLlmCustomAttributes Aug 2, 2024
Review thread on api.js:
@@ -1902,4 +1902,47 @@ API.prototype.ignoreApdex = function ignoreApdex() {
transaction.ignoreApdex = true
}

/**
* Runs a function synchronously within a provided LLM custom attributes context and returns its return value.
Member:

I don't think it has to be synchronously. The context manager should properly bind all functions run within the context

@MikeVaz (Contributor, Author):

I went with how they describe it in the nodejs docs.

Runs a function synchronously within a context and returns its return value. The store is not accessible outside of the callback function. The store is accessible to any asynchronous operations created within the callback.

Member:

The store is accessible to any asynchronous operations created within the callback.

@MikeVaz (Contributor, Author) commented Aug 5, 2024

@bizob2828 @jsumners-nr I have updated my PR to address most of the feedback. I also have an idea why langchain was failing: it would record events, but only the virtual_llm = true and is_response = true events had the same context; the rest would somehow lose it. Do you have ideas?

@MikeVaz MikeVaz marked this pull request as ready for review August 5, 2024 22:05
@jsumners-nr (Contributor) left a comment:
With the latest changes, I am not seeing any test failures. What does a test look like that generates the issue you are seeing?

@MikeVaz (Contributor, Author) commented Aug 6, 2024

With the latest changes, I am not seeing any test failures. What does a test look like that generates the issue you are seeing?

It is not failing since I am filtering events:

    const responses = events.filter((event) => {
      const [, chainEvent] = event
      return chainEvent.virtual_llm === true && chainEvent.is_response === true
    })

Removing the filter would make the test fail.

@MikeVaz MikeVaz requested a review from bizob2828 August 20, 2024 17:48
codecov bot commented Aug 20, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 97.15%. Comparing base (0c2ee2f) to head (21e4b5f).
Report is 9 commits behind head on main.

@@            Coverage Diff             @@
##             main    #2437      +/-   ##
==========================================
+ Coverage   96.20%   97.15%   +0.94%     
==========================================
  Files         288      289       +1     
  Lines       45305    45415     +110     
==========================================
+ Hits        43586    44123     +537     
+ Misses       1719     1292     -427     
Flag Coverage Δ
integration-tests-cjs-18.x 73.80% <39.17%> (?)
integration-tests-cjs-20.x 73.82% <39.17%> (?)
integration-tests-cjs-22.x 73.85% <39.17%> (?)
integration-tests-esm-18.x 49.15% <39.17%> (?)
integration-tests-esm-20.x 49.15% <39.17%> (?)
integration-tests-esm-22.x 49.18% <39.17%> (?)
unit-tests-18.x 88.67% <82.69%> (+<0.01%) ⬆️
unit-tests-20.x 88.67% <82.69%> (+<0.01%) ⬆️
unit-tests-22.x 88.68% <82.69%> (+<0.01%) ⬆️
versioned-tests-18.x 78.84% <88.46%> (-0.06%) ⬇️
versioned-tests-20.x 78.88% <88.46%> (-0.03%) ⬇️
versioned-tests-22.x 78.88% <88.46%> (-0.03%) ⬇️

Flags with carried forward coverage won't be shown.


@bizob2828 (Member) left a comment:
feedback from the last round

MikeVaz and others added 5 commits August 21, 2024 10:02
Co-authored-by: Bob Evans <robert.evans25@gmail.com>
jsumners-nr previously approved these changes Aug 21, 2024
@bizob2828 bizob2828 changed the title feat: Added new API method withLlmCustomAttributes feat: Added new API method withLlmCustomAttributes to run a function in a LLM context Aug 22, 2024
@bizob2828 bizob2828 merged commit 57e6be9 into newrelic:main Aug 22, 2024
28 checks passed
@github-actions github-actions bot mentioned this pull request Aug 27, 2024