
feat: Added support for account level governance of AI Monitoring #2326

Merged: 6 commits merged on Jul 9, 2024

Conversation

jsumners-nr (Contributor):

This PR resolves #2325.

@jsumners-nr jsumners-nr force-pushed the issue-2325 branch 5 times, most recently from 15e67a7 to baa3798 on July 3, 2024 16:19
@jsumners-nr jsumners-nr marked this pull request as ready for review July 8, 2024 12:19
@bizob2828 bizob2828 self-assigned this Jul 8, 2024
bizob2828 (Member) left a comment:

I tested with OpenAI completions, but it doesn't work with embeddings. I also have a few comments about the langchain instrumentation and some of the tests.

lib/instrumentation/openai.js
Suggested change:

   * @param {string} params.type type of llm event (i.e. LlmChatCompletionMessage, LlmTool, etc.)
   * @param {object} params.msg the llm event getting enqueued
   * @param {string} params.pkgVersion version of langchain library instrumented
   */
  -common.recordEvent = function recordEvent({ agent, type, msg, pkgVersion }) {
  +common.recordEvent = function recordEvent({ agent, shim, type, msg, pkgVersion }) {
  +  if (common.shouldSkipInstrumentation(agent.config) === true) {
Member:

This behavior differs from the other libraries: when ai_monitoring.enabled is false, it still creates the events in memory but doesn't do anything with them.
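The guard being discussed can be sketched as follows. This is a minimal illustration only: `shouldSkipInstrumentation`, the config shape, and the `agent.events` queue are simplified stand-ins for the agent's real internals, not its actual API.

```javascript
// Sketch of the recordEvent guard (illustrative names; the real agent's
// config and event-aggregator APIs differ).
const common = {}

// True when AI Monitoring is disabled, so LLM event recording should be skipped.
common.shouldSkipInstrumentation = function shouldSkipInstrumentation(config) {
  return config?.ai_monitoring?.enabled !== true
}

common.recordEvent = function recordEvent({ agent, type, msg }) {
  // Bail out before touching the event queue when the feature is off,
  // so no event is created in memory at all.
  if (common.shouldSkipInstrumentation(agent.config) === true) {
    return false
  }
  agent.events.push({ type, msg })
  return true
}
```

With a guard placed this early, a disabled account pays no allocation cost for LLM events, which is the difference from the "create in memory but drop" behavior noted above.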

Member:

There are cases where it is creating LLM events like LlmErrorMessage, and it's also doing a lot of parsing. I would change the langchain instrumentation to check this flag before it attempts to create any LLM events.
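The reviewer's suggestion amounts to checking the flag first, so that neither response parsing nor LlmErrorMessage construction runs when AI Monitoring is off. A sketch, with all names being illustrative stand-ins rather than the agent's real API:

```javascript
// Illustrative sketch: skip all LLM event work, including error events
// and any parsing, when AI Monitoring is disabled.
function handleLlmError(agent, error, recordedEvents) {
  if (agent.config?.ai_monitoring?.enabled !== true) {
    return null // no parsing, no LlmErrorMessage allocation
  }
  // Parsing work only runs when the feature is enabled.
  const llmError = { type: 'LlmErrorMessage', message: String(error?.message ?? error) }
  recordedEvents.push(llmError)
  return llmError
}
```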

test/unit/config/config-server-side.test.js
bizob2828 (Member) left a comment:

Comments on the location of the config check: still create spans and end them, but refrain from creating LLM events.

lib/instrumentation/langchain/runnable.js
lib/instrumentation/langchain/tools.js
lib/instrumentation/langchain/vectorstore.js
lib/instrumentation/openai.js
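The placement the review asks for can be sketched like this: the span (segment) is always created and ended so timing data survives, while only LLM event creation is guarded by the flag. All names here are illustrative, not the agent's real API.

```javascript
// Sketch: span lifecycle is unconditional; LLM event creation is gated.
function instrumentedLlmCall(agent, segments, doWork) {
  const segment = { name: 'Llm/completion', ended: false }
  segments.push(segment) // span always created
  try {
    return doWork()
  } finally {
    segment.ended = true // span always ended, even on error
    if (agent.config?.ai_monitoring?.enabled === true) {
      // Only build and enqueue LLM events when the feature is on.
      agent.events.push({ type: 'LlmChatCompletionSummary' })
    }
  }
}
```

Putting the check inside the span boundary, rather than around it, keeps regular tracing intact for accounts that have disabled AI Monitoring.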
bizob2828 previously approved these changes Jul 9, 2024
jsumners-nr (Contributor, Author):

I'm going to wait to merge this until the team is ready to ship the feature.

@jsumners-nr jsumners-nr merged commit 7069335 into newrelic:main Jul 9, 2024
32 checks passed
@jsumners-nr jsumners-nr deleted the issue-2325 branch July 9, 2024 19:30
Successfully merging this pull request may close these issues.

Add support for account disablement of AI Monitoring
2 participants