
Langchain Quickstart notebook does not work #1179

Closed
mkhammoud opened this issue Jun 6, 2024 · 3 comments · Fixed by #1187
Labels
bug Something isn't working

Comments

@mkhammoud

mkhammoud commented Jun 6, 2024

Bug Description
Whenever I try to run the LangChain quickstart, I get the following error: trulens_eval.feedback.feedback.InvalidSelector: Selector __record__.app.first.steps__.context.first.get_relevant_documents.rets does not exist in source data.

The RAG itself works fine: the UI comes up, I can see the retrieved context, and the answer relevance metric is evaluated, but nothing appears for context relevance or groundedness.

To Reproduce
Run the quickstart code without any changes; the error still occurs.

Expected behavior
Groundedness, answer relevance, and context relevance results should all appear.

Relevant Logs/Tracebacks
Exception in thread Thread-6 (_future_target_wrapper):
Traceback (most recent call last):
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\site-packages\trulens_eval\feedback\feedback.py", line 1071, in _extract_selection
arg_vals[k] = list(q.get(source_data))
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\site-packages\trulens_eval\utils\serial.py", line 1034, in get
for start_selection in start_items:
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\site-packages\trulens_eval\utils\serial.py", line 1035, in get
for last_selection in last_step.get(start_selection):
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\site-packages\trulens_eval\utils\serial.py", line 446, in get
raise KeyError(
KeyError: 'Key not in dictionary: get_relevant_documents'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1045, in _bootstrap_inner
self.run()
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\site-packages\ipykernel\ipkernel.py", line 761, in run_closure
_threading_Thread_run(self)
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 982, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\site-packages\trulens_eval\utils\python.py", line 475, in _future_target_wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\site-packages\trulens_eval\app.py", line 614, in _manage_pending_feedback_results
record.wait_for_feedback_results()
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\site-packages\trulens_eval\schema\record.py", line 186, in wait_for_feedback_results
feedback_result = future_result.result()
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\site-packages\trulens_eval\utils\python.py", line 475, in _future_target_wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\site-packages\trulens_eval\utils\threading.py", line 174, in _run_with_timeout
raise e
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\site-packages\trulens_eval\utils\threading.py", line 159, in _run_with_timeout
res: T = fut.result(timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\site-packages\trulens_eval\tru.py", line 435, in run_and_call_callback
temp = ffunc.run(app=app, record=record)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\site-packages\trulens_eval\feedback\feedback.py", line 839, in run
raise e
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\site-packages\trulens_eval\feedback\feedback.py", line 825, in run
input_combinations = list(
^^^^^
File "C:\Users\mohkh\AppData\Local\Programs\Python\Python311\Lib\site-packages\trulens_eval\feedback\feedback.py", line 1073, in _extract_selection
raise InvalidSelector(
trulens_eval.feedback.feedback.InvalidSelector: Selector __record__.app.first.steps__.context.first.get_relevant_documents.rets does not exist in source data.

Environment:

  • OS: Windows
  • Python version: 3.11.6
  • TruLens version: 0.30.1
  • LangChain version: 0.2.2
@mkhammoud mkhammoud added the bug Something isn't working label Jun 6, 2024

dosubot bot commented Jun 6, 2024

Hey @mkhammoud! 👋 I'm here to help you with any bugs, questions, or contributions you have in mind. Let's squash those bugs together!

To resolve the issue with the Langchain Quickstart notebook where you receive the error trulens_eval.feedback.feedback.InvalidSelector: Selector __record__.app.first.steps__.context.first.get_relevant_documents.rets does not exist in source data and do not get results for context relevance or groundedness, you need to ensure that the context selector is correctly specified. The error indicates that the specified selector does not match the structure of the recorded data.

Here is the corrected code snippet for initializing the feedback functions:

from trulens_eval.feedback.provider import OpenAI
from trulens_eval import Feedback
import numpy as np

# Initialize provider class
provider = OpenAI()

# Select context to be used in feedback. The location of context is app specific.
from trulens_eval.app import App
context = App.select_context(rag_chain)

# Define a groundedness feedback function
f_groundedness = (
    Feedback(provider.groundedness_measure_with_cot_reasons)
    .on(context.collect()) # collect context chunks into a list
    .on_output()
)

# Question/answer relevance between overall question and answer.
f_answer_relevance = (
    Feedback(provider.relevance)
    .on_input_output()
)
# Question/statement relevance between question and each context chunk.
f_context_relevance = (
    Feedback(provider.context_relevance_with_cot_reasons)
    .on_input()
    .on(context)
    .aggregate(np.mean)
)

Ensure that the context selector is correctly defined and matches the structure of your rag_chain. If the structure of the chain or the context has changed, you may need to adjust the selector accordingly.
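To see why a mismatched selector fails the way it does in the traceback above, it helps to view the selector as a dotted path walked step by step through the nested record data. The following is a minimal, stdlib-only sketch of that lookup, not TruLens internals; the `resolve` helper, the toy `record` dict, and the `invoke` key are illustrative assumptions standing in for whatever the instrumented chain actually recorded:

```python
def resolve(path: str, data: dict):
    """Walk a dotted selector path through nested dicts,
    raising KeyError at the first step that is absent,
    much like the 'Key not in dictionary' error above."""
    obj = data
    for step in path.split("."):
        if step not in obj:
            raise KeyError(f"Key not in dictionary: {step!r}")
        obj = obj[step]
    return obj

# Toy record: the retriever call was captured under 'invoke',
# so a selector still naming 'get_relevant_documents' cannot resolve.
record = {"app": {"context": {"invoke": {"rets": ["chunk-1", "chunk-2"]}}}}

print(resolve("app.context.invoke.rets", record))  # ['chunk-1', 'chunk-2']
# resolve("app.context.get_relevant_documents.rets", record)  # raises KeyError
```

This is why regenerating the selector with App.select_context (rather than hard-coding a method name) is the robust fix: the selector is derived from the calls actually present in the instrumented app.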

Additionally, you can use the check_selectors method to validate that the selectors are correct for the given app and record. This method will help you identify any issues with the selectors before running the feedback functions:

from trulens_eval.app import App
from trulens_eval.schema import record as mod_record_schema

# Assuming you have an instance of your app and a record
app_instance = ...  # Your app instance
record_instance = mod_record_schema.Record(...)  # Your record instance

# Check selectors
f_groundedness.check_selectors(app=app_instance, record=record_instance)
f_answer_relevance.check_selectors(app=app_instance, record=record_instance)
f_context_relevance.check_selectors(app=app_instance, record=record_instance)

This will help ensure that the selectors are valid and exist in the source data, preventing the InvalidSelector error.

Refer to the Select class in trulens_eval/schema/feedback.py for more details on how to create and use selectors. The Select class provides utilities for creating selectors using Lens and aliases/shortcuts, which can help you correctly specify the paths to the required data in your records [1][2][3].


@sfc-gh-jreini sfc-gh-jreini linked a pull request Jun 9, 2024 that will close this issue
@sfc-gh-jreini
Contributor

Solved by #1187 (merged to main). You can install the fix from GitHub:

pip uninstall trulens_eval -y # to remove existing PyPI version
pip install git+https://github.com/truera/trulens#subdirectory=trulens_eval

@sfc-gh-jreini
Contributor

Let me know if you have any issues with this!
