I tried to connect NeMo Guardrails to Microsoft Azure as explained in this issue.
My config.yml looks like this:
```yaml
type: main
engine: azure
model: gpt-4
parameters:
  azure_endpoint: https://*******/
  api_version: 2023-07-01-preview
  deployment_name: gpt-4-0613-preview
  api_key: ************************
```
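For comparison, and assuming the snippet above is the whole file: NeMo Guardrails normally expects model definitions under a top-level `models:` list in config.yml, so a flat layout like the one above may leave the main LLM uninitialized. A sketch of the nested form with the same settings:

```yaml
models:
  - type: main
    engine: azure
    model: gpt-4
    parameters:
      azure_endpoint: https://*******/
      api_version: 2023-07-01-preview
      deployment_name: gpt-4-0613-preview
      api_key: ************************
```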
The Python code I use looks like this:
```python
import os
from nemoguardrails import LLMRails, RailsConfig

os.environ["OPENAI_API_KEY"] = "****"

def read_file(path):
    with open(path, "r") as f:
        return f.read()

colang_content = read_file("T2000/config/topics.co")
yaml_content = read_file("T2000/config/config.yml")

# initialize rails config
config = RailsConfig.from_content(
    yaml_content=yaml_content,
    colang_content=colang_content
)

# create rails
rails = LLMRails(config)

options = {
    "output_vars": ["triggered_input_rail"],
    "log": {
        "activated_rails": True
    }
}

def generate(prompt):
    return rails.generate(prompt, options=options)

print("\033c")  # clear the terminal
while True:
    prompt = input("Input: ")
    print(generate(prompt))
    print("\n")
```
Can someone help me? The script keeps running, but the following error output is printed to the console:
```
Input: hallo
Parameter temperature does not exist for NoneType
Error while execution generate_user_intent: 'NoneType' object has no attribute 'agenerate_prompt'
response="I'm sorry, an internal error has occurred."
llm_output=None
output_data={'triggered_input_rail': None}
log=GenerationLog(activated_rails=[ActivatedRail(type='dialog', name='generate user intent', decisions=['execute generate_user_intent'], executed_actions=[ExecutedAction(action_name='generate_user_intent', action_params={}, return_value=None, llm_calls=[], started_at=1718713098.935654, finished_at=1718713098.947871, duration=0.012217044830322266)], stop=False, additional_info=None, started_at=1718713098.9356449, finished_at=1718713098.947905, duration=0.012260198593139648)], stats=GenerationStats(input_rails_duration=None, dialog_rails_duration=0.012260198593139648, generation_rails_duration=None, output_rails_duration=None, total_duration=0.014315128326416016, llm_calls_duration=0, llm_calls_count=0, llm_calls_total_prompt_tokens=0, llm_calls_total_completion_tokens=0, llm_calls_total_tokens=0), llm_calls=None, internal_events=None, colang_history=None)
state=None
```
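The `'NoneType' object has no attribute 'agenerate_prompt'` line suggests the main LLM was never created from the config. A hypothetical pre-flight check (the helper name is mine, not part of the NeMo Guardrails API) that parses the YAML and verifies a `models:` list with a `main` entry exists before constructing `LLMRails`:

```python
# Hypothetical sanity check, not part of NeMo Guardrails: verify the YAML
# declares a `models:` list containing an entry with type "main". A flat or
# missing model definition leaves LLMRails without an LLM, which can surface
# as NoneType attribute errors at generation time.
import yaml  # PyYAML


def has_main_model(yaml_content: str) -> bool:
    """Return True if the config declares a `models:` entry of type 'main'."""
    data = yaml.safe_load(yaml_content) or {}
    models = data.get("models", [])
    return isinstance(models, list) and any(
        isinstance(m, dict) and m.get("type") == "main" for m in models
    )


flat = "type: main\nengine: azure\nmodel: gpt-4\n"
nested = "models:\n  - type: main\n    engine: azure\n    model: gpt-4\n"
print(has_main_model(flat))    # False: no top-level `models:` list
print(has_main_model(nested))  # True
```

Calling such a check on `yaml_content` before `LLMRails(config)` would at least distinguish a config-shape problem from an Azure connectivity problem.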
Hi @DrmedAllel, are you still facing this issue? This might help in resolving it.