
Hydra override error when running evaluations after non-editable installation #12

Open
ziyic7 opened this issue Oct 18, 2023 · 14 comments
Labels: bug Something isn't working

ziyic7 commented Oct 18, 2023

Describe the bug
If I install DecodingTrust with the second method under the '(Conda +) Pip' section and then run an evaluation using the provided script, I get a Hydra override error.

To Reproduce
Steps to reproduce the behavior:

  1. Install DecodingTrust. Note: either of the two installation methods below reproduces this error.
     a. Using the suggested method without editable mode:
        git clone https://github.com/AI-secure/DecodingTrust.git && cd DecodingTrust
        pip install .
     b. Using the second method:
        conda create --name dt-test python=3.9 pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
        conda activate dt-test
        pip install "decoding-trust @ git+https://github.com/AI-secure/DecodingTrust.git"
  2. Run the evaluation:
     dt-run +ood=knowledge_2020_5shot \
         ++model=openai/gpt-3.5-turbo-0301 \
         ++key=[MyOpenAIKey] \
         ++ood.out_file=data/ood/results/gpt-3.5-turbo-0301/knowledge_2020_5shot.json
  3. Observe the error:
     omegaconf.errors.ValidationError: Invalid type assigned: str is not a subclass of OODConfig. value: knowledge_2020_5shot
         full_key: ood
         object_type=BaseConfig
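The failure mode can be mimicked with a plain-Python sketch (the class and function names below are hypothetical stand-ins, not DecodingTrust's actual code): when Hydra cannot resolve knowledge_2020_5shot to a config-group file, the override value stays a bare string, and merging a string into a field typed as a structured config fails the type check.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OODConfig:                         # hypothetical mirror of the real config class
    out_file: str = ""

@dataclass
class BaseConfig:
    ood: Optional[OODConfig] = None

def merge_override(cfg: BaseConfig, key: str, value: object) -> None:
    # Hydra-style structured merge: a value assigned to a typed field must be
    # an instance of the declared config class. When the config-group file is
    # not found on the search path, the override value stays a plain string
    # and this check fails, producing an error like the one above.
    if not isinstance(value, OODConfig):
        raise TypeError(
            f"Invalid type assigned: {type(value).__name__} is not a "
            f"subclass of OODConfig. value: {value}"
        )
    setattr(cfg, key, value)

cfg = BaseConfig()
try:
    merge_override(cfg, "ood", "knowledge_2020_5shot")  # unresolved group name
except TypeError as err:
    print(err)
```

With the group file resolved, Hydra would instead pass an OODConfig instance and the merge would succeed.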

Expected behavior
There shouldn't be any error when composing the output config. Specifically, in my case, the key 'ood' in the output config should hold a dict loaded from the config file whose name matches the command-line argument for 'ood'.

Environment:

  • Conda environment (see the commands above)

@ziyic7 ziyic7 changed the title Hydra override error when running evaluations after installation Hydra override error when running evaluations after non-editable installation Oct 20, 2023
danielz02 (Member) commented:

Where did you run dt-run? From the error message, it seems that Hydra did not recognize the ood config group.

ziyic7 (Author) commented Oct 25, 2023

Where did you run dt-run? From the error message, it seems that Hydra did not recognize the ood config group.

I ran dt-run in the repo's root dir. Sorry if I wasn't clear. Taking install 1.b as an example: after installing the package, I cloned the repo, changed to its root directory, and then ran dt-run.

@danielz02 danielz02 self-assigned this Oct 31, 2023
@danielz02 danielz02 added the bug Something isn't working label Oct 31, 2023
danielz02 (Member) commented:

Low priority - things work when dt-run is executed from the repository root.

Arnold-Qixuan-Zhang commented:

Hi. I'm having the same issue when running the toxicity assessment. I tried running dt-run +toxicity=realtoxicityprompts-toxic ++model=openai/gpt-3.5-turbo-0301 ++toxicity.n=25 ++toxicity.template=1 under .\DecodingTrust.

jinz2014 commented:

In the DecodingTrust directory,

dt-run +toxicity=realtoxicityprompts-toxic ++dry_run=True ++model=openai/gpt-3.5-turbo-0301 ++toxicity.n=25 ++toxicity.template=0
Error merging override +toxicity=realtoxicityprompts-toxic
Invalid type assigned: str is not a subclass of ToxicityConfig. value: realtoxicityprompts-toxic
full_key: toxicity
object_type=BaseConfig

jinz2014 commented:

@danielz02 You mentioned that you could run without errors. Do you think this is caused by our environment settings?

peter-peng-w commented:

Same issue here. I ran dt-run at the root directory but encountered the same error message. Any suggestions?

danielz02 (Member) commented Feb 14, 2024

Same issue here. I ran dt-run at the root directory but encountered the same error message. Any suggestions?

Hi, we are working on a new version that integrates everything more smoothly. Could you try editable install?

We also have a newer version in the release branch, and we plan to merge it to main this week.

peter-peng-w commented:

Hi, we are working on a new version that integrates everything more smoothly. Could you try editable install?

We also have a newer version in the release branch, and we plan to merge it to main this week.

Awesome! Editable install solved this issue. Also looking forward to the new version!

peter-peng-w commented:

Hi, we are working on a new version that integrates everything more smoothly. Could you try editable install?

We also have a newer version in the release branch, and we plan to merge it to main this week.

Hi, it seems that the release branch has already been merged into the main branch. However, when I reinstall and run a command such as dt-run ++model=openai/gpt-3.5-turbo-0301 ++dry_run=True ++key='' +fairness=zero_shot_br_0.0.yaml, it throws the following error:

omegaconf.errors.MissingMandatoryValue: Structured config of type `BaseConfig` has missing mandatory value: model_config
    full_key: model_config
    object_type=BaseConfig

When I print the config in main, the attribute model_config appears to be undefined:

{'model_config': '???', 'disable_sys_prompt': False, 'key': '', 'dry_run': True, 'advglue': None, 'adv_demonstration': None, 'fairness': {'data_dir': './data/fairness/fairness_data/', 'prompt_file': 'adult_0_200_test_base_rate_0.0.jsonl', 'gt_file': 'gt_labels_adult_0_200_test_base_rate_0.0.npy', 'sensitive_attr_file': 'sensitive_attr_adult_0_200_test_base_rate_0.0.npy', 'dataset': 'adult', 'out_file': './results/fairness/results/${model_config.model}/zero_shot_br_0.0.json', 'score_calculation_only': False, 'max_tokens': 20}, 'machine_ethics': None, 'ood': None, 'privacy': None, 'stereotype': None, 'toxicity': None, 'model': 'openai/gpt-3.5-turbo-0301'}

May I ask how to solve this issue?
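For context, the '???' in the config dump above is OmegaConf's sentinel for a mandatory value that was never filled in; reading such a field raises MissingMandatoryValue. A minimal stdlib sketch of this behavior (all names here are simplified stand-ins for OmegaConf's actual machinery):

```python
# Sentinel used by OmegaConf for "mandatory, not yet provided" fields.
MISSING = "???"

class MissingMandatoryValue(Exception):
    pass

class StructuredConfig:
    """Toy config object: unset mandatory fields raise on access."""

    def __init__(self, **fields):
        self._fields = fields

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails, i.e. for config keys.
        value = self._fields.get(name, MISSING)
        if value == MISSING:
            raise MissingMandatoryValue(
                f"Structured config has missing mandatory value: {name}"
            )
        return value

cfg = StructuredConfig(model="openai/gpt-3.5-turbo-0301")  # model_config left unset
try:
    _ = cfg.model_config
except MissingMandatoryValue as err:
    print(err)
```

In other words, the dump itself can be printed while model_config is still '???'; the exception only fires once something actually reads the field.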

aU53r commented Feb 20, 2024

Hi, it seems that the release branch has already been merged into the main branch. [...] May I ask how to solve this issue?

Hello, I'm not sure if it's correct, but I resolved this issue by modifying line 133 of configs.py to model_config: ModelConfig = ModelConfig()
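In plain-dataclass terms (names hypothetical; not verified against the repository's configs.py), that workaround turns a defaultless structured field into one with a default instance, which removes the mandatory-value check but leaves an empty model config:

```python
from dataclasses import dataclass, field

@dataclass
class ModelConfig:                 # hypothetical stand-in for the real class
    model: str = ""

@dataclass
class BaseConfig:
    # The workaround gives the field a default instance instead of leaving it
    # mandatory. default_factory is the portable spelling: newer Pythons
    # reject a bare mutable instance as a dataclass default.
    model_config: ModelConfig = field(default_factory=ModelConfig)

cfg = BaseConfig()
print(repr(cfg.model_config.model))  # an empty default, not an error
```

This silences the error rather than fixing it: the config now composes, but model_config carries no real model information unless something fills it in.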

danielz02 (Member) commented:

Hello, I'm not sure if it's correct, but I resolved this issue by modifying line 133 of configs.py to model_config: ModelConfig = ModelConfig()

Hi! model_config should be set to the model name. We will update the documentation shortly.

notrichardren commented Mar 22, 2024

Hi! I'm still facing this same issue.

dt-run +key=<my openai api key> toxicity=realtoxicityprompts-toxic

returns:

omegaconf.errors.MissingMandatoryValue: Structured config of type `BaseConfig` has missing mandatory value: model_config
    full_key: model_config
    object_type=BaseConfig

danielz02 (Member) commented:

Hi! Please use editable installation instead and see if the error persists.
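For anyone landing here, the editable install the maintainers recommend looks like this (assuming the standard pip workflow; with -e the package is imported from the working tree, which is presumably why the checked-in Hydra config groups resolve correctly):

```shell
git clone https://github.com/AI-secure/DecodingTrust.git && cd DecodingTrust
pip install -e .   # editable: imports resolve to this checkout, not site-packages
```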
