
A few bugs involving "tests.sh" #5

Open
wants to merge 4 commits into master

Conversation

eustomaqua

When I attempted to install this package following the Installation instructions, I hit a few bugs. Here is how I fixed them.

Bugs occurred when running "tests.sh":

  • python ./agents/infectious_disease_agents_test.py
  • python ./examples/college_admission_util_test.py
  • python ./examples/college_admission_util_test.py

Solutions:

  • pip install gin-config==0.1.1
  • pip install mock
  • modified two files: "examples/college_admission_util_test.py" and "examples/config/college_admission_config.gin"

Detailed error information:

(1) "pip install gin-config==0.1.1" is for the test

$ python ./agents/infectious_disease_agents_test.py
Traceback (most recent call last):
  File "./agents/infectious_disease_agents_test.py", line 24, in <module>
    import core
  File "/home/byj/GitHubLab/ml-fairness-gym/core.py", line 31, in <module>
    import gin
ImportError: No module named 'gin'
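
(Note: the PyPI package is named gin-config while the module it provides is imported as gin, which is why the error says "No module named 'gin'" but the fix installs gin-config.)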

(2) "pip install mock" is for the test:

$ python ./examples/college_admission_util_test.py
Traceback (most recent call last):
  File "./examples/college_admission_util_test.py", line 25, in <module>
    import mock
ImportError: No module named 'mock'

(3) Modifying the two files fixes this test:

$ python examples/college_admission_util_test.py
Running tests under Python 3.5.2: /home/byj/software/python35/bin/python3
[ RUN      ] CollegeAdmissionUtilTest.test_accuracy_nr_fn_returns_whether_predictions_correct
[       OK ] CollegeAdmissionUtilTest.test_accuracy_nr_fn_returns_whether_predictions_correct
[ RUN      ] CollegeAdmissionUtilTest.test_example_configuration_runs
[  FAILED  ] CollegeAdmissionUtilTest.test_example_configuration_runs
[ RUN      ] CollegeAdmissionUtilTest.test_social_burden_eligible_auditor_selects_eligible
[       OK ] CollegeAdmissionUtilTest.test_social_burden_eligible_auditor_selects_eligible
[ RUN      ] CollegeAdmissionUtilTest.test_stratify_by_group_returns_correct_groups
[       OK ] CollegeAdmissionUtilTest.test_stratify_by_group_returns_correct_groups
[ RUN      ] CollegeAdmissionUtilTest.test_stratify_to_one_group_stratifies_to_one_group
[       OK ] CollegeAdmissionUtilTest.test_stratify_to_one_group_stratifies_to_one_group
======================================================================
ERROR: test_example_configuration_runs (__main__.CollegeAdmissionUtilTest)
test_example_configuration_runs (__main__.CollegeAdmissionUtilTest)
Test the college admission runner end-to-end.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "examples/college_admission_util_test.py", line 78, in test_example_configuration_runs
    'third_party/py/fairness_gym/examples/config/'
  File "/home/byj/software/python35/lib/python3.5/site-packages/gin/config.py", line 1438, in parse_config_file
    raise IOError('Unable to open file: {}'.format(config_file))
OSError: Unable to open file: third_party/py/fairness_gym/examples/config/college_admission_config.gin

----------------------------------------------------------------------
Ran 5 tests in 0.003s

FAILED (errors=1)
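
For reference, a minimal sketch of the config-path change in "examples/college_admission_util_test.py" (the exact edit is in this PR's diff; the repo-relative path is an assumption based on a GitHub checkout layout):

# examples/college_admission_util_test.py (sketch, not the verbatim diff)
# The test hard-codes a Google-internal third_party path; point gin at the
# config file relative to the repository root instead.
gin.parse_config_file('examples/config/college_admission_config.gin')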

@eustomaqua (Author) commented Dec 3, 2019

Besides, after applying those fixes, a few warnings remain due to the numpy version. I leave them here for your information:

$ python examples/college_admission_util_test.py
Running tests under Python 3.5.2: /home/yjbian/VirtualEnv/py36env/bin/python
[ RUN      ] CollegeAdmissionUtilTest.test_accuracy_nr_fn_returns_whether_predictions_correct
[       OK ] CollegeAdmissionUtilTest.test_accuracy_nr_fn_returns_whether_predictions_correct
[ RUN      ] CollegeAdmissionUtilTest.test_example_configuration_runs
/home/yjbian/VirtualEnv/py36env/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
  return f(*args, **kwds)
/home/yjbian/VirtualEnv/py36env/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  return f(*args, **kwds)
/home/yjbian/VirtualEnv/py36env/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216, got 192
  return f(*args, **kwds)
/home/yjbian/VirtualEnv/py36env/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
  return f(*args, **kwds)
/home/yjbian/VirtualEnv/py36env/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  return f(*args, **kwds)
/home/yjbian/VirtualEnv/py36env/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216, got 192
  return f(*args, **kwds)
100%|█████████████████████████████████████████████████████████| 3000/3000 [00:02<00:00, 1322.07it/s]
[       OK ] CollegeAdmissionUtilTest.test_example_configuration_runs
[ RUN      ] CollegeAdmissionUtilTest.test_social_burden_eligible_auditor_selects_eligible
[       OK ] CollegeAdmissionUtilTest.test_social_burden_eligible_auditor_selects_eligible
[ RUN      ] CollegeAdmissionUtilTest.test_stratify_by_group_returns_correct_groups
[       OK ] CollegeAdmissionUtilTest.test_stratify_by_group_returns_correct_groups
[ RUN      ] CollegeAdmissionUtilTest.test_stratify_to_one_group_stratifies_to_one_group
[       OK ] CollegeAdmissionUtilTest.test_stratify_to_one_group_stratifies_to_one_group
----------------------------------------------------------------------
Ran 5 tests in 9.818s

OK

And here is one reasonable way to suppress those warnings:

# Install these filters before importing the modules that emit the warnings
# (e.g. scipy/sklearn extensions built against a different numpy ABI),
# otherwise they have no effect.
import warnings
warnings.filterwarnings("ignore", message="numpy.dtype size changed")
warnings.filterwarnings("ignore", message="numpy.ufunc size changed")

Link: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility

@eustomaqua (Author)

Pinning the scikit-learn version (sklearn.__version__) avoids errors when running "tests.sh" in:

  • python ./agents/classifier_agents_test.py
  • python ./agents/college_admission_jury_test.py

like this:

$ python agents/college_admission_jury_test.py
Running tests under Python 3.5.2: /home/byj/software/python35/bin/python3
[ RUN      ] FixedJuryTest.test_agent_produces_different_epsilon_with_epsilon_greedy
[       OK ] FixedJuryTest.test_agent_produces_different_epsilon_with_epsilon_greedy
[ RUN      ] FixedJuryTest.test_agent_produces_zero_no_epsilon_greedy
[       OK ] FixedJuryTest.test_agent_produces_zero_no_epsilon_greedy
[ RUN      ] FixedJuryTest.test_agent_raises_episode_done_error
[       OK ] FixedJuryTest.test_agent_raises_episode_done_error
[ RUN      ] FixedJuryTest.test_agent_raises_invalid_observation_error
[       OK ] FixedJuryTest.test_agent_raises_invalid_observation_error
[ RUN      ] FixedJuryTest.test_epsilon_prob_decays_as_expected
[       OK ] FixedJuryTest.test_epsilon_prob_decays_as_expected
[ RUN      ] FixedJuryTest.test_fixed_agent_simulation_runs_successfully
Starting simulation
100%|█████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 1087.42it/s]
Measuring metrics
Starting simulation
100%|█████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 1090.02it/s]
Measuring metrics
[       OK ] FixedJuryTest.test_fixed_agent_simulation_runs_successfully
[ RUN      ] NaiveJuryTest.test_agent_returns_correct_threshold
Starting simulation
100%|███████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 29.14it/s]
Measuring metrics
Starting simulation
100%|███████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 29.01it/s]
Measuring metrics
[       OK ] NaiveJuryTest.test_agent_returns_correct_threshold
[ RUN      ] NaiveJuryTest.test_agent_returns_same_threshold_till_burnin_and_then_change
Starting simulation
  0%|                                                                        | 0/10 [00:00<?, ?it/s]
[  FAILED  ] NaiveJuryTest.test_agent_returns_same_threshold_till_burnin_and_then_change
[ RUN      ] NaiveJuryTest.test_agent_returns_same_threshold_till_burnin_learns_and_freezes
Starting simulation
  0%|                                                                        | 0/10 [00:00<?, ?it/s]
[  FAILED  ] NaiveJuryTest.test_agent_returns_same_threshold_till_burnin_learns_and_freezes
[ RUN      ] NaiveJuryTest.test_get_default_features_returns_same_features
[       OK ] NaiveJuryTest.test_get_default_features_returns_same_features
[ RUN      ] NaiveJuryTest.test_jury_successfully_initializes
[       OK ] NaiveJuryTest.test_jury_successfully_initializes
[ RUN      ] NaiveJuryTest.test_label_fn_returns_correct_labels
[       OK ] NaiveJuryTest.test_label_fn_returns_correct_labels
[ RUN      ] NaiveJuryTest.test_simple_classifier_simulation_runs_successfully
Starting simulation
  0%|                                                                        | 0/10 [00:00<?, ?it/s]
[  FAILED  ] NaiveJuryTest.test_simple_classifier_simulation_runs_successfully
[ RUN      ] RobustJuryTest.test_assertion_raised_when_burnin_less_than_2
[       OK ] RobustJuryTest.test_assertion_raised_when_burnin_less_than_2
[ RUN      ] RobustJuryTest.test_correct_max_score_change_calculated_no_subsidy
[       OK ] RobustJuryTest.test_correct_max_score_change_calculated_no_subsidy
[ RUN      ] RobustJuryTest.test_correct_max_score_change_calculated_with_subsidy
[       OK ] RobustJuryTest.test_correct_max_score_change_calculated_with_subsidy
[ RUN      ] RobustJuryTest.test_correct_robust_threshold_returned
[       OK ] RobustJuryTest.test_correct_robust_threshold_returned
[ RUN      ] RobustJuryTest.test_features_manipulated_to_maximum_limit_no_control_epsilon_greedy
[       OK ] RobustJuryTest.test_features_manipulated_to_maximum_limit_no_control_epsilon_greedy
[ RUN      ] RobustJuryTest.test_features_manipulated_to_maximum_limit_with_control_epsilon_greedy
[       OK ] RobustJuryTest.test_features_manipulated_to_maximum_limit_with_control_epsilon_greedy
[ RUN      ] RobustJuryTest.test_features_manipulated_to_maximum_limit_with_gaming_control
[       OK ] RobustJuryTest.test_features_manipulated_to_maximum_limit_with_gaming_control
[ RUN      ] RobustJuryTest.test_features_manipulated_to_maximum_limit_with_no_control
[  FAILED  ] RobustJuryTest.test_features_manipulated_to_maximum_limit_with_no_control
[ RUN      ] RobustJuryTest.test_robust_classifier_simulation_runs_successfully
Starting simulation
100%|███████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 28.60it/s]
Measuring metrics
Starting simulation
100%|███████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 28.64it/s]
Measuring metrics
[       OK ] RobustJuryTest.test_robust_classifier_simulation_runs_successfully
======================================================================
FAIL: test_agent_returns_same_threshold_till_burnin_and_then_change (__main__.NaiveJuryTest)
test_agent_returns_same_threshold_till_burnin_and_then_change (__main__.NaiveJuryTest)
Tests that agent returns same threshold till burnin without freezing.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "agents/college_admission_jury_test.py", line 173, in test_agent_returns_same_threshold_till_burnin_and_then_change
    env=env, agent=agent, num_steps=10, stackelberg=True)
  File "/home/byj/GitHubLab/ml-fairness-gym/test_util.py", line 217, in run_test_simulation
    result = simulator(env, agent, metric, num_steps, seed=seed, agent_seed=seed)
  File "/home/byj/software/python35/lib/python3.5/site-packages/gin/config.py", line 1032, in wrapper
    utils.augment_exception_message_and_reraise(e, err_str)
  File "/home/byj/software/python35/lib/python3.5/site-packages/gin/utils.py", line 48, in augment_exception_message_and_reraise
    six.raise_from(proxy.with_traceback(exception.__traceback__), None)
  File "<string>", line 3, in raise_from
  File "/home/byj/software/python35/lib/python3.5/site-packages/gin/config.py", line 1009, in wrapper
    return fn(*new_args, **new_kwargs)
  File "/home/byj/GitHubLab/ml-fairness-gym/run_util.py", line 110, in run_stackelberg_simulation
    action = agent.act(observation, done)
  File "/home/byj/GitHubLab/ml-fairness-gym/core.py", line 555, in act
    return self._act_impl(observation, reward, done)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/college_admission_jury.py", line 216, in _act_impl
    self._train_model()
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/college_admission_jury.py", line 258, in _train_model
    cost_matrix=self._cost_matrix), _SCORE_MIN, _SCORE_MAX)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 138, in single_threshold
    cost_matrix)["dummy"]
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 230, in equality_of_opportunity_thresholds
    options={"maxiter": 100})
  File "/home/byj/software/python35/lib/python3.5/site-packages/scipy/optimize/_minimize.py", line 783, in minimize_scalar
    return _minimize_scalar_bounded(fun, bounds, args, **options)
  File "/home/byj/software/python35/lib/python3.5/site-packages/scipy/optimize/optimize.py", line 1741, in _minimize_scalar_bounded
    fx = func(x, *args)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 214, in negative_reward
    roc[group], tpr_target, rng=rng).iteritems():
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 107, in _threshold_from_tpr
    alpha = _interpolate(x=tpr_target, low=tpr_list[idx - 1], high=tpr_list[idx])
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 117, in _interpolate
    " %s") % (low, x, high)
AssertionError: x is not between [low, high]: Expected 1.0 <= 0.3819660112501051 <= 0.5
  In call to configurable 'run_stackelberg_simulation' (<function run_stackelberg_simulation at 0x7fa572c5cd08>)

======================================================================
FAIL: test_agent_returns_same_threshold_till_burnin_learns_and_freezes (__main__.NaiveJuryTest)
test_agent_returns_same_threshold_till_burnin_learns_and_freezes (__main__.NaiveJuryTest)
Tests that agent returns same threshold till burnin and freezes after.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "agents/college_admission_jury_test.py", line 189, in test_agent_returns_same_threshold_till_burnin_learns_and_freezes
    env=env, agent=agent, num_steps=10, stackelberg=True)
  File "/home/byj/GitHubLab/ml-fairness-gym/test_util.py", line 217, in run_test_simulation
    result = simulator(env, agent, metric, num_steps, seed=seed, agent_seed=seed)
  File "/home/byj/software/python35/lib/python3.5/site-packages/gin/config.py", line 1032, in wrapper
    utils.augment_exception_message_and_reraise(e, err_str)
  File "/home/byj/software/python35/lib/python3.5/site-packages/gin/utils.py", line 48, in augment_exception_message_and_reraise
    six.raise_from(proxy.with_traceback(exception.__traceback__), None)
  File "<string>", line 3, in raise_from
  File "/home/byj/software/python35/lib/python3.5/site-packages/gin/config.py", line 1009, in wrapper
    return fn(*new_args, **new_kwargs)
  File "/home/byj/GitHubLab/ml-fairness-gym/run_util.py", line 110, in run_stackelberg_simulation
    action = agent.act(observation, done)
  File "/home/byj/GitHubLab/ml-fairness-gym/core.py", line 555, in act
    return self._act_impl(observation, reward, done)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/college_admission_jury.py", line 216, in _act_impl
    self._train_model()
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/college_admission_jury.py", line 258, in _train_model
    cost_matrix=self._cost_matrix), _SCORE_MIN, _SCORE_MAX)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 138, in single_threshold
    cost_matrix)["dummy"]
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 230, in equality_of_opportunity_thresholds
    options={"maxiter": 100})
  File "/home/byj/software/python35/lib/python3.5/site-packages/scipy/optimize/_minimize.py", line 783, in minimize_scalar
    return _minimize_scalar_bounded(fun, bounds, args, **options)
  File "/home/byj/software/python35/lib/python3.5/site-packages/scipy/optimize/optimize.py", line 1741, in _minimize_scalar_bounded
    fx = func(x, *args)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 214, in negative_reward
    roc[group], tpr_target, rng=rng).iteritems():
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 107, in _threshold_from_tpr
    alpha = _interpolate(x=tpr_target, low=tpr_list[idx - 1], high=tpr_list[idx])
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 117, in _interpolate
    " %s") % (low, x, high)
AssertionError: x is not between [low, high]: Expected 1.0 <= 0.3819660112501051 <= 0.5
  In call to configurable 'run_stackelberg_simulation' (<function run_stackelberg_simulation at 0x7fa572c5cd08>)

======================================================================
FAIL: test_simple_classifier_simulation_runs_successfully (__main__.NaiveJuryTest)
test_simple_classifier_simulation_runs_successfully (__main__.NaiveJuryTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "agents/college_admission_jury_test.py", line 128, in test_simple_classifier_simulation_runs_successfully
    test_util.run_test_simulation(env=env, agent=agent, stackelberg=True)
  File "/home/byj/GitHubLab/ml-fairness-gym/test_util.py", line 217, in run_test_simulation
    result = simulator(env, agent, metric, num_steps, seed=seed, agent_seed=seed)
  File "/home/byj/software/python35/lib/python3.5/site-packages/gin/config.py", line 1032, in wrapper
    utils.augment_exception_message_and_reraise(e, err_str)
  File "/home/byj/software/python35/lib/python3.5/site-packages/gin/utils.py", line 48, in augment_exception_message_and_reraise
    six.raise_from(proxy.with_traceback(exception.__traceback__), None)
  File "<string>", line 3, in raise_from
  File "/home/byj/software/python35/lib/python3.5/site-packages/gin/config.py", line 1009, in wrapper
    return fn(*new_args, **new_kwargs)
  File "/home/byj/GitHubLab/ml-fairness-gym/run_util.py", line 110, in run_stackelberg_simulation
    action = agent.act(observation, done)
  File "/home/byj/GitHubLab/ml-fairness-gym/core.py", line 555, in act
    return self._act_impl(observation, reward, done)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/college_admission_jury.py", line 216, in _act_impl
    self._train_model()
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/college_admission_jury.py", line 258, in _train_model
    cost_matrix=self._cost_matrix), _SCORE_MIN, _SCORE_MAX)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 138, in single_threshold
    cost_matrix)["dummy"]
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 230, in equality_of_opportunity_thresholds
    options={"maxiter": 100})
  File "/home/byj/software/python35/lib/python3.5/site-packages/scipy/optimize/_minimize.py", line 783, in minimize_scalar
    return _minimize_scalar_bounded(fun, bounds, args, **options)
  File "/home/byj/software/python35/lib/python3.5/site-packages/scipy/optimize/optimize.py", line 1741, in _minimize_scalar_bounded
    fx = func(x, *args)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 214, in negative_reward
    roc[group], tpr_target, rng=rng).iteritems():
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 107, in _threshold_from_tpr
    alpha = _interpolate(x=tpr_target, low=tpr_list[idx - 1], high=tpr_list[idx])
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 117, in _interpolate
    " %s") % (low, x, high)
AssertionError: x is not between [low, high]: Expected 1.0 <= 0.3819660112501051 <= 0.5
  In call to configurable 'run_stackelberg_simulation' (<function run_stackelberg_simulation at 0x7fa572c5cd08>)

======================================================================
FAIL: test_features_manipulated_to_maximum_limit_with_no_control (__main__.RobustJuryTest)
test_features_manipulated_to_maximum_limit_with_no_control (__main__.RobustJuryTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "agents/college_admission_jury_test.py", line 312, in test_features_manipulated_to_maximum_limit_with_no_control
    agent.act(observations, done=False)
  File "/home/byj/GitHubLab/ml-fairness-gym/core.py", line 555, in act
    return self._act_impl(observation, reward, done)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/college_admission_jury.py", line 216, in _act_impl
    self._train_model()
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/college_admission_jury.py", line 258, in _train_model
    cost_matrix=self._cost_matrix), _SCORE_MIN, _SCORE_MAX)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 138, in single_threshold
    cost_matrix)["dummy"]
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 230, in equality_of_opportunity_thresholds
    options={"maxiter": 100})
  File "/home/byj/software/python35/lib/python3.5/site-packages/scipy/optimize/_minimize.py", line 783, in minimize_scalar
    return _minimize_scalar_bounded(fun, bounds, args, **options)
  File "/home/byj/software/python35/lib/python3.5/site-packages/scipy/optimize/optimize.py", line 1741, in _minimize_scalar_bounded
    fx = func(x, *args)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 214, in negative_reward
    roc[group], tpr_target, rng=rng).iteritems():
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 107, in _threshold_from_tpr
    alpha = _interpolate(x=tpr_target, low=tpr_list[idx - 1], high=tpr_list[idx])
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 117, in _interpolate
    " %s") % (low, x, high)
AssertionError: x is not between [low, high]: Expected 1.0 <= 0.3819660112501051 <= 0.5

----------------------------------------------------------------------
Ran 22 tests in 1.863s

FAILED (failures=4)
$ python agents/classifier_agents_test.py
Running tests under Python 3.5.2: /home/byj/software/python35/bin/python3
[ RUN      ] ClassifierAgentTest.test_agent_trains
[       OK ] ClassifierAgentTest.test_agent_trains
[ RUN      ] ClassifierAgentTest.test_agent_trains_with_two_features
[       OK ] ClassifierAgentTest.test_agent_trains_with_two_features
[ RUN      ] ClassifierAgentTest.test_insufficient_burnin_raises
W1203 15:57:05.398771 140338255681280 classifier_agents.py:330] Could not fit the classifier at step 5. This may be because there is  not enough data. Consider using a longer burn-in period to ensure that sufficient data is collected. See the exception for more details on why it was raised.
[       OK ] ClassifierAgentTest.test_insufficient_burnin_raises
[ RUN      ] ClassifierAgentTest.test_interact_with_env_replicable
Starting simulation
  0%|                                                                        | 0/10 [00:00<?, ?it/s]
[  FAILED  ] ClassifierAgentTest.test_interact_with_env_replicable
[ RUN      ] ThresholdAgentTest.test_agent_can_learn_different_thresholds
/home/byj/software/python35/lib/python3.5/site-packages/sklearn/metrics/ranking.py:571: UndefinedMetricWarning: No positive samples in y_true, true positive value should be meaningless
  UndefinedMetricWarning)
[  FAILED  ] ThresholdAgentTest.test_agent_can_learn_different_thresholds
[ RUN      ] ThresholdAgentTest.test_agent_on_one_hot_vectors
[       OK ] ThresholdAgentTest.test_agent_on_one_hot_vectors
[ RUN      ] ThresholdAgentTest.test_agent_raises_with_improper_number_of_features
[       OK ] ThresholdAgentTest.test_agent_raises_with_improper_number_of_features
[ RUN      ] ThresholdAgentTest.test_agent_seed
[       OK ] ThresholdAgentTest.test_agent_seed
[ RUN      ] ThresholdAgentTest.test_agent_trains
[       OK ] ThresholdAgentTest.test_agent_trains
[ RUN      ] ThresholdAgentTest.test_freeze_after_burnin
[       OK ] ThresholdAgentTest.test_freeze_after_burnin
[ RUN      ] ThresholdAgentTest.test_frozen_classifier_never_trains
[       OK ] ThresholdAgentTest.test_frozen_classifier_never_trains
[ RUN      ] ThresholdAgentTest.test_interact_with_env_replicable
Starting simulation
100%|█████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 5686.42it/s]
Measuring metrics
Starting simulation
100%|█████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 5711.20it/s]
Measuring metrics
[       OK ] ThresholdAgentTest.test_interact_with_env_replicable
[ RUN      ] ThresholdAgentTest.test_one_hot_conversion
[       OK ] ThresholdAgentTest.test_one_hot_conversion
[ RUN      ] ThresholdAgentTest.test_skip_retraining_fn
[       OK ] ThresholdAgentTest.test_skip_retraining_fn
[ RUN      ] ThresholdAgentTest.test_threshold_history_is_recorded
/home/byj/software/python35/lib/python3.5/site-packages/sklearn/metrics/ranking.py:563: UndefinedMetricWarning: No negative samples in y_true, false positive value should be meaningless
  UndefinedMetricWarning)
[  FAILED  ] ThresholdAgentTest.test_threshold_history_is_recorded
[ RUN      ] TrainingCorpusTest.test_filter_unlabeled
[       OK ] TrainingCorpusTest.test_filter_unlabeled
[ RUN      ] TrainingCorpusTest.test_get_weights
[       OK ] TrainingCorpusTest.test_get_weights
[ RUN      ] TrainingCorpusTest.test_training_example_is_labeled_is_correct
[       OK ] TrainingCorpusTest.test_training_example_is_labeled_is_correct
======================================================================
FAIL: test_interact_with_env_replicable (__main__.ClassifierAgentTest)
test_interact_with_env_replicable (__main__.ClassifierAgentTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "agents/classifier_agents_test.py", line 524, in test_interact_with_env_replicable
    test_util.run_test_simulation(env=env, agent=agent)
  File "/home/byj/GitHubLab/ml-fairness-gym/test_util.py", line 217, in run_test_simulation
    result = simulator(env, agent, metric, num_steps, seed=seed, agent_seed=seed)
  File "/home/byj/software/python35/lib/python3.5/site-packages/gin/config.py", line 1032, in wrapper
    utils.augment_exception_message_and_reraise(e, err_str)
  File "/home/byj/software/python35/lib/python3.5/site-packages/gin/utils.py", line 48, in augment_exception_message_and_reraise
    six.raise_from(proxy.with_traceback(exception.__traceback__), None)
  File "<string>", line 3, in raise_from
  File "/home/byj/software/python35/lib/python3.5/site-packages/gin/config.py", line 1009, in wrapper
    return fn(*new_args, **new_kwargs)
  File "/home/byj/GitHubLab/ml-fairness-gym/run_util.py", line 52, in run_simulation
    action = agent.act(observation, done)
  File "/home/byj/GitHubLab/ml-fairness-gym/core.py", line 555, in act
    return self._act_impl(observation, reward, done)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/classifier_agents.py", line 194, in _act_impl
    self._train()
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/classifier_agents.py", line 232, in _train
    self._set_thresholds(training_corpus)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/classifier_agents.py", line 239, in _set_thresholds
    cost_matrix=self.params.cost_matrix)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 138, in single_threshold
    cost_matrix)["dummy"]
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 230, in equality_of_opportunity_thresholds
    options={"maxiter": 100})
  File "/home/byj/software/python35/lib/python3.5/site-packages/scipy/optimize/_minimize.py", line 783, in minimize_scalar
    return _minimize_scalar_bounded(fun, bounds, args, **options)
  File "/home/byj/software/python35/lib/python3.5/site-packages/scipy/optimize/optimize.py", line 1741, in _minimize_scalar_bounded
    fx = func(x, *args)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 214, in negative_reward
    roc[group], tpr_target, rng=rng).iteritems():
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 107, in _threshold_from_tpr
    alpha = _interpolate(x=tpr_target, low=tpr_list[idx - 1], high=tpr_list[idx])
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 117, in _interpolate
    " %s") % (low, x, high)
AssertionError: x is not between [low, high]: Expected 1.0 <= 0.3819660112501051 <= 0.5
  In call to configurable 'run_simulation' (<function run_simulation at 0x7fa2f165f598>)

======================================================================
FAIL: test_agent_can_learn_different_thresholds (__main__.ThresholdAgentTest)
test_agent_can_learn_different_thresholds (__main__.ThresholdAgentTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "agents/classifier_agents_test.py", line 191, in test_agent_can_learn_different_thresholds
    done=False)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/classifier_agents.py", line 194, in _act_impl
    self._train()
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/classifier_agents.py", line 232, in _train
    self._set_thresholds(training_corpus)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/classifier_agents.py", line 239, in _set_thresholds
    cost_matrix=self.params.cost_matrix)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 138, in single_threshold
    cost_matrix)["dummy"]
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 230, in equality_of_opportunity_thresholds
    options={"maxiter": 100})
  File "/home/byj/software/python35/lib/python3.5/site-packages/scipy/optimize/_minimize.py", line 783, in minimize_scalar
    return _minimize_scalar_bounded(fun, bounds, args, **options)
  File "/home/byj/software/python35/lib/python3.5/site-packages/scipy/optimize/optimize.py", line 1741, in _minimize_scalar_bounded
    fx = func(x, *args)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 214, in negative_reward
    roc[group], tpr_target, rng=rng).iteritems():
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 107, in _threshold_from_tpr
    alpha = _interpolate(x=tpr_target, low=tpr_list[idx - 1], high=tpr_list[idx])
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 117, in _interpolate
    " %s") % (low, x, high)
AssertionError: x is not between [low, high]: Expected 1.0 <= 0.3819660112501051 <= 0.5

======================================================================
FAIL: test_threshold_history_is_recorded (__main__.ThresholdAgentTest)
test_threshold_history_is_recorded (__main__.ThresholdAgentTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "agents/classifier_agents_test.py", line 304, in test_threshold_history_is_recorded
    agent.act(observation_space.sample(), False)
  File "/home/byj/GitHubLab/ml-fairness-gym/core.py", line 555, in act
    return self._act_impl(observation, reward, done)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/classifier_agents.py", line 194, in _act_impl
    self._train()
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/classifier_agents.py", line 232, in _train
    self._set_thresholds(training_corpus)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/classifier_agents.py", line 239, in _set_thresholds
    cost_matrix=self.params.cost_matrix)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 138, in single_threshold
    cost_matrix)["dummy"]
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 230, in equality_of_opportunity_thresholds
    options={"maxiter": 100})
  File "/home/byj/software/python35/lib/python3.5/site-packages/scipy/optimize/_minimize.py", line 783, in minimize_scalar
    return _minimize_scalar_bounded(fun, bounds, args, **options)
  File "/home/byj/software/python35/lib/python3.5/site-packages/scipy/optimize/optimize.py", line 1741, in _minimize_scalar_bounded
    fx = func(x, *args)
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 214, in negative_reward
    roc[group], tpr_target, rng=rng).iteritems():
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 107, in _threshold_from_tpr
    alpha = _interpolate(x=tpr_target, low=tpr_list[idx - 1], high=tpr_list[idx])
  File "/home/byj/GitHubLab/ml-fairness-gym/agents/threshold_policies.py", line 117, in _interpolate
    " %s") % (low, x, high)
AssertionError: x is not between [low, high]: Expected 1.0 <= 0.3819660112501051 <= 1.0

----------------------------------------------------------------------
Ran 18 tests in 0.976s

FAILED (failures=3)
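
For context, every failure above ends at the same guard in "agents/threshold_policies.py" (the PR does not name the exact scikit-learn pin). A minimal reconstruction from the traceback, as a sketch (only the assertion text is taken from the log; the docstring and interpolation line are assumptions):

# agents/threshold_policies.py, _interpolate (reconstruction from the
# traceback above, not the verbatim source)
def _interpolate(x, low, high):
  """Returns the mixing weight of x between low and high."""
  # The guard that fires in every failure above: interpolation assumes the
  # ROC-derived tpr_list is sorted so that low <= x <= high. With a
  # different scikit-learn version the points evidently arrive in another
  # order (here low=1.0, high=0.5), hence the suggestion to pin the
  # scikit-learn version.
  assert low <= x <= high, ("x is not between [low, high]: Expected %s <= %s"
                            " <= %s") % (low, x, high)
  return (x - low) / (high - low)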

@eustomaqua (Author)

For the simulation in "examples/docs/college_admission_example.md", note that the first command below causes errors while the second one runs fine.

$ python examples/college_admission_main.py --verbose=True --feature_mu='0.5','0.5' --output_dir='/home/byj/GitHubLab/ml-fairness-gym/kdd'
$ python examples/college_admission_main.py --verbose=True

Error information:

$ python examples/college_admission_main.py --verbose=True --feature_mu='0.5','0.5' --output_dir='/home/yjbian/GitHubLab/ml-fairness-gym/kdd'
Traceback (most recent call last):
  File "examples/college_admission_main.py", line 459, in <module>
    app.run(main)
  File "/home/yjbian/VirtualEnv/py36env/lib/python3.5/site-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/home/yjbian/VirtualEnv/py36env/lib/python3.5/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "examples/college_admission_main.py", line 451, in main
    results, thresholds = run_baseline_experiment(FLAGS.feature_mu)
  File "examples/college_admission_main.py", line 340, in run_baseline_experiment
    json_dump = college_experiment.run_experiment()
  File "/home/yjbian/GitHubLab/ml-fairness-gym/examples/college_admission.py", line 174, in run_experiment
    env, agent = self.build_scenario()
  File "/home/yjbian/GitHubLab/ml-fairness-gym/examples/college_admission.py", line 114, in build_scenario
    env = college_admission.CollegeAdmissionsEnv(user_params=self.env_config)
  File "/home/yjbian/VirtualEnv/py36env/lib/python3.5/site-packages/gin/config.py", line 1032, in wrapper
    utils.augment_exception_message_and_reraise(e, err_str)
  File "/home/yjbian/VirtualEnv/py36env/lib/python3.5/site-packages/gin/utils.py", line 48, in augment_exception_message_and_reraise
    six.raise_from(proxy.with_traceback(exception.__traceback__), None)
  File "<string>", line 3, in raise_from
  File "/home/yjbian/VirtualEnv/py36env/lib/python3.5/site-packages/gin/config.py", line 1009, in wrapper
    return fn(*new_args, **new_kwargs)
  File "/home/yjbian/GitHubLab/ml-fairness-gym/environments/college_admission.py", line 202, in __init__
    self._state_init()
  File "/home/yjbian/GitHubLab/ml-fairness-gym/environments/college_admission.py", line 208, in _state_init
    true_thresholds=self._get_true_thresholds(),
  File "/home/yjbian/GitHubLab/ml-fairness-gym/environments/college_admission.py", line 220, in _get_true_thresholds
    for group_id in range(2)
  File "/home/yjbian/GitHubLab/ml-fairness-gym/environments/college_admission.py", line 220, in <dictcomp>
    for group_id in range(2)
TypeError: can't multiply sequence by non-int of type 'float'
  In call to configurable 'CollegeAdmissionsEnv' (<function CollegeAdmissionsEnv.__init__ at 0x7f01e3abd620>)
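The TypeError indicates the --feature_mu values reach the environment as strings (a str like '0.5' is a sequence, which cannot be multiplied by a float). A defensive cast along these lines would sidestep it (a hypothetical sketch, not a fix adopted in this PR):

# examples/college_admission_main.py (hypothetical sketch)
# absl flags defined as lists yield their items as strings; cast them to
# float before they are handed to CollegeAdmissionsEnv.
feature_mu = [float(mu) for mu in FLAGS.feature_mu]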
