Variance in scipy.special.lambertw output depending on scipy version and installation source #2000
Comments
Without looking at the test details, my gut feeling would be that the test case is much too close to the edge, so to speak, if the last decimal place matters. Can you move away from the edge?
Feedback from scipy maintainers: this level of variance is expected across different scipy versions and distributions, and is not something they view as a bug to be fixed. So we should either find a different magic input and/or adjust the test strategy to not rely on the full precision of scipy's output, as @adriesse suggests.
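As a rough illustration of that adjusted test strategy (a sketch only; the keyword names and parameter values are taken from the script later in this thread, not from the merged test), the assertion could use an explicit tolerance instead of depending on scipy's last decimal place:

```python
import numpy as np
from pvlib.singlediode import _lambertw_v_from_i


def test_voc_near_zero_with_tolerance():
    # Hypothetical rework: assert Voc is close to zero within an explicit
    # tolerance, so a last-bit difference between scipy builds cannot flip
    # the sign of the result and fail the test.
    voc = _lambertw_v_from_i(
        current=0.0,
        photocurrent=0.0,
        saturation_current=1.480501e-11,
        resistance_series=0.178,
        resistance_shunt=8000.0,
        nNsVth=1.797559,
    )
    assert np.isclose(voc, 0.0, atol=1e-9)
```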
Credits to @cwhanse for the following comment:
Locally I see the same: no failures. However, on a remote Win11 machine I have installed conda, and what I find strange is that, compared to bare Python, it is the only environment that gives the following failure:

Package versions for the conda and bare-Python environments are listed, along with outputs from running conda and then bare Python (scipy 1.13.0), then restarting the remote machine and running only bare Python (scipy 1.13.0), then checking again with bare Python (scipy 1.12.0). I think we can isolate this problem to conda only, possibly due to some difference in their recipe/runner OS config compared to the scipy build procedure. Anyway, that does not change the fact that these tests require some rework. And by the way, there are a few warnings too; IMO we should strive for clean tests (edit: I'm referring especially to DeprecationWarnings). Edit: output from the stated runs on the remote Win11 machine is linked in this Mega folder.
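A minimal way to compare the two environments (a sketch; `argw` is a hypothetical placeholder, since the exact argument from the failing test is not repeated here) is to print the full repr of the Lambert W result in each install and diff the digits:

```python
import scipy
from scipy.special import lambertw

# Run this identical snippet in the conda and the pip environments and
# compare the printed digits; argw stands in for the argument used in the
# failing test.
argw = 0.5
print("scipy", scipy.__version__)
print(repr(lambertw(argw)))
```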
I don't think we want to do that. The point of that test is to confirm that small negative Voc is converted to 0, without large negative Voc being converted (as those indicate a problem with the inputs). See these lines in pvlib/singlediode.py (line 786 at ccf74ea).
I think we want a test that produces small negative Voc consistently, which is not easy to formulate.
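For reference, a rough sketch of the behaviour that comment describes (illustrative only, not the exact pvlib code): Voc values that are negative purely from round-off are clamped to 0, while larger negative values are left alone so that bad inputs stay visible.

```python
import numpy as np


def clamp_small_negative_voc(v_oc, tol=1e-12):
    # Tiny negative Voc (round-off) becomes 0; "large" negative Voc is kept
    # so that problematic inputs are not silently hidden.
    v_oc = np.atleast_1d(np.asarray(v_oc, dtype=float)).copy()
    v_oc[(v_oc < 0) & (v_oc > -tol)] = 0.0
    return v_oc


print(clamp_small_negative_voc([-1e-15, -0.5, 0.2]))  # -> [ 0.  -0.5  0.2]
```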
@cwhanse so I should find which range of inputs generates a small negative Voc consistently?
That test was written to satisfy code coverage requirements after we decided to set small negative Voc to 0, since these Voc values can occur due to round-off error. Options:
Open to other ideas.
I started working on no. 1, but I've realised we can just mock the output of `_lambertw_v_from_i` for the v_oc. What do you think of the latter?

No. 1 script:

```python
import numpy as np
from pvlib.singlediode import _lambertw_v_from_i
from functools import partial

# Baseline single-diode parameters around which one input is swept
test_premise = {
    "photocurrent": 0.0,
    "saturation_current": 1.480501e-11,
    "resistance_series": 0.178,
    "resistance_shunt": 8000.0,
    "nNsVth": 1.797559,
}
variable_sweep = "saturation_current"
percentage_sweep = 0.5
bin_min, bin_max = -1e-12, 0.0

# Fix every parameter except the one being swept
partial_kwargs = {k: v for k, v in test_premise.items() if k != variable_sweep}
voc_lambertw = partial(_lambertw_v_from_i, **partial_kwargs)

# Sweep the chosen parameter +/- 50% around its baseline value
param_sweep = np.linspace(
    test_premise[variable_sweep] * (1 - percentage_sweep),
    test_premise[variable_sweep] * (1 + percentage_sweep),
    10000,
)
voc_returned = voc_lambertw(current=0.0, **{variable_sweep: param_sweep})

# First and last parameter values whose Voc falls inside (bin_min, bin_max)
param_min, param_max = param_sweep[
    (voc_returned < bin_max) & (voc_returned > bin_min)
][(0, -1),]
print(
    f"Input range of {variable_sweep} where _lambertw_v_from_i is between "
    f"{bin_min} and {bin_max}: {param_min} to {param_max}"
)
fulfills_condition = param_min < test_premise[variable_sweep] < param_max
print(
    f"Condition param_min < {variable_sweep} < param_max: {fulfills_condition}"
)
assert fulfills_condition
```

I'll be trying this script on a Win11 x64 with conda.

EDIT1: regarding the mocking, I don't expect it to create a problem in the long run, since what I understand is that the v_oc check is a fast path for the calculation. If I'm wrong and it is there to avoid problems, then we should choose another approach.
I ended up doing the sixth option, mocking of `_lambertw_v_from_i`. Mind giving it a review? Also, thanks for all the insight regarding what the test was about.
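For anyone skimming the thread, a minimal sketch of that mocking approach (the patch target, parameter values, and assertion are assumptions for illustration, not the exact test that was merged):

```python
from unittest import mock

import numpy as np
import pvlib


def test_small_negative_voc_is_clamped():
    # Force the internal Lambert W voltage solver to report a tiny negative
    # Voc so the "clamp small negative Voc to 0" branch runs regardless of
    # which scipy build is installed.
    with mock.patch(
        "pvlib.singlediode._lambertw_v_from_i", return_value=np.array([-1e-14])
    ):
        out = pvlib.pvsystem.singlediode(
            photocurrent=np.array([0.0]),
            saturation_current=np.array([1.5e-11]),
            resistance_series=0.178,
            resistance_shunt=8000.0,
            nNsVth=1.797559,
            method="lambertw",
        )
    assert np.all(out["v_oc"] == 0.0)
```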
It seems like there's a working test solution already, but could the edge have been softened by using the range (-1e-6, 0) rather than (-1e-12, 0)?
Still curious, although this is now closed.
Originally discovered in #1996 (comment)
It seems that `scipy.special.lambertw` can return slightly different values from 1.12.0, depending on how it is installed (with conda or from PyPI with pip). I observe this difference locally on Windows:

As an aside, `mpmath` says that the one ending in `8` is the more accurate of the two:

The relevance is that the particular `argW` value above was (implicitly) used to generate negative values of `v_oc` using `_lambertw_v_from_i` for testing in `test_singlediode.py::test_singlediode_lambert_negative_voc`. The change from `...8` to `...7` means that the returned voltage is no longer negative and thus causes the test to fail.

Other information: `scipy.special.lambertw` was translated from Cython to C++ for 1.12.0 in scipy/scipy#19435.

What should we do about it? I think the first step is to ask the scipy folks whether the difference is considered a bug in scipy that they will address; I will ask the scipy maintainers about this. If they say it is not a bug, then I suppose we should identify alternative values of `argW` that result in negative Voc for both versions of the scipy calculation and edit our test.
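For completeness, a sketch of the mpmath cross-check mentioned in the aside above (`argw` is a hypothetical placeholder; the actual argument from the failing test is not reproduced here):

```python
import mpmath
from scipy.special import lambertw

argw = 0.5  # hypothetical placeholder for the argument under suspicion
mpmath.mp.dps = 30  # 30 significant digits as a high-precision reference
print(mpmath.lambertw(argw))  # mpmath reference value
print(lambertw(argw))         # scipy's double-precision result
```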