Add scipy #648
Conversation
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR.
build:
  number: 100
  # We lack openblas on Windows, and therefore can't build scipy there either currently.
  skip: true  # [win]
Skipping Windows for now as we don't have OpenBLAS over there.
Force-pushed from 26d772a to 930014f.
So, basically, we are getting 2 test failures if
Another concerning issue is that I'm seeing a segmentation fault when running the tests locally. This would explain why there isn't more info in the log about the cause of the failures. Normally,
FWIW, I tried testing the
Seems like the
CircleCI passes. Travis CI is timing out because of the long test time, though we are confident this is building correctly. We could comment out the tests on Travis CI; however, that could allow potentially bad binaries to ship when we convert this to a feedstock. Instead, I would much rather keep the tests in place and convert to a feedstock. If there are test failures, we can handle them there. This way we don't have to worry about bad binaries shipping in those cases.

As Travis CI completed two builds in that time, we are also confident that the Travis CI build time should be sufficient for building single combinations of SciPy with a given Python and NumPy. As this was reviewed some before, and not having SciPy is causing serious issues in recipes and feedstocks, we are going to send this one through now.
Feedback welcome at the feedstock. Sorry again for fast-tracking this.
@pv @charris @matthew-brett @rgommers @stefanv @juliantaylor @alexbrc @ev-br, just letting you know that we have added scipy.
Thanks for pinging @jakirkham. I'm definitely interested in (a) understanding the setup here, and (b) helping out. Main questions I have now:
Detail questions about
I can answer a few of those:
That (the high build number) is a workaround to ensure conda will pick this package instead of the default channel's version. It is controversial and we don't like it. People are working on conda to improve its behavior with multiple channels so we can drop this.
That is a special comment called a pre-processing selector. It ensures that the line will be present on Windows and ignored everywhere else. So the
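As a sketch of how such selectors read in a recipe's meta.yaml (this fragment is illustrative, not the exact scipy recipe), the comment after `#` decides on which platforms the line is kept:

```yaml
build:
  number: 100
  # Line below is only active when building on Windows,
  # so the package is skipped there and built everywhere else.
  skip: true  # [win]

requirements:
  build:
    # Hypothetical example: dependency used on every platform except Windows.
    - openblas  # [not win]
```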
Thanks @ocefpaf, clear.
Of course, @rgommers. That's great. Happy to help you understand what is going on over here. Also, happy to help get you access so you can help out.
On the Anaconda side, I'm not aware of a formal doc, but I can give you the lay of the land. Originally, everything on Mac was built against the Accelerate framework. On Linux, ATLAS was used initially, then there was a switch to OpenBLAS last year. On Windows, MKL has always been used. Recently, MKL was released to the

As for conda-forge, @pelson and I sat down and came up with these ground rules for BLAS packaging, though we may modify them in the future once there are some changes to
So, this is a really long conversation. 😄 The short answer is that NumPy and Python have special status in

Generally, how pinning of versions is done for ABI compatibility will likely change in the future. This is a product of the success of
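As a sketch of what that version pinning looks like in practice (an illustrative recipe fragment, not the exact scipy recipe), conda-build's `x.x` notation pins a package against the NumPy series present at build time:

```yaml
requirements:
  build:
    - python
    # Build against whichever NumPy is injected at build time;
    # the resulting package records that exact x.x series.
    - numpy x.x
  run:
    - python
    # Require the same NumPy series at runtime, for ABI compatibility.
    - numpy x.x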
It is not.
The main issue right now is we don't have a BLAS that we can use on Windows. Continuum uses MKL. While supporting MKL is of interest to us if possible, there are some legal and logistical hurdles we are going to have to go through. In the end, we still need to have an open source BLAS (preferably as the default BLAS). Our plan is to use OpenBLAS on all platforms. We do that on Mac and Linux now. However, we need a build of OpenBLAS on Windows to use it there. Some discussion has been occurring on this issue ( conda-forge/openblas-feedstock#2 ). There is some discussion on Windows intermingled with other points in the PR ( #364 ) that added OpenBLAS here. To get a reasonable build of OpenBLAS on Windows, I think the general consensus is to use the MinGW-w64 toolchain. There are a number of thoughts on how to do that exactly. This may take some time to get right. So I'm leaning towards using a repackaging of MSYS2's OpenBLAS to get something working with our stack here soon. Then we can go back and revisit how to build OpenBLAS from scratch if we want to.
It's a fair question. This basically came from the recipe that @msarahan shared, so he may have some more thoughts on this. The main point of import tests is just to make sure we can import the various modules OK. Normally, if these fail, it is some sort of linking issue. While this can be caught by the normal test suite, the import tests are a bit faster and help localize the problem a bit better. Though they tend to be more useful when getting a recipe working and less useful after. Personally, I have no strong feelings on them. So if you think they should be removed, I'm OK with that.
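A minimal import-test section would look roughly like the following (the submodule list here is just a plausible sample, not the recipe's actual list):

```yaml
test:
  imports:
    - scipy
    # A quick smoke test that the BLAS/LAPACK linkage resolved correctly.
    - scipy.linalg
    - scipy.sparse
```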
So, We do have
Absolutely. We should use all optional dependencies that are involved in tests. We also have
Clear. Since MPL is only used for one function and
Both are used, but only for a couple of functions.
No, I see what the idea is now - that's probably helpful and it can't really hurt. Better than getting the tests run with dozens of test errors.
@jakirkham thanks for the long explanation on BLAS & Numpy, it's much clearer to me now. The invocation to get a particular BLAS variant and version is indeed not very user-friendly yet; hope that's going to improve in conda. But I guess most users will be OK with MKL Numpy from the defaults channel.
Boy, that was long. Sorry about that.
Glad it is clearer at least.
So, the
We do too. We have been pushing on various points in

Though there is a long history around features in general, and the feeling at this point is that they are broken and need fixing. While it is disheartening that they are a mess, it is encouraging that we are not arguing about whether they are a mess, but only about how we should fix them. On that front, there are many promising ideas.
Hmm, doesn't work for me on a freshly installed Miniconda 3.5:
Really? Works for me. Hmm... Could you please try updating
Ah, found the issue - no 32-bit installers at https://anaconda.org/conda-forge/numpy |
I see. Yeah, we don't currently build 32-bit Linux, though we could investigate doing it. Certainly have given it some thought before. Are you using 32-bit regularly?
Yes, I'm working mostly on a 32-bit Linux desktop. It helps smoke out some issues that aren't caught otherwise (like issues due to
So, I don't think we are against using 32-bit Linux to build things. The main problem is CircleCI doesn't have matrix support. As a result, we loop through build combos (Python versions, NumPy versions, any other parameters to loop) in the feedstock. The net result is that some builds are very close to the CircleCI time limit, and adding 32-bit would put us over the limit in these cases. If there were proper matrix support or longer build time limits, we would be much better equipped to pursue this.
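A hypothetical sketch of that per-feedstock loop (the version values and the final `conda build` invocation are placeholders for illustration, not the actual CI script):

```shell
#!/bin/sh
# Loop over Python/NumPy build combinations ourselves,
# since CircleCI provides no native build matrix.
for PY in 2.7 3.5; do
  for NPY in 1.10 1.11; do
    # In a real feedstock this step would invoke conda-build, e.g.:
    #   conda build recipe --python="$PY" --numpy="$NPY"
    echo "building for python=$PY numpy=$NPY"
  done
done
```

Adding a 32-bit target would multiply the number of iterations again, which is why it pushes long builds over the single-container time limit.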
Also have found this strategy to be useful with scipy. It may be useful in the construction of the PR ( conda-forge/gsl-feedstock#18 ) for gsl, @isuruf.
test -f "${PREFIX}/lib/libopenblas.a" && ln -fs "${PREFIX}/lib/libopenblas.a" "${PREFIX}/lib/libblas.a"
test -f "${PREFIX}/lib/libopenblas.a" && ln -fs "${PREFIX}/lib/libopenblas.a" "${PREFIX}/lib/liblapack.a"
test -f "${PREFIX}/lib/libopenblas.${DYLIB_EXT}" && ln -fs "${PREFIX}/lib/libopenblas.${DYLIB_EXT}" "${PREFIX}/lib/libblas.${DYLIB_EXT}"
test -f "${PREFIX}/lib/libopenblas.${DYLIB_EXT}" && ln -fs "${PREFIX}/lib/libopenblas.${DYLIB_EXT}" "${PREFIX}/lib/liblapack.${DYLIB_EXT}"
Note that we create symlinks to the standard blas and lapack library names before the build begins. The compilers are smart enough to follow them to their true location, and thus link against libopenblas directly.
rm -f "${PREFIX}/lib/libblas.a"
rm -f "${PREFIX}/lib/liblapack.a"
rm -f "${PREFIX}/lib/libblas.${DYLIB_EXT}"
rm -f "${PREFIX}/lib/liblapack.${DYLIB_EXT}"
We then clean them up after the build ends. This is done to ensure they don't get packaged. Plus, it won't matter for anything built, as it will already be linked against libopenblas.
We've just got Scipy Windows wheels to build - scipy/scipy#7616 . Most credit goes to a lot of hard work from @xoviat - a tip of the hat from here. Build machinery is all on GitHub / AppVeyor in case it's useful.
Closes #516

Takes over the excellent work of @msarahan in this PR ( #516 ) so that we can get scipy into conda-forge fast.

The reason this needs to happen fast is that we are kind of in limbo: we have numpy packaged and a system for BLAS linkages, but we don't have scipy packaged, and we are still suffering from problems with the mkl package. As a result, many additions requiring numpy and scipy are broken or in some other weird state (e.g. pinning mkl when they are BLAS invariant).

To be clear, I'm still open to comments and feedback. It just means that it has become clear that this is mission critical, and most of this feedback will probably need to occur in the feedstock. Sorry in advance for any trouble this may cause.