
Make wake-phase training faster #37

Merged: 5 commits merged into master from zz-speed-up-wake on Jun 21, 2020

Conversation

@zhezhaozz (Contributor) commented on Jun 19, 2020:

This branch speeds up wake-phase training.

closes #35

@zhezhaozz linked an issue on Jun 19, 2020 that may be closed by this pull request.
@zhezhaozz changed the title from "revise max function in simulated_dataset" to "Make wake-phase training faster" on Jun 19, 2020.
@zhezhaozz self-assigned this on Jun 19, 2020.
codecov bot commented on Jun 19, 2020:

Codecov Report

Merging #37 into master will decrease coverage by 0.02%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master      #37      +/-   ##
==========================================
- Coverage   57.07%   57.04%   -0.03%     
==========================================
  Files          12       12              
  Lines        1591     1590       -1     
==========================================
- Hits          908      907       -1     
  Misses        683      683              
Impacted Files Coverage Δ
celeste/datasets/simulated_datasets.py 88.12% <100.00%> (-0.05%) ⬇️


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 70b63da...8a38aa1.

@@ -152,7 +152,7 @@ def _check_sources_and_locs(locs, n_sources, batchsize):
     assert locs.shape[2] == 2
     assert len(n_sources) == batchsize
     assert len(n_sources.shape) == 1
-    assert max(n_sources) <= locs.shape[1]
+    # assert max(n_sources) <= locs.shape[1]
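The removed assertion points at the performance issue in the linked report: calling Python's built-in `max` on an array or tensor iterates element by element through the interpreter, boxing each value, while the object's own vectorized reduction runs in native code. A minimal sketch of that difference, using NumPy as a stand-in for the tensors in the repo (the array name and sizes here are hypothetical, not taken from the codebase):

```python
import numpy as np

# Hypothetical stand-in for the n_sources tensor checked above.
rng = np.random.default_rng(0)
n_sources = rng.integers(0, 10, size=100_000)

# Built-in max() pulls each element through the Python iterator
# protocol, one boxed value at a time -- the slow path.
builtin_max = max(n_sources)

# The array's own reduction runs the whole loop in native code.
vectorized_max = n_sources.max()

# Both paths agree on the result; only the speed differs.
assert builtin_max == vectorized_max
```

For large arrays the vectorized call is typically orders of magnitude faster, which is consistent with the linked issue's "extreme runtime due to max".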
Contributor:

Please delete code rather than commenting it out.

@jeff-regier (Contributor) left a comment:

Looks great; please merge once the commented-out code is deleted.

@ismael-mendoza (Collaborator) commented:

It seems there is an error in CircleCI: https://app.circleci.com/pipelines/github/applied-bayes/celeste/405/workflows/0ad92f9f-5c98-430b-b7bc-5c9cfb670900/jobs/434/steps

Have you seen this before, @zzhaozhe-profolio? It's hard for me to trace it back to your code.

@zhezhaozz (Contributor, Author) commented:

@jeff-regier CircleCI couldn't finish the test.

> It seems there is an error in CircleCI: https://app.circleci.com/pipelines/github/applied-bayes/celeste/405/workflows/0ad92f9f-5c98-430b-b7bc-5c9cfb670900/jobs/434/steps
>
> Have you seen this before, @zzhaozhe-profolio? It's hard for me to trace it back to your code.

This is really weird. It's my first time having this error. It also passed on GPU, so I'm not sure. Let me re-run the test.

@zhezhaozz (Contributor, Author) commented:

The current pytorch-lightning package has a bug: Lightning-AI/pytorch-lightning#2213. While waiting for a new version that corrects it, I tried to bypass the bug within the test file.

@zhezhaozz merged commit a6d695a into master on Jun 21, 2020.
@zhezhaozz deleted the zz-speed-up-wake branch on June 21, 2020 at 20:18.
ismael-mendoza pushed a commit that referenced this pull request on Jul 5, 2020:
* revise max function in simulated_dataset

* try to fix error on circleci

* delete commented code

* adjust n_samples, passed on local CPU

* bypass the bug in the package, wait for new version
Successfully merging this pull request may close these issues.

extreme runtime due to max
3 participants