
Kernel compute #120

Merged
thomaspinder merged 25 commits into v0.5_update from kernel_compute on Nov 3, 2022
Conversation

daniel-dodd (Member)

Pull request type

This PR will add infrastructure for memory efficient operations with Gram matrices and allow custom solves.

- [ ] Bugfix
- [x] Feature
- [ ] Code style update (formatting, renaming)
- [x] Refactoring (no functional changes, no api changes)
- [x] Build related changes
- [ ] Documentation content changes
- [ ] Other (please describe):
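To make the goal concrete, here is a minimal sketch of what a Gram-matrix abstraction with customisable solves could look like. The class names and method signatures here are hypothetical illustrations, not the PR's actual API (the PR does add a `gpjax/covariance_operator.py` module, but its interface may differ):

```python
from abc import ABC, abstractmethod

import jax.numpy as jnp


class CovarianceOperator(ABC):
    """Hypothetical sketch: a Gram matrix abstraction through which
    solves and diagonal extraction can be specialised per structure."""

    @abstractmethod
    def to_dense(self) -> jnp.ndarray:
        """Materialise the full Gram matrix (expensive in general)."""

    @abstractmethod
    def solve(self, rhs: jnp.ndarray) -> jnp.ndarray:
        """Solve K x = rhs, exploiting structure where possible."""

    def diagonal(self) -> jnp.ndarray:
        # Default fallback: materialise the dense matrix first.
        return jnp.diag(self.to_dense())


class DiagonalCovarianceOperator(CovarianceOperator):
    """Stores only the diagonal; solve and diagonal cost O(n)."""

    def __init__(self, diag: jnp.ndarray):
        self.diag = diag  # 1-D array of diagonal entries

    def to_dense(self) -> jnp.ndarray:
        return jnp.diag(self.diag)

    def solve(self, rhs: jnp.ndarray) -> jnp.ndarray:
        # Elementwise division replaces an O(n^3) dense solve
        # (rhs assumed 1-D here for simplicity).
        return rhs / self.diag

    def diagonal(self) -> jnp.ndarray:
        return self.diag
```

A white-noise kernel, for instance, could return a `DiagonalCovarianceOperator` and never pay for a dense Cholesky.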

Thomas Pinder and others added 7 commits September 23, 2022 13:17
- Minimal Kronecker kernel. The inverse is highly inefficient - this can be improved!
- Added white noise kernel.
- Added diagonal computation abstraction.
- Added minimal computation abstraction for kernel combinations (i.e., for * and +).
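On the "highly inefficient" Kronecker inverse flagged above: the standard fix is to never form A ⊗ B at all, using the identity (A ⊗ B) vec(X) = vec(A X Bᵀ) (row-major vec convention). A sketch, with hypothetical function names:

```python
import jax.numpy as jnp


def kron_matvec(A: jnp.ndarray, B: jnp.ndarray, x: jnp.ndarray) -> jnp.ndarray:
    """Compute (A ⊗ B) @ x without materialising the Kronecker product.

    A is (n, n), B is (m, m), x has length n * m. Uses the identity
    (A ⊗ B) vec(X) = vec(A X Bᵀ) under numpy's row-major reshape.
    """
    n, m = A.shape[0], B.shape[0]
    X = x.reshape(n, m)
    return (A @ X @ B.T).reshape(-1)


def kron_solve(A: jnp.ndarray, B: jnp.ndarray, y: jnp.ndarray) -> jnp.ndarray:
    """Solve (A ⊗ B) x = y via two small solves instead of one big one.

    Since (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}, the solution is
    x = vec(A^{-1} Y B^{-T}), costing O(n^3 + m^3) rather than O((nm)^3).
    """
    n, m = A.shape[0], B.shape[0]
    Y = y.reshape(n, m)
    X = jnp.linalg.solve(A, Y)       # A^{-1} Y
    X = jnp.linalg.solve(B, X.T).T   # (A^{-1} Y) B^{-T}
    return X.reshape(-1)
```

For a 1000 x 1000 Gram matrix with 10 x 100 Kronecker structure, this turns a billion-entry factorisation into two tiny ones.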

Main issues:
- Though we can replace the computations in gps.py, we might miss out on caching, e.g., the Cholesky decomposition in the dense computation case. Is there a sensible way around this?
- Following on from the preceding point, we should not be computing the Gram matrix in general (yes, I know we need to in the dense setting).

- Think about computation savings for KL divergences - how can we resolve these?
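One possible answer to the Cholesky-caching question above is to factorise lazily and memoise on the operator itself. This is only a sketch under assumed names — and note that a mutable Python-side cache like this interacts poorly with `jax.jit`, so the real design may need a different mechanism:

```python
import jax.numpy as jnp
from jax.scipy.linalg import solve_triangular


class DenseCovarianceOperator:
    """Hypothetical sketch: compute the Cholesky factor lazily on first
    use and reuse it for every subsequent solve."""

    def __init__(self, matrix: jnp.ndarray):
        self.matrix = matrix
        self._chol = None  # cached lower-triangular factor

    @property
    def cholesky(self) -> jnp.ndarray:
        if self._chol is None:
            # Factorise once; later solves reuse this O(n^3) result.
            self._chol = jnp.linalg.cholesky(self.matrix)
        return self._chol

    def solve(self, rhs: jnp.ndarray) -> jnp.ndarray:
        # Two O(n^2) triangular solves: L y = rhs, then Lᵀ x = y.
        L = self.cholesky
        y = solve_triangular(L, rhs, lower=True)
        return solve_triangular(L.T, y, lower=False)
```

This keeps repeated solves against the same Gram matrix at O(n²) after the first call.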

Minor issues:
- Need to discuss the best way to add computation methods - there may be a tidier approach than just adding a load of static methods.
- Need tests.
Need to add/update tests and check the code is correct.
@daniel-dodd daniel-dodd changed the base branch from master to v0.5_update October 10, 2022 17:19
@thomaspinder thomaspinder marked this pull request as ready for review October 10, 2022 18:05
thomaspinder and others added 11 commits October 12, 2022 11:38
Need to write complete tests for covariance operators, then can get on with revamping the rest of the codebase and tests for other modules.
Missing matmul, and need to improve the solve tests to ensure there are no issues with shapes.
codecov bot commented Nov 3, 2022

Codecov Report

Merging #120 (8b02848) into v0.5_update (269f670) will decrease coverage by 0.96%.
The diff coverage is 93.82%.

@@               Coverage Diff               @@
##           v0.5_update     #120      +/-   ##
===============================================
- Coverage        99.24%   98.27%   -0.97%     
===============================================
  Files               14       15       +1     
  Lines             1185     1333     +148     
===============================================
+ Hits              1176     1310     +134     
- Misses               9       23      +14     
| Flag | Coverage Δ |
| --- | --- |
| unittests | 98.27% <93.82%> (-0.97%) ⬇️ |

Flags with carried forward coverage won't be shown.

| Impacted Files | Coverage Δ |
| --- | --- |
| gpjax/utils.py | 100.00% <ø> (ø) |
| gpjax/kernels.py | 95.08% <82.60%> (-3.62%) ⬇️ |
| gpjax/covariance_operator.py | 94.33% <94.33%> (ø) |
| gpjax/gps.py | 99.34% <96.87%> (-0.66%) ⬇️ |
| gpjax/config.py | 100.00% <100.00%> (ø) |
| gpjax/likelihoods.py | 100.00% <100.00%> (ø) |
| gpjax/natural_gradients.py | 100.00% <100.00%> (ø) |
| gpjax/parameters.py | 95.65% <100.00%> (ø) |
| gpjax/variational_families.py | 100.00% <100.00%> (ø) |
| gpjax/variational_inference.py | 97.61% <100.00%> (+0.02%) ⬆️ |

... and 1 more


@thomaspinder thomaspinder merged commit c94c347 into v0.5_update Nov 3, 2022
@thomaspinder thomaspinder deleted the kernel_compute branch November 3, 2022 21:49