
Add CSR scaling #848

Merged: 4 commits merged into develop on Oct 7, 2021

Conversation

fritzgoebel (Collaborator)

This PR adds scaling of CSR matrices by a scalar, as this is needed by openCARP. The scalar has to be a 1x1 Dense matrix; the usage is the same as for dense matrices.
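
For reference, a minimal usage sketch (not taken from the PR; the 2x2 matrix and the scalar value 2.0 are made up for illustration):

```cpp
#include <ginkgo/ginkgo.hpp>

int main()
{
    auto exec = gko::ReferenceExecutor::create();

    // Assemble a small 2x2 CSR matrix with stored values {1, 2, 3}.
    auto mtx = gko::matrix::Csr<double, int>::create(exec);
    mtx->read(gko::matrix_data<double, int>{
        gko::dim<2>{2, 2}, {{0, 0, 1.0}, {0, 1, 2.0}, {1, 1, 3.0}}});

    // The scalar has to be a 1x1 Dense matrix, exactly as for Dense.
    auto alpha = gko::initialize<gko::matrix::Dense<double>>({2.0}, exec);

    mtx->scale(alpha.get());      // values become {2, 4, 6}
    mtx->inv_scale(alpha.get());  // and back to {1, 2, 3}
}
```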

@fritzgoebel fritzgoebel added is:new-feature A request or implementation of a feature that does not exist yet. type:matrix-format This is related to the Matrix formats 1:ST:ready-for-review This PR is ready for review mod:all This touches all Ginkgo modules. labels Jul 28, 2021
@fritzgoebel fritzgoebel requested a review from a team July 28, 2021 10:16
@fritzgoebel fritzgoebel self-assigned this Jul 28, 2021
@ginkgo-bot ginkgo-bot added reg:build This is related to the build system. reg:testing This is related to testing. labels Jul 28, 2021

@upsj upsj left a comment (Member)

LGTM from the implementation standpoint.
On the design level, I noticed we have a few different ways to do things:

  • Some operations have both member functions as well as LinOp apply representations (*_permute vs. Permutation->apply(...))
  • Some others have only member functions representing certain operations (Dense::scale could be Scalar->apply(Dense, Dense) or Diagonal->apply(Dense, Dense))
  • Finally some can only be represented via LinOp interfaces (SpGEMM, diagonal row scaling for Dense or any diagonal scaling for Csr)

So maybe we can investigate this design space some more before merging this interface? (Best after 1.4.0?)
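
To make the first two styles in upsj's list concrete, a rough sketch (assuming `exec`, a Dense `x`, a compatible output `y`, and a Diagonal `diag` with all entries equal to 2.0 have already been set up):

```cpp
// Member-function style: in-place scaling with a 1x1 Dense scalar.
auto alpha = gko::initialize<gko::matrix::Dense<double>>({2.0}, exec);
x->scale(alpha.get());

// LinOp-apply style: the same effect phrased as Diagonal->apply(Dense, Dense),
// writing the scaled result into y instead of modifying x in place.
diag->apply(x.get(), y.get());
```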

Comment on lines 46 to 54
/*#include <ginkgo/core/base/array.hpp>
#include <ginkgo/core/base/math.hpp>
#include <ginkgo/core/matrix/coo.hpp>
#include <ginkgo/core/matrix/csr.hpp>
#include <ginkgo/core/matrix/diagonal.hpp>
#include <ginkgo/core/matrix/ell.hpp>
#include <ginkgo/core/matrix/hybrid.hpp>
#include <ginkgo/core/matrix/sellp.hpp>
#include <ginkgo/core/matrix/sparsity_csr.hpp>*/
Suggested change
- /*#include <ginkgo/core/base/array.hpp>
- #include <ginkgo/core/base/math.hpp>
- #include <ginkgo/core/matrix/coo.hpp>
- #include <ginkgo/core/matrix/csr.hpp>
- #include <ginkgo/core/matrix/diagonal.hpp>
- #include <ginkgo/core/matrix/ell.hpp>
- #include <ginkgo/core/matrix/hybrid.hpp>
- #include <ginkgo/core/matrix/sellp.hpp>
- #include <ginkgo/core/matrix/sparsity_csr.hpp>*/

Comment on lines 1044 to 1119
* @note Other implementations of dense should override this function
* instead of scale(const LinOp *alpha).

copypasta :)

Suggested change
- * @note Other implementations of dense should override this function
- * instead of scale(const LinOp *alpha).
+ * @note Other implementations of Csr should override this function
+ * instead of scale(const LinOp *alpha).

Comment on lines 1052 to 1127
* @note Other implementations of dense should override this function
* instead of inv_scale(const LinOp *alpha).

Suggested change
- * @note Other implementations of dense should override this function
- * instead of inv_scale(const LinOp *alpha).
+ * @note Other implementations of Csr should override this function
+ * instead of inv_scale(const LinOp *alpha).

*/
void scale(const LinOp *alpha)
{
auto exec = this->get_executor();

I think you can pull the 1x1 assertions out here as well, since this should also hold for all potential subclasses (even though deriving from Csr might not be the best idea, but still).
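
A sketch of the hoisted check, mirroring what Dense::scale already does (the kernel dispatch is elided):

```cpp
void scale(const LinOp* alpha)
{
    // Holds for any subclass, so it can live here rather than in the kernel path.
    GKO_ASSERT_EQUAL_DIMENSIONS(alpha, dim<2>(1, 1));
    auto exec = this->get_executor();
    // ... launch the scale kernel as before
}
```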

Contributor: Does it make sense to have a Scalable mix-in for this?
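
A hypothetical shape for such a mix-in (nothing like this exists in Ginkgo at this point; the name and signatures are invented for illustration):

```cpp
#include <ginkgo/core/base/lin_op.hpp>

// Hypothetical interface: a matrix type deriving from this advertises
// in-place scalar scaling, analogous to Ginkgo's other interface mix-ins.
class Scalable {
public:
    virtual ~Scalable() = default;

    // alpha is expected to be a 1x1 Dense scalar in both cases
    virtual void scale(const gko::LinOp* alpha) = 0;
    virtual void inv_scale(const gko::LinOp* alpha) = 0;
};
```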

@MarcelKoch MarcelKoch left a comment (Member)

lgtm, with some minor suggestions.

BTW, in what way does opencarp require this? I'm guessing using the advanced apply with the included scaling is not enough for your case.

Also, if that is merged before #820, I will add an overload accepting a value_type later on, since this seems like a suitable place for that.
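
For context, the advanced apply mentioned here folds the scaling into an SpMV rather than touching the stored values (a sketch; `A`, `alpha`, `b`, `beta`, and `x` are assumed to be given):

```cpp
// Advanced apply: x = alpha * A * b + beta * x. The scaling is part of the
// SpMV and never modifies the matrix values themselves.
A->apply(alpha.get(), b.get(), beta.get(), x.get());

// The new member function instead rescales the stored matrix: A <- alpha * A.
A->scale(alpha.get());
```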

{
run_kernel(
exec,
[] GKO_KERNEL(auto nnz, auto alpha, auto x) { x[nnz] /= alpha[0]; },

Member:
The inverse inv = 1/alpha[0] should probably be precomputed and used as x[nnz] *= inv. The compiler might do that automatically, but I think it is very easy to help the compiler here.
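
Read literally on the kernel above, the suggestion would look like this (a sketch); note that since alpha lives in device memory, the division still runs once per work item unless the inverse is computed before the launch:

```cpp
[] GKO_KERNEL(auto nnz, auto alpha, auto x) {
    // multiply by the reciprocal instead of dividing each value;
    // a full hoist would need the inverse computed before the launch
    x[nnz] *= one(alpha[0]) / alpha[0];
},
```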

Contributor:
Precomputing the inverse should not matter for this memory bandwidth-bound kernel, so this should be okay.

Outdated review threads on core/matrix/csr.cpp (two threads), test/matrix/CMakeLists.txt, and test/matrix/csr_kernels.cpp were marked as resolved.
@yhmtsai yhmtsai left a comment (Member)

LGTM. I also need to bring up the question again: what should we use the generic kernel for?

@codecov

codecov bot commented Jul 28, 2021

Codecov Report

Merging #848 (0639b85) into develop (f03b188) will decrease coverage by 0.01%.
The diff coverage is 90.76%.

@@             Coverage Diff             @@
##           develop     #848      +/-   ##
===========================================
- Coverage    94.73%   94.72%   -0.02%     
===========================================
  Files          429      430       +1     
  Lines        35298    35363      +65     
===========================================
+ Hits         33438    33496      +58     
- Misses        1860     1867       +7     
Impacted Files Coverage Δ
core/device_hooks/common_kernels.inc.cpp 0.00% <0.00%> (ø)
include/ginkgo/core/matrix/csr.hpp 44.44% <75.00%> (+0.84%) ⬆️
common/unified/matrix/csr_kernels.cpp 100.00% <100.00%> (ø)
core/matrix/csr.cpp 98.80% <100.00%> (+0.03%) ⬆️
reference/matrix/csr_kernels.cpp 99.54% <100.00%> (+0.01%) ⬆️
reference/test/matrix/csr_kernels.cpp 99.78% <100.00%> (+<0.01%) ⬆️
test/matrix/csr_kernels.cpp 100.00% <100.00%> (ø)
core/base/extended_float.hpp 91.26% <0.00%> (-0.98%) ⬇️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f03b188...0639b85. Read the comment docs.

@upsj

upsj commented Jul 28, 2021

@yhmtsai You bring up an important question. The crucial points for me were

  • Pointwise and regular memory access
  • 1D or 2D indexing
  • Simple local operations

IMO this fits the model quite well.
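
The PR's scale kernel checks all three boxes; reconstructed in sketch form from the snippets quoted in this thread:

```cpp
template <typename ValueType, typename IndexType>
void scale(std::shared_ptr<const DefaultExecutor> exec,
           const matrix::Dense<ValueType>* alpha,
           matrix::Csr<ValueType, IndexType>* x)
{
    // pointwise and regular access, 1D indexing over the stored values,
    // and a trivially simple local operation
    run_kernel(
        exec,
        [] GKO_KERNEL(auto nnz, auto alpha, auto x) { x[nnz] *= alpha[0]; },
        x->get_num_stored_elements(), alpha->get_const_values(),
        x->get_values());
}
```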

@fritzgoebel (Collaborator, Author)

> lgtm, with some minor suggestions.
>
> BTW, in what way does opencarp require this? I'm guessing using the advanced apply with the included scaling is not enough for your case.
>
> Also, if that is merged before #820, I will add an overload accepting a value_type later on, since this seems like a suitable place for that.

At some points in openCARP, an already assembled matrix is scaled by a constant and stored again. In theory it would be possible to use the advanced apply, but this would mean restructuring that part of openCARP. I'd rather have this simple scaling available, to be honest.

@sonarcloud

sonarcloud bot commented Jul 29, 2021

Kudos, SonarCloud Quality Gate passed!

0 Bugs (rating A)
0 Vulnerabilities (rating A)
0 Security Hotspots (rating A)
21 Code Smells (rating A)

82.6% Coverage
5.5% Duplication

@Slaedr Slaedr left a comment (Contributor)

Looks good, mostly. My main issue is related to the need for inv_scale right now, please see below.

matrix::Csr<ValueType, IndexType> *to_scale)

#define GKO_DECLARE_CSR_INV_SCALE_KERNEL(ValueType, IndexType) \
void inv_scale(std::shared_ptr<const DefaultExecutor> exec, \

@Slaedr Slaedr Aug 9, 2021 (Contributor)
Do we really need a separate inv_scale operation? Isn't it easily done at the calling site by storing the inverse in another LinOp and calling scale? Or do you need both operations in the same code path? If that is the case, then this makes sense because we don't want to send another number over PCIe or something just for that. If you do not need both operations in the same run (with the same scalar alpha), I'd prefer removing inv_scale for now and only adding it when the need arises.
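
The calling-site alternative described here, sketched under the assumption that the scalar value is known on the host (`alpha_value` is a hypothetical host-side double holding the original scalar):

```cpp
// Store the inverse in another 1x1 Dense and reuse scale,
// avoiding a dedicated inv_scale kernel.
auto inv_alpha = gko::initialize<gko::matrix::Dense<double>>(
    {1.0 / alpha_value}, exec);
mtx->scale(inv_alpha.get());
```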

yhmtsai (Member):
If we consider the Dense functionality, which contains the opposite operation, it might be okay at this moment. It will depend on how the scalar is generated: if it is always generated by the user (alpha = initialize ...), it should be fine with only one scale function. Otherwise, scale would require an additional inverse kernel to emulate inv_scale.

@yhmtsai yhmtsai left a comment (Member)

LGTM, some nits.

Comment on lines 121 to 123
auto result = Mtx::create(ref);
result->copy_from(dx.get());
GKO_ASSERT_MTX_NEAR(result, x, r<vtype>::value);

Suggested change
- auto result = Mtx::create(ref);
- result->copy_from(dx.get());
- GKO_ASSERT_MTX_NEAR(result, x, r<vtype>::value);
+ GKO_ASSERT_MTX_NEAR(result, x, r<vtype>::value);

GKO_ASSERT_MTX_NEAR should be able to take care of the memory itself, so the explicit copy to the reference executor is not needed.

Comment on lines 134 to 136
auto result = Mtx::create(ref);
result->copy_from(dx.get());
GKO_ASSERT_MTX_NEAR(result, x, r<vtype>::value);

Suggested change
- auto result = Mtx::create(ref);
- result->copy_from(dx.get());
- GKO_ASSERT_MTX_NEAR(result, x, r<vtype>::value);
+ GKO_ASSERT_MTX_NEAR(result, x, r<vtype>::value);

@sonarcloud

sonarcloud bot commented Sep 3, 2021

Kudos, SonarCloud Quality Gate passed!

0 Bugs (rating A)
0 Vulnerabilities (rating A)
1 Security Hotspot (rating E)
21 Code Smells (rating A)

82.6% Coverage
6.4% Duplication

fritzgoebel and others added 4 commits October 7, 2021 11:55
Co-authored-by: fritzgoebel <fritzgoebel@users.noreply.github.com>
@sonarcloud

sonarcloud bot commented Oct 7, 2021

Kudos, SonarCloud Quality Gate passed!

0 Bugs (rating A)
0 Vulnerabilities (rating A)
0 Security Hotspots (rating A)
21 Code Smells (rating A)

81.5% Coverage
6.5% Duplication

@fritzgoebel fritzgoebel merged commit d3deaf0 into develop Oct 7, 2021
@fritzgoebel fritzgoebel deleted the csr_scaling branch October 7, 2021 14:28
tcojean added a commit that referenced this pull request Nov 12, 2022
Advertise release 1.5.0 and last changes

+ Add changelog
+ Update third party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features such as:
- MPI-based multi-node support for all matrix formats and most solvers;
- full DPC++/SYCL support;
- functionality and interface for GPU-resident sparse direct solvers;
- an interface for wrapping solvers with scaling and reordering applied;
- a new algebraic Multigrid solver/preconditioner;
- improved mixed-precision support;
- support for device matrix assembly;
and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues), and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using [GitHub Discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add MPI-based multi-node support for all matrix formats and solvers (except GMRES and IDR). ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Port the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110), [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982),  [#986](#986))
+ Add infrastructure for unified, lambda-based, backend agnostic, kernels and utilize it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add a FFT LinOp for all but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating CSR submatrix including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and impls for Csr and Dense ([#942](#942))


Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name ([#1149](#1149))
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184)).
+ Drop official support for old CUDA < 9.2 ([#887](#887))


Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967)).
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))


Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within bash-script ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))


Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136)).
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1175))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903),  [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))