Hi all, thanks for the great work! Is there a way to run chromBPnet in mixed precision formats like BF16 and utilize tensor cores? I'm using an RTX 6000 and its tensor performance on paper is ~8 times faster than its single-precision performance. I'm curious to see if inference speed for slower functions (e.g. contribs_bw) would scale proportionally. Any help is appreciated!
Ah, that is good to know. We don't have this capability currently. contribs_bw primarily uses the DeepLIFT algorithm, so if DeepLIFT runs faster at lower precision, that speedup should translate to this function as well. Will take a note to explore this in the upcoming releases. Thank you!
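For anyone who wants to experiment in the meantime: since chromBPnet is Keras-based, one starting point is Keras's global mixed-precision policy, set before the model is built or loaded. This is a minimal generic sketch (the toy model below is illustrative, not chromBPnet's architecture, and whether the DeepLIFT/contribution code paths respect the policy is untested):

```python
import tensorflow as tf
from tensorflow.keras import mixed_precision

# "mixed_bfloat16" keeps variables in float32 but runs most ops in bfloat16.
# On NVIDIA GPUs, "mixed_float16" is the policy that engages FP16 tensor cores.
mixed_precision.set_global_policy("mixed_bfloat16")

# Toy stand-in model; any Keras model built after setting the policy
# inherits it layer by layer.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Compute dtype follows the policy; variables stay float32 for accuracy.
print(model.layers[0].compute_dtype)   # bfloat16
print(model.layers[0].variable_dtype)  # float32
```

For a pretrained model, the same policy would need to be set before `tf.keras.models.load_model(...)`, and any accuracy impact on contribution scores should be validated against full-precision outputs before trusting the results.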