Pytorch preset request: add access to c10::cuda::CUDACachingAllocator #1422
Comments
@HGuillemet Thanks for adding that so quickly. Do we have a snapshot that I can use? I also have an additional question: I see that the version above is dated.

EDIT: I see that the preset actions failed for PyTorch. Maybe this is why I did not find a snapshot. TIA
I don't think you can have a snapshot before the PR is merged. You'll need to clone and compile it yourself, or wait for the merge. I believe snapshot creation is triggered after each commit to the main repository (or manually by the repo maintainer). @saudet Can you confirm?
@HGuillemet OK, I will wait for the snapshot then. I need to ensure Storch compiles with the new version.
@HGuillemet I am using the 2.1 snapshot and can now see the new additions. The first issue is the call to:

    const DeviceStats stats = c10::cuda::CUDACachingAllocator::getDeviceStats(device);

This looks like a static call. However, in Java I seem to need something like:

    val devS = CUDAAllocator(Pointer()).getDeviceStats(1)

The problem is that this fails. So, what pointer must I use?

The other issue I have is when I try to implement:

    result["allocation"] = statArrayToDict(stats.allocation);

with:

    val devAllocation: BoolPointer = devS.allocation()

I was expecting an array of stat values. If you think it best, I can open another issue for this or use the discussion forum. TIA
There is a static method in torch_cuda. The mapping of the stats is:

    Stat statArray = new Stat(devs.allocation());

You must see any instance of a subclass of Pointer as a pointer to a native array of that type.
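To make the pointer-as-array idea concrete, here is a plain-Java sketch with no JavaCPP or CUDA dependency. The `StatView` class below is a hypothetical stand-in that mimics how a JavaCPP `Pointer` subclass views a native array of c10 `Stat` structs (fields `current`, `peak`, `allocated`, `freed`), with `position(i)` moving the view to element `i`:

```java
public class StatArrayDemo {
    // c10's Stat struct holds four int64 counters.
    static final int FIELDS = 4; // current, peak, allocated, freed

    // Hypothetical stand-in for a JavaCPP Pointer subclass: a movable
    // view over a flat native buffer holding an array of Stat structs.
    static class StatView {
        final long[] buf;
        int pos;
        StatView(long[] buf) { this.buf = buf; }
        // Like Pointer.position(i): re-aims the view at element i.
        StatView position(int i) { this.pos = i; return this; }
        long current()   { return buf[pos * FIELDS]; }
        long peak()      { return buf[pos * FIELDS + 1]; }
        long allocated() { return buf[pos * FIELDS + 2]; }
        long freed()     { return buf[pos * FIELDS + 3]; }
    }

    public static void main(String[] args) {
        // Three Stat entries, as in c10's StatArray:
        // aggregate, small pool, large pool.
        long[] nativeBuf = {
            10, 12, 100, 90,  // aggregate
             4,  6,  40, 36,  // small pool
             6,  8,  60, 54,  // large pool
        };
        StatView stat = new StatView(nativeBuf);
        for (int i = 0; i < 3; i++) {
            StatView s = stat.position(i);
            System.out.println(i + ": current=" + s.current() + " peak=" + s.peak());
        }
    }
}
```

The same iteration pattern applies to the real preset: wrap the returned pointer once, then call `position(i)` to read each struct in the native array instead of expecting a Java array.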
@HGuillemet Thank you very much. This is working. |
In the Scala Storch framework, we are trying to determine memory usage to diagnose some issues. I would like an equivalent of PyTorch's torch.cuda.memory_stats. It seems libtorch has an equivalent of torch.cuda.memory_reserved().
Could we have these classes included in the JavaCPP Pytorch preset so that we can diagnose memory issues?
TIA
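For reference, torch.cuda.memory_stats flattens per-pool counters into dotted keys such as `allocation.all.current`. The helper name `statArrayToDict` is borrowed from the C++ snippet quoted in the comments; everything else is a plain-Java sketch with no JavaCPP dependency, showing the kind of flattening Storch would do once the preset exposes the stats:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MemoryStatsSketch {
    // Pool names follow torch.cuda.memory_stats conventions.
    static final String[] POOLS = {"all", "small_pool", "large_pool"};

    // Flattens one metric's per-pool Stat counters into dotted keys,
    // e.g. "allocation.all.current". Each row of stats is
    // {current, peak, allocated, freed} for one pool.
    static Map<String, Long> statArrayToDict(String metric, long[][] stats) {
        Map<String, Long> out = new LinkedHashMap<>();
        for (int i = 0; i < POOLS.length; i++) {
            out.put(metric + "." + POOLS[i] + ".current",   stats[i][0]);
            out.put(metric + "." + POOLS[i] + ".peak",      stats[i][1]);
            out.put(metric + "." + POOLS[i] + ".allocated", stats[i][2]);
            out.put(metric + "." + POOLS[i] + ".freed",     stats[i][3]);
        }
        return out;
    }

    public static void main(String[] args) {
        long[][] allocation = {{10, 12, 100, 90}, {4, 6, 40, 36}, {6, 8, 60, 54}};
        Map<String, Long> dict = statArrayToDict("allocation", allocation);
        System.out.println(dict.get("allocation.all.current")); // prints 10
    }
}
```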