Thank you for your excellent work!
I would like to ask whether this parallel vmap implementation can only be used for simple MLPs, or whether it is feasible to use it with something like instant-ngp.
Looking forward to your reply!
In principle, yes, but you would need to modify the CUDA kernels of both the original Instant-NGP and torch-ngp.
The functorch library I used can handle many parallel operations automatically, and I believe the most common PyTorch operations can be parallelized easily with functorch. For some custom backward functions, you need to follow functorch's instructions and implement the batched rule yourself.
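To illustrate the kind of model-ensembling vmap being discussed: recent PyTorch exposes functorch's functionality under `torch.func`. The sketch below (an illustration, not the repository's actual code) stacks the parameters of several independent MLPs and evaluates them in parallel on a shared input batch with a single vmapped call, which is the pattern that custom CUDA kernels like Instant-NGP's would also need to support.

```python
import copy
import torch
from torch.func import stack_module_state, functional_call

# An ensemble of small, independently initialized MLPs (hypothetical sizes).
models = [torch.nn.Sequential(torch.nn.Linear(4, 16),
                              torch.nn.ReLU(),
                              torch.nn.Linear(16, 2)) for _ in range(5)]

# Stack each parameter across the ensemble dimension: shape (5, ...).
params, buffers = stack_module_state(models)

# A stateless "template" module; its own weights are never used.
base = copy.deepcopy(models[0]).to("meta")

def fmodel(p, b, x):
    # Run the template with the supplied parameter/buffer slices.
    return functional_call(base, (p, b), (x,))

x = torch.randn(8, 4)  # one input batch shared by all ensemble members

# vmap over the stacked parameters (dim 0), broadcasting the input (None).
out = torch.vmap(fmodel, in_dims=(0, 0, None))(params, buffers, x)
print(out.shape)  # one output batch per ensemble member: (5, 8, 2)
```

With custom CUDA ops, this only works if the op registers a batching rule; that is the part functorch cannot derive automatically.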