This is more of a question than an issue.
Does the micro-sam trainer allow for distributing models across multiple GPUs? In other words, does the torch_em default trainer use DistributedDataParallel?
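For context, this is what "using DistributedDataParallel" means in plain PyTorch: the model is wrapped in `torch.nn.parallel.DistributedDataParallel`, which synchronizes gradients across processes during `backward()`. Whether torch_em's default trainer does this internally is exactly the open question here; the sketch below is generic PyTorch, not torch_em code, and uses a single-process `gloo` group so it runs on CPU.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def ddp_step():
    # Single-process group with the gloo backend so this sketch runs on CPU.
    # A real multi-GPU run would launch one process per GPU (e.g. via
    # torchrun) with the nccl backend and per-rank device_ids.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    model = torch.nn.Linear(4, 2)
    ddp_model = DDP(model)  # gradients are all-reduced across ranks

    x = torch.randn(8, 4)
    loss = ddp_model(x).sum()
    loss.backward()  # gradient synchronization happens here

    dist.destroy_process_group()
    return loss.item()
```

In a real multi-GPU setup each rank would also use a `DistributedSampler` so every process sees a distinct shard of the training data.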