Help? Pls #901

Open
LazySloth26 opened this issue Apr 23, 2024 · 2 comments

@LazySloth26

enumerate(dataloader) is on the CPU but I don't know how to fix that. I tried commenting it out so it would run, but now it fails during the for loop. It says TypeError: 'NoneType' object is not iterable.

I'm running it in PyCharm and I don't have much experience with this.

progress_bar = tqdm(
    # enumerate(dataloader),
    desc=f"Training Epoch {epoch}",
    total=len(dataloader),
    disable=disable_progress_bar
)

print(progress_bar[1], '1')

for batch, (X, y) in progress_bar:
    # Send data to target device
    X, y = X.to(device), y.to(device)

    # 1. Forward pass
    y_pred = model(X)
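
I think what's going on (just my guess) is that with enumerate(dataloader) commented out, tqdm has nothing to iterate over, so the for loop fails. A tiny snippet like this should hit the same error:

from tqdm import tqdm

# tqdm created without an iterable, only a description and a total,
# so there is nothing to loop over
progress_bar = tqdm(desc="Training Epoch 0", total=10)

# this is where the TypeError: 'NoneType' object is not iterable comes from
for step in progress_bar:
    pass
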
@IlanVinograd

Hi man ✋, try uncommenting the enumerate(dataloader) line, like here. Also, if you want your model and data on the GPU, you should move them there before iterating through the dataloader. You can do this by sending both the model and the data to the device with .to(device).

progress_bar = tqdm(
    enumerate(dataloader),
    desc=f"Training Epoch {epoch}",
    total=len(dataloader),
    disable=disable_progress_bar
)

for batch_idx, (X, y) in progress_bar:
    # Send data to target device
    X, y = X.to(device), y.to(device)

    # 1. Forward pass
    y_pred = model(X)

Make sure device is properly defined and refers to the GPU (e.g., device = torch.device("cuda:0")).
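
Something like this should work (just a sketch, assuming a standard PyTorch setup; the nn.Linear is only a stand-in for your own model):

import torch
from torch import nn

# use the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# move the model's parameters to the chosen device before training
model = nn.Linear(10, 2).to(device)
print(next(model.parameters()).device)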

Please let me know if it helps you 😄

@mrdbourke
Owner

Hi @IlanVinograd , thank you for helping out!

@LazySloth26 how did you go with your issue? Did you manage to get it fixed?
