AttributeError: 'list' object has no attribute 'sizes' #141

Closed
gitKincses opened this issue Jul 19, 2024 · 2 comments

@gitKincses

Hi! I am training the model on the DALES dataset, using the command "python src/train.py experiment=semantic/dales". After the dataset preprocessing is done, the program prints the following. Could you tell me how I can fix this error?

Non-trainable params: 0
Total params: 212 K
Total estimated model params size (MB): 0

[2024-07-19 14:10:29,446][src.utils.utils][ERROR] - 
Traceback (most recent call last):
  File "/home/hqu/ljc/SPT/superpoint_transformer/src/utils/utils.py", line 45, in wrap
    metric_dict, object_dict = task_func(cfg=cfg)
  File "src/train.py", line 115, in train
    trainer.fit(model=model, datamodule=datamodule, ckpt_path=cfg.get("ckpt_path"))
  File "/home/hqu/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 543, in fit
    call._call_and_handle_interrupt(
  File "/home/hqu/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 44, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/hqu/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 579, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/hqu/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _run
    results = self._run_stage()
  File "/home/hqu/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1028, in _run_stage
    self._run_sanity_check()
  File "/home/hqu/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1057, in _run_sanity_check
    val_loop.run()
  File "/home/hqu/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/loops/utilities.py", line 182, in _decorator
    return loop_run(self, *args, **kwargs)
  File "/home/hqu/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/loops/evaluation_loop.py", line 135, in run
    self._evaluation_step(batch, batch_idx, dataloader_idx, dataloader_iter)
  File "/home/hqu/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/loops/evaluation_loop.py", line 370, in _evaluation_step
    batch = call._call_strategy_hook(trainer, "batch_to_device", batch, dataloader_idx=dataloader_idx)
  File "/home/hqu/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 311, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/home/hqu/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 277, in batch_to_device
    return model._apply_batch_transfer_handler(batch, device=device, dataloader_idx=dataloader_idx)
  File "/home/hqu/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/core/module.py", line 359, in _apply_batch_transfer_handler
    batch = self._call_batch_hook("on_after_batch_transfer", batch, dataloader_idx)
  File "/home/hqu/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/core/module.py", line 347, in _call_batch_hook
    return trainer_method(trainer, hook_name, *args)
  File "/home/hqu/anaconda3/envs/spt/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 181, in _call_lightning_datamodule_hook
    return fn(*args, **kwargs)
  File "/home/hqu/anaconda3/envs/spt/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/hqu/ljc/SPT/superpoint_transformer/src/datamodules/base.py", line 333, in on_after_batch_transfer
    return on_device_transform(nag)
  File "/home/hqu/anaconda3/envs/spt/lib/python3.8/site-packages/torch_geometric/transforms/compose.py", line 24, in __call__
    data = transform(data)
  File "/home/hqu/ljc/SPT/superpoint_transformer/src/transforms/transforms.py", line 23, in __call__
    return self._process(x)
  File "/home/hqu/ljc/SPT/superpoint_transformer/src/transforms/graph.py", line 1359, in _process
    nag[i_level].node_size = nag.get_sub_size(i_level, low=self.low)
  File "/home/hqu/ljc/SPT/superpoint_transformer/src/data/nag.py", line 54, in get_sub_size
    sub_sizes = self[low + 1].sub.sizes
AttributeError: 'list' object has no attribute 'sizes'
[2024-07-19 14:10:29,460][src.utils.utils][INFO] - Closing loggers...
[2024-07-19 14:10:29,463][src.utils.utils][INFO] - Closing wandb!
wandb:
wandb: You can sync this run to the cloud by running:
wandb: wandb sync /home/hqu/ljc/SPT/superpoint_transformer/logs/train/runs/2024-07-19_14-09-46/wandb/offline-run-20240719_140955-3dwpno44
wandb: Find logs at: ./logs/train/runs/2024-07-19_14-09-46/wandb/offline-run-20240719_140955-3dwpno44/logs
wandb: WARNING The new W&B backend becomes opt-out in version 0.18.0; try it out with `wandb.require("core")`! See https://wandb.me/wandb-core for more information.
Error executing job with overrides: ['experiment=semantic/dales']
Traceback (most recent call last):
  File "src/train.py", line 140, in main
    metric_dict, _ = train(cfg)
  [... same traceback as above ...]
AttributeError: 'list' object has no attribute 'sizes'

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

Thanks!
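
For anyone hitting this before a fix: below is a minimal, self-contained sketch of the failure mode in the last frame. The Cluster class here is a hypothetical stand-in, not the actual SPT type; the point is only that get_sub_size reaches for self[low + 1].sub.sizes, but sub arrives as a plain Python list, and lists expose no .sizes attribute.

class Cluster:
    # Hypothetical stand-in for the object get_sub_size expects;
    # the real class is not shown here.
    def __init__(self, sizes):
        self.sizes = sizes

sub = Cluster(sizes=[3, 5, 2])
print(sub.sizes)   # works: [3, 5, 2]

sub = [0, 1, 2]    # what the traceback suggests actually arrives: a bare list
print(sub.sizes)   # AttributeError: 'list' object has no attribute 'sizes'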

@gitKincses
Author

I found the same issue (#139), and it has been solved.

@drprojects
Owner

Hi @gitKincses, indeed this seems related to #139. I will look into this issue today and update the codebase accordingly.

PS: If you ❤️ or even simply use this project, don't forget to give the repository a ⭐; it means a lot to us!
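
Until the fix lands, a quick check along these lines might confirm a batch is in the bad state. The attribute access pattern (nag[level].sub) is read off the traceback above; treat it as an assumption about the API rather than verified usage:

def check_sub(nag, level=1):
    # `nag` is the batch reaching on_after_batch_transfer; attribute
    # names follow the traceback above (assumed, not verified).
    sub = nag[level].sub
    print(f"level-{level} sub is a {type(sub).__name__}")
    if isinstance(sub, list) or not hasattr(sub, "sizes"):
        print("no .sizes here, so NAG.get_sub_size will raise "
              "AttributeError as in the logs above")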

drprojects added a commit that referenced this issue Jul 19, 2024
drprojects added a commit to vschelbi/superpoint_transformer_vschelbi that referenced this issue Aug 7, 2024