
log_graph = True for tensorboard logger doesn't show model graph. #240

Closed
innat opened this issue Apr 19, 2022 · 10 comments
Labels: Enhancement (New feature or request), Logger

Comments

innat (Contributor) commented Apr 19, 2022

Describe the bug
I wanted to inspect the computational graph of the Patchcore model, so I enabled `project.logger: tensorboard` and set `log_graph = True`:

logger = AnomalibTensorBoardLogger(
            name="Tensorboard Logs",
            save_dir=os.path.join(config.project.path, "logs"),
            log_graph=True
        )

Now, when training finished, I launched TensorBoard, but the GRAPHS tab showed nothing.

(screenshot: TensorBoard GRAPHS tab showing no graph)

To Reproduce
Steps to reproduce the behavior:

  1. Go to the Patchcore config file and set `logger: tensorboard`
  2. In `working/anomaly_detection_engine/anomalib/anomalib/utils/loggers/__init__.py`, set `log_graph=True`
  3. Run anomalib with the Patchcore config
  4. After training, open the event logs in TensorBoard

Expected behavior

  • The model graph should appear under the GRAPHS tab in TensorBoard.

Screenshots

  • (See above.)

Hardware and Software Configuration

  • OS: Ubuntu
  • PyTorch Lightning: 1.5.9
  • NVIDIA Driver Version: 470.57.02
  • CUDA Version: 10.2
  • CUDNN Version: 7605
  • OpenVINO Version: (not specified)

Additional context

  • I also found that the `add_image` function of the TensorBoard callback doesn't write any images to TensorBoard either.
  • Is there a way to write histograms to TensorBoard in PyTorch Lightning? Histograms are made from the network's weight and bias matrices and show how the parameter values are distributed. Is there a convenient way to add them to anomalib's models?
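On the histogram question, Lightning's TensorBoard logger exposes the underlying SummaryWriter as `logger.experiment`, so a small helper could be called from a hook such as `on_train_epoch_end`. The sketch below is illustrative, not anomalib API; the function name and writer argument are assumptions:

```python
import torch
from torch import nn

def log_weight_histograms(writer, model: nn.Module, step: int) -> None:
    """Write one histogram per named parameter (weights and biases).

    `writer` is anything exposing SummaryWriter's add_histogram, e.g.
    Lightning's `self.logger.experiment` inside a LightningModule hook.
    """
    for name, param in model.named_parameters():
        writer.add_histogram(name, param.detach(), global_step=step)
```

Inside a LightningModule, calling this from `on_train_epoch_end` with `self.logger.experiment` would record the distribution of every weight and bias tensor each epoch.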
innat (Contributor, Author) commented Apr 20, 2022

@samet-akcay could you please take a look at this issue?
I tried this fix but ended up with this issue.

innat (Contributor, Author) commented Apr 20, 2022

FYI, I also added `self.example_input_array` to `PatchcoreModel` inside `patchcore/torch_model.py`.

Any pointers?
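For context: Lightning's TensorBoard `log_graph` works by tracing `model(self.example_input_array)`, so the attribute's shape must match what `forward()` expects. A minimal sketch of the pattern, using a toy module and shapes rather than Patchcore's actual architecture:

```python
import torch
from torch import nn

class TraceableModel(nn.Module):  # stand-in for a LightningModule
    """Toy model whose graph can be traced via example_input_array."""

    def __init__(self):
        super().__init__()
        # log_graph traces forward() with this tensor; its shape must
        # match the input forward() expects
        self.example_input_array = torch.ones(1, 3, 256, 256)
        self.backbone = nn.Conv2d(3, 8, kernel_size=3)

    def forward(self, x):
        return self.backbone(x)

model = TraceableModel()
output = model(model.example_input_array)
```

Note that if `forward()` returns something non-traceable (e.g. Python objects rather than tensors), graph logging can still fail even with `example_input_array` set, which may be relevant for Patchcore.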

ashwinvaidya17 (Collaborator) commented:

@innat The part about logging images is not documented well. To log images to TensorBoard, you need to set both log_images_to: ["tensorboard"] and logger: tensorboard in the model's config file.
For the second point, unless I am missing something, you should be able to add anything to the TensorBoard logger through the logger object.

innat (Contributor, Author) commented Apr 20, 2022

@ashwinvaidya17 thanks for the info. I missed the log_images_to param in the config file; I thought setting logger: tensorboard was enough. I'll try it and report back.

Regarding the model graph, can you please point out what's wrong with the above approach?

innat (Contributor, Author) commented Apr 20, 2022

@ashwinvaidya17 I tested with log_images_to and it worked; it showed the model predictions. I think it's worth at least commenting on the possible options, like:

log_images_to: ['local'] # options: ['local'], ['tensorboard'], ['wandb']

And I think it would be nice if we could show the models' layer activation maps. For example, in Patchcore the backbone is wide-resnet with layers 2 and 3; how about showcasing their activation maps?
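One way the activation-map idea could be implemented is with PyTorch forward hooks, which capture a layer's output during a normal forward pass. The sketch below uses a toy backbone in place of wide-resnet's layer2/layer3; the names and shapes are illustrative, not anomalib's implementation:

```python
import torch
from torch import nn

def capture_activations(model: nn.Module, layer_names: set) -> dict:
    """Register forward hooks that stash each named layer's output,
    e.g. for later rendering as activation maps with add_image."""
    activations = {}

    def make_hook(name):
        def hook(module, inputs, output):
            activations[name] = output.detach()
        return hook

    for name, module in model.named_modules():
        if name in layer_names:
            module.register_forward_hook(make_hook(name))
    return activations

# toy backbone standing in for wide-resnet's intermediate layers
backbone = nn.Sequential(nn.Conv2d(3, 4, 3), nn.ReLU(), nn.Conv2d(4, 8, 3))
activations = capture_activations(backbone, {"0", "2"})
backbone(torch.ones(1, 3, 16, 16))
```

After the forward pass, `activations` holds one feature-map tensor per hooked layer, which could then be normalized and written to TensorBoard as images.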

samet-akcay (Contributor) commented:

> @ashwinvaidya17 I tested with log_images_to and it worked; it showed the model predictions. I think it's worth at least commenting on the possible options.

@innat, it will be addressed with #227 .

> And I think it would be nice if we could show the models' layer activation maps. For example, in Patchcore the backbone is wide-resnet with layers 2 and 3; how about showcasing their activation maps?

We could discuss this. Perhaps this could be made optional, so the user can enable it when needed.

innat (Contributor, Author) commented Apr 20, 2022

> We could discuss this. Perhaps this could be made optional, so the user can enable it when needed.

Yes, it should be optional. It would be helpful for debugging purposes.

innat (Contributor, Author) commented Apr 20, 2022

@ashwinvaidya17

> For the second point, unless I am missing something, you should be able to add anything to the TensorBoard logger through the logger object.

Sorry, I have limited experience with PyTorch. In Keras, the keras.callbacks.TensorBoard callback conveniently handles most cases: plotting graphs, histograms, images, etc. Could you please share any references with examples of TensorBoard logging in PyTorch Lightning? I found this, but it's a bit confusing to me.
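For orientation (not anomalib-specific): the rough PyTorch counterpart of keras.callbacks.TensorBoard is torch.utils.tensorboard.SummaryWriter, which Lightning wraps and exposes as `logger.experiment`. The sketch below shows the common calls, with the writer passed in so it works with either object; the function name is illustrative:

```python
from torch import nn

def log_training_snapshot(writer, model: nn.Module, loss: float, images, step: int) -> None:
    """Roughly what keras.callbacks.TensorBoard covers: scalar curves,
    images, and parameter histograms. `writer` is a SummaryWriter or,
    in Lightning, `self.logger.experiment`."""
    writer.add_scalar("loss/train", loss, global_step=step)  # scalar curves
    writer.add_images("inputs", images, global_step=step)    # NCHW image batch
    for name, param in model.named_parameters():             # weight distributions
        writer.add_histogram(name, param.detach(), global_step=step)
```

Calling this once per epoch from a training hook would reproduce most of what the Keras callback logs automatically.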

@djdameln djdameln added Enhancement New feature or request Logger labels Apr 22, 2022
@djdameln djdameln added this to the v0.2.8 milestone Apr 22, 2022
ashwinvaidya17 (Collaborator) commented Apr 25, 2022

Edit: I have moved the for loop to after trainer.fit so that it works with Patchcore.
@innat I am working on updating the documentation, but for now you can use the following snippet in train.py to log the model graph.

# assumes `import torch` at the top of train.py
for logger in loggers:
    if isinstance(logger, AnomalibWandbLogger):
        # NOTE: the graph gets populated only after one backward pass, so this
        # won't work for models that do not require training, such as Padim
        logger.watch(model, log_graph=True, log="all")
    elif isinstance(logger, AnomalibTensorBoardLogger):
        logger._log_graph = True
        logger.log_graph(model, input_array=torch.ones((1, 3, 256, 256)))

So your train() method should look like this:

def train():
    """Train an anomaly classification or segmentation model based on a provided configuration file."""
    args = get_args()
    config = get_configurable_parameters(model_name=args.model, config_path=args.config)

    if config.project.seed != 0:
        seed_everything(config.project.seed)

    datamodule = get_datamodule(config)
    model = get_model(config)
    loggers = get_logger(config)

    callbacks = get_callbacks(config)

    trainer = Trainer(**config.trainer, logger=loggers, callbacks=callbacks)
    trainer.fit(model=model, datamodule=datamodule)

    for logger in loggers:
        if isinstance(logger, AnomalibWandbLogger):
            # NOTE: the graph gets populated only after one backward pass, so this
            # won't work for models that do not require training, such as Padim
            logger.watch(model, log_graph=True, log="all")
        elif isinstance(logger, AnomalibTensorBoardLogger):
            logger._log_graph = True
            logger.log_graph(model, input_array=torch.ones((1, 3, 256, 256)))

    # load best model from checkpoint before evaluating
    load_model_callback = LoadModelCallback(weights_path=trainer.checkpoint_callback.best_model_path)
    trainer.callbacks.insert(0, load_model_callback)

    trainer.test(model=model, datamodule=datamodule)

And here is the output
(screenshot: model graph rendered in TensorBoard's GRAPHS tab)

Not sure if this is what you wanted. Also, to access other methods provided by TensorBoard, you can use the logger.experiment object. But I'd be more inclined towards using the documented logger API (https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.loggers.tensorboard.html) rather than accessing the experiment object directly. I'll try to make this clearer in the PR.

innat (Contributor, Author) commented Apr 25, 2022

@ashwinvaidya17 Thanks a lot, this is really helpful. I'm currently away from my workplace; I'll check soon.
