Fix inferencer arg names and weight path issue #422

Merged · 5 commits · Jul 11, 2022
24 changes: 10 additions & 14 deletions README.md
@@ -156,26 +156,22 @@ The new CLI approach offers a lot more flexibility, details of which are explain

## Inference
### ⚠️ Anomalib < v.0.4.0
-Anomalib contains several tools that can be used to perform inference with a trained model. The script in [`tools/inference`](tools/inference/lightning.py) contains an example of how the inference tools can be used to generate a prediction for an input image.
+Anomalib includes multiple tools, including Lightning, Gradio, and OpenVINO inferencers, for performing inference with a trained model.

If the specified weight path points to a PyTorch Lightning checkpoint file (`.ckpt`), inference will run in PyTorch. If the path points to an ONNX graph (`.onnx`) or OpenVINO IR (`.bin` or `.xml`), inference will run in OpenVINO.
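In other words, the backend is selected purely by the file suffix, which is exactly the dispatch that `get_inferencer` in this PR's `gradio_inference.py` performs. A minimal sketch of the idea (the helper name is hypothetical, not anomalib's API):

```python
from pathlib import Path


def select_backend(weight_path: Path) -> str:
    """Pick an inference backend from the weight file's extension (sketch only)."""
    suffix = weight_path.suffix
    if suffix == ".ckpt":
        return "torch"  # PyTorch Lightning checkpoint
    if suffix in (".onnx", ".bin", ".xml"):
        return "openvino"  # ONNX graph or OpenVINO IR
    raise ValueError(f"Unsupported weight format: {suffix}")
```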

-The following command can be used to run inference from the command line:
+The following command can be used to run PyTorch Lightning inference from the command line:

```bash
-python tools/inference.py \
-    --config <path/to/model/config.yaml> \
-    --weight_path <path/to/weight/file> \
-    --image_path <path/to/image>
+python tools/inference/lightning_inference.py -h
```

As a quick example:

```bash
-python tools/inference.py \
+python tools/inference/lightning_inference.py \
     --config anomalib/models/padim/config.yaml \
-    --weight_path results/padim/mvtec/bottle/weights/model.ckpt \
-    --image_path datasets/MVTec/bottle/test/broken_large/000.png
+    --weights results/padim/mvtec/bottle/weights/model.ckpt \
+    --input datasets/MVTec/bottle/test/broken_large/000.png \
+    --output results/padim/mvtec/bottle/images
```

If you want to run an OpenVINO model, ensure that the `openvino` `apply` flag is set to `True` in the respective model `config.yaml`.
@@ -191,10 +187,10 @@ Example OpenVINO Inference:
```bash
 python tools/inference/openvino_inference.py \
     --config anomalib/models/padim/config.yaml \
-    --weight_path results/padim/mvtec/bottle/openvino/openvino_model.bin \
-    --image_path datasets/MVTec/bottle/test/broken_large/000.png \
+    --weights results/padim/mvtec/bottle/openvino/openvino_model.bin \
     --meta_data results/padim/mvtec/bottle/openvino/meta_data.json \
-    --save_path results/padim/mvtec/bottle/images
+    --input datasets/MVTec/bottle/test/broken_large/000.png \
+    --output results/padim/mvtec/bottle/images
```

> Ensure that you provide the path to `meta_data.json` if you want the normalization to be applied correctly.
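The metadata file matters because raw anomaly scores are normalized with statistics collected at training time; without them, heatmaps and thresholds are misscaled. A rough sketch of the idea, assuming the JSON stores min-max bounds (the key names here are assumptions, not anomalib's exact schema):

```python
import json
from pathlib import Path

import numpy as np


def normalize_anomaly_map(anomaly_map: np.ndarray, meta_data_path: Path) -> np.ndarray:
    """Min-max normalize raw anomaly scores with training-time statistics (sketch only)."""
    meta = json.loads(meta_data_path.read_text())
    lo, hi = meta["min"], meta["max"]  # assumed key names, for illustration only
    return np.clip((anomaly_map - lo) / (hi - lo), 0.0, 1.0)
```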
2 changes: 1 addition & 1 deletion anomalib/config/config.py
@@ -161,7 +161,7 @@ def get_configurable_parameters(
     config.trainer.default_root_dir = str(project_path)

     if weight_file:
-        config.model.weight_file = weight_file
+        config.trainer.resume_from_checkpoint = weight_file

     config = update_nncf_config(config)
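With the weight path stored on the trainer section, no separate loading step is needed for Lightning inference: `lightning_inference.py` builds its trainer as `Trainer(callbacks=callbacks, **config.trainer)`, and `resume_from_checkpoint` was a regular `pytorch_lightning.Trainer` argument in the PL 1.x releases this PR targets. A minimal sketch under that assumption:

```python
from pytorch_lightning import Trainer


def build_trainer(config, callbacks):
    # config.trainer now carries resume_from_checkpoint, so unpacking the section
    # is enough for the Trainer to restore the checkpoint (PL 1.x behavior; the
    # argument was removed in later releases).
    return Trainer(callbacks=callbacks, **config.trainer)
```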
1 change: 0 additions & 1 deletion anomalib/models/patchcore/config.yaml
@@ -28,7 +28,6 @@ model:
     - layer3
   coreset_sampling_ratio: 0.1
   num_neighbors: 9
-  weight_file: weights/model.ckpt
   normalization_method: min_max # options: [null, min_max, cdf]

 metrics:
4 changes: 2 additions & 2 deletions anomalib/utils/callbacks/__init__.py
@@ -91,8 +91,8 @@ def get_callbacks(config: Union[ListConfig, DictConfig]) -> List[Callback]:
     )
     callbacks.append(metrics_callback)

-    if "weight_file" in config.model.keys():
-        load_model = LoadModelCallback(os.path.join(config.project.path, config.model.weight_file))
+    if "resume_from_checkpoint" in config.trainer.keys() and config.trainer.resume_from_checkpoint is not None:
+        load_model = LoadModelCallback(config.trainer.resume_from_checkpoint)
         callbacks.append(load_model)

     if "normalization_method" in config.model.keys() and not config.model.normalization_method == "none":
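`LoadModelCallback` covers the entry points that do not go through the trainer's resume path by injecting the checkpoint weights before the run starts. A stripped-down sketch of that pattern (a hypothetical class, not anomalib's exact implementation):

```python
import torch
from pytorch_lightning import Callback


class LoadWeightsCallback(Callback):
    """Load checkpoint weights into the module at setup time (sketch only)."""

    def __init__(self, weights_path: str) -> None:
        self.weights_path = weights_path

    def setup(self, trainer, pl_module, stage=None) -> None:
        # Restore the state dict before the predict/test run begins.
        state_dict = torch.load(self.weights_path, map_location="cpu")["state_dict"]
        pl_module.load_state_dict(state_dict)
```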
38 changes: 19 additions & 19 deletions docs/source/guides/inference.rst
@@ -9,25 +9,25 @@ PyTorch (Lightning) Inference
=============================
The entrypoint script in ``tools/inference/lightning.py`` can be used to run inference with a trained PyTorch model. The script runs inference by loading a previously trained model into a PyTorch Lightning trainer and running the ``predict sequence``. The entrypoint script has several command line arguments that can be used to configure inference:

 +---------------------+----------+----------------------------------------------------------------------------------+
 | Parameter           | Required | Description                                                                      |
 +=====================+==========+==================================================================================+
 | config              | True     | Path to the model config file.                                                   |
 +---------------------+----------+----------------------------------------------------------------------------------+
-| weight_path         | True     | Path to the ``.ckpt`` model checkpoint file.                                     |
+| weights             | True     | Path to the ``.ckpt`` model checkpoint file.                                     |
 +---------------------+----------+----------------------------------------------------------------------------------+
-| image_path          | True     | Path to the image source. This can be a single image or a folder of images.      |
+| input               | True     | Path to the image source. This can be a single image or a folder of images.      |
 +---------------------+----------+----------------------------------------------------------------------------------+
-| save_path           | False    | Path to which the output images should be saved.                                 |
+| output              | False    | Path to which the output images should be saved.                                 |
 +---------------------+----------+----------------------------------------------------------------------------------+
 | visualization_mode  | False    | Determines how the inference results are visualized. Options: "full", "simple".  |
 +---------------------+----------+----------------------------------------------------------------------------------+
 | disable_show_images | False    | When this flag is passed, visualizations will not be shown on the screen.        |
 +---------------------+----------+----------------------------------------------------------------------------------+

To run inference, call the script from the command line with the following parameters, e.g.:

-``python tools/inference/lightning.py --config padim.yaml --weight_path results/weights/model.ckpt --image_path image.png``
+``python tools/inference/lightning.py --config padim.yaml --weights results/weights/model.ckpt --input image.png``

This will run inference on the specified image file or all images in the folder. A visualization of the inference results, including the predicted heatmap and segmentation results (if applicable), will be displayed on the screen, like the example below.

@@ -42,9 +42,9 @@ To run OpenVINO inference, first make sure that your model has been exported to
 +=============+==========+======================================================================================+
 | config      | True     | Path to the model config file.                                                       |
 +-------------+----------+--------------------------------------------------------------------------------------+
-| weight_path | True     | Path to the OpenVINO IR model file (either ``.xml`` or ``.bin``)                     |
+| weights     | True     | Path to the OpenVINO IR model file (either ``.xml`` or ``.bin``)                     |
 +-------------+----------+--------------------------------------------------------------------------------------+
-| image_path  | True     | Path to the image source. This can be a single image or a folder of images.          |
+| image       | True     | Path to the image source. This can be a single image or a folder of images.          |
 +-------------+----------+--------------------------------------------------------------------------------------+
 | save_data   | False    | Path to which the output images should be saved. Leave empty for live visualization. |
 +-------------+----------+--------------------------------------------------------------------------------------+
Expand All @@ -56,6 +56,6 @@ For correct inference results, the ``meta_data`` argument should be specified an

As an example, OpenVINO inference can be triggered by the following command:

-``python tools/inference/openvino.py --config padim.yaml --weight_path results/openvino/model.xml --image_path image.png --meta_data results/openvino/meta_data.json``
+``python tools/inference/openvino.py --config padim.yaml --weights results/openvino/model.xml --input image.png --meta_data results/openvino/meta_data.json``

Similar to PyTorch inference, the visualization results will be displayed on the screen, and optionally saved to the file system location specified by the ``save_data`` parameter.
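For reference, the OpenVINO path can also be driven from Python. This sketch reuses the dynamic-import pattern and the constructor arguments visible in this PR's `gradio_inference.py`; the `predict` call at the end is an assumption about the `Inferencer` interface rather than a documented signature:

```python
from importlib import import_module
from pathlib import Path

# Same dynamic-import pattern as gradio_inference.py in this PR.
module = import_module("anomalib.deploy")
OpenVINOInferencer = getattr(module, "OpenVINOInferencer")

inferencer = OpenVINOInferencer(
    config=Path("padim.yaml"),
    path=Path("results/openvino/model.xml"),
    meta_data_path=Path("results/openvino/meta_data.json"),
)
result = inferencer.predict(image="image.png")  # assumed API, returns the prediction
```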
9 changes: 3 additions & 6 deletions tests/helpers/model.py
@@ -72,12 +72,6 @@ def setup_model_train(
     if legacy_device in config.trainer:
         config.trainer[legacy_device] = None

-    # If weight file is empty, remove the key from config
-    if "weight_file" in config.model.keys() and weight_file == "":
-        config.model.pop("weight_file")
-    else:
-        config.model.weight_file = weight_file if not fast_run else "weights/last.ckpt"

     if nncf:
         config.optimization["nncf"] = {"apply": True, "input_info": {"sample_size": None}}
         config = update_nncf_config(config)
@@ -132,6 +126,9 @@ def model_load_test(config: Union[DictConfig, ListConfig], datamodule: Lightning

"""
loaded_model = get_model(config) # get new model
# Assing the weight file to resume_from_checkpoint. When trainer is initialized, Trainer
# object will automatically load the weights.
config.trainer.resume_from_checkpoint = os.path.join(config.project.path, "weights/last.ckpt")

callbacks = get_callbacks(config)

36 changes: 15 additions & 21 deletions tools/inference/gradio_inference.py
@@ -43,37 +43,32 @@ def infer(


 def get_args() -> Namespace:
-    """Get command line arguments.
+    r"""Get command line arguments.
Review comment (Collaborator): Is the r-string supported in the new Python?

Reply (Contributor, author): The linters fail otherwise.

-    Example:
-
-        >>> python tools/inference_gradio.py \
-            --config_path ./anomalib/models/padim/config.yaml \
-            --weight_path ./results/padim/mvtec/bottle/weights/model.ckpt
+    Example for Torch Inference.
+        >>> python tools/inference/gradio_inference.py \
+        ...     --config ./anomalib/models/padim/config.yaml \
+        ...     --weights ./results/padim/mvtec/bottle/weights/model.ckpt  # noqa: E501 #pylint: disable=line-too-long

     Returns:
         Namespace: List of arguments.
     """
     parser = ArgumentParser()
parser.add_argument("--config_path", type=Path, required=True, help="Path to a model config file")
parser.add_argument("--weight_path", type=Path, required=True, help="Path to a model weights")
parser.add_argument(
"--meta_data_path", type=Path, required=False, help="Path to JSON file containing the metadata."
)

parser.add_argument("--config", type=Path, required=True, help="Path to a config file")
parser.add_argument("--weights", type=Path, required=True, help="Path to model weights")
parser.add_argument("--meta_data", type=Path, required=False, help="Path to a JSON file containing the metadata.")
parser.add_argument(
"--threshold",
type=float,
required=False,
default=75.0,
help="Value to threshold anomaly scores into 0-100 range",
)

parser.add_argument("--share", type=bool, required=False, default=False, help="Share Gradio `share_url`")

args = parser.parse_args()

return args
return parser.parse_args()
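A side note on the unchanged `--share` flag above: `type=bool` is a well-known argparse pitfall, since argparse feeds the raw string through `bool()` and any non-empty string is truthy. A quick demonstration:

```python
from argparse import ArgumentParser

parser = ArgumentParser()
parser.add_argument("--share", type=bool, default=False)

# bool("False") is True, so the flag cannot actually be switched off this way.
print(parser.parse_args(["--share", "False"]).share)  # prints: True
```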


@@ -96,15 +91,14 @@ def get_inferencer(config_path: Path, weight_path: Path, meta_data_path: Optiona
     # for the openvino models.
     extension = weight_path.suffix
     inferencer: Inferencer
+    module = import_module("anomalib.deploy")
     if extension in (".ckpt"):
-        module = import_module("anomalib.deploy.inferencers.torch")
         torch_inferencer = getattr(module, "TorchInferencer")
         inferencer = torch_inferencer(config=config, model_source=weight_path, meta_data_path=meta_data_path)

     elif extension in (".onnx", ".bin", ".xml"):
-        module = import_module("anomalib.deploy.inferencers.openvino")
         openvino_inferencer = getattr(module, "OpenVINOInferencer")
-        inferencer = openvino_inferencer(config=config, path=weight_path, meta_data_path=meta_data_path)
+        inferencer = openvino_inferencer(config=config_path, path=weight_path, meta_data_path=meta_data_path)

     else:
         raise ValueError(
@@ -116,17 +110,17 @@ def get_inferencer(config_path: Path, weight_path: Path, meta_data_path: Optiona


 if __name__ == "__main__":
-    session_args = get_args()
+    args = get_args()

-    gradio_inferencer = get_inferencer(session_args.config_path, session_args.weight_path, session_args.meta_data_path)
+    gradio_inferencer = get_inferencer(args.config, args.weights, args.meta_data)

     interface = gr.Interface(
         fn=lambda image, threshold: infer(image, gradio_inferencer, threshold),
         inputs=[
             gradio.inputs.Image(
                 shape=None, image_mode="RGB", source="upload", tool="editor", type="numpy", label="Image"
             ),
-            gradio.inputs.Slider(default=session_args.threshold, label="threshold", optional=False),
+            gradio.inputs.Slider(default=args.threshold, label="threshold", optional=False),
         ],
         outputs=[
             gradio.outputs.Image(type="numpy", label="Anomaly Map"),
@@ -139,4 +133,4 @@ def get_inferencer(config_path: Path, weight_path: Path, meta_data_path: Optiona
description="Anomalib Gradio",
)

interface.launch(share=session_args.share)
interface.launch(share=args.share)
24 changes: 12 additions & 12 deletions tools/inference/lightning_inference.py
@@ -22,24 +22,24 @@ def get_args() -> Namespace:
         Namespace: List of arguments.
     """
     parser = ArgumentParser()
-    parser.add_argument("--config", type=Path, required=True, help="Path to a model config file")
-    parser.add_argument("--weight_path", type=Path, required=True, help="Path to a model weights")
-    parser.add_argument("--image_path", type=Path, required=True, help="Path to an image to infer.")
+    parser.add_argument("--config", type=Path, required=True, help="Path to a config file")
+    parser.add_argument("--weights", type=Path, required=True, help="Path to model weights")
+    parser.add_argument("--input", type=Path, required=True, help="Path to image(s) to infer.")
+    parser.add_argument("--output", type=str, required=False, help="Path to save the output image(s).")
     parser.add_argument(
         "--visualization_mode",
         type=str,
         required=False,
         default="simple",
-        help="Visualization mode. 'full' or 'simple'",
+        help="Visualization mode.",
+        choices=["full", "simple"],
     )
     parser.add_argument(
-        "--disable_show_images",
+        "--show",
         action="store_true",
         required=False,
-        help="Do not show the visualized predictions on the screen.",
+        help="Show the visualized predictions on the screen.",
     )
-    parser.add_argument("--save_path", type=str, required=False, help="Path to save the output images.")

     args = parser.parse_args()
     return args
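Replacing the free-form help text with argparse's `choices` means an invalid mode now fails at parse time instead of slipping through to the visualizer. For example:

```python
from argparse import ArgumentParser

parser = ArgumentParser()
parser.add_argument("--visualization_mode", default="simple", choices=["full", "simple"])

print(parser.parse_args([]).visualization_mode)  # simple
print(parser.parse_args(["--visualization_mode", "full"]).visualization_mode)  # full
parser.parse_args(["--visualization_mode", "fancy"])  # exits: invalid choice: 'fancy'
```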
@@ -49,12 +49,12 @@ def infer():
"""Run inference."""
args = get_args()
config = get_configurable_parameters(config_path=args.config)
config.model["weight_file"] = str(args.weight_path)
config.visualization.show_images = not args.disable_show_images
config.trainer.resume_from_checkpoint = str(args.weights)
config.visualization.show_images = args.show
config.visualization.mode = args.visualization_mode
if args.save_path: # overwrite save path
if args.output: # overwrite save path
config.visualization.save_images = True
config.visualization.image_save_path = args.save_path
config.visualization.image_save_path = args.output
else:
config.visualization.save_images = False

@@ -63,7 +63,7 @@ def infer():

     trainer = Trainer(callbacks=callbacks, **config.trainer)

-    dataset = InferenceDataset(args.image_path, image_size=tuple(config.dataset.image_size))
+    dataset = InferenceDataset(args.input, image_size=tuple(config.dataset.image_size))
     dataloader = DataLoader(dataset)
     trainer.predict(model=model, dataloaders=[dataloader])
