
Commit

Add links to collab (#7854)
NicolasHug authored Aug 18, 2023
1 parent 90e0b79 commit 59b27ed
Showing 14 changed files with 74 additions and 3 deletions.
4 changes: 4 additions & 0 deletions .github/workflows/docs.yml
@@ -56,6 +56,10 @@ jobs:
sed -i -e 's/-j auto/-j 1/' Makefile
make html
# Below is an imperfect way for us to add "try on Colab" links to all of our gallery examples.
# sphinx-gallery converts all gallery examples to .ipynb notebooks and stores them in
# build/html/_downloads/<some_hash>/<example_name>.ipynb
# We copy all those ipynb files into a more convenient folder so that we can link to them more easily.
mkdir build/html/_generated_ipynb_notebooks
for file in `find build/html/_downloads`; do
if [[ $file == *.ipynb ]]; then
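A minimal sketch of what the full copy step presumably looks like; the cp line and the closing of the loop are assumptions, since the hunk above is truncated:

    mkdir build/html/_generated_ipynb_notebooks
    for file in `find build/html/_downloads`; do
      if [[ $file == *.ipynb ]]; then
        # copy each generated notebook into the flat folder that the gallery notes link to
        cp "$file" build/html/_generated_ipynb_notebooks/
      fi
    done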
20 changes: 20 additions & 0 deletions docs/source/conf.py
@@ -56,6 +56,26 @@
"beta_status",
]

# We override sphinx-gallery's example header to prevent sphinx-gallery from
# creating a note at the top of the rendered notebook.
# https://github.com/sphinx-gallery/sphinx-gallery/blob/451ccba1007cc523f39cbcc960ebc21ca39f7b75/sphinx_gallery/gen_rst.py#L1267-L1271
# This is because we also want to add a link to Google Colab, so we write our own note in each example.
from sphinx_gallery import gen_rst

gen_rst.EXAMPLE_HEADER = """
.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "{0}"
.. LINE NUMBERS ARE GIVEN BELOW.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_{1}:
"""


sphinx_gallery_conf = {
"examples_dirs": "../../gallery/", # path to your example scripts
"gallery_dirs": "auto_examples", # path to where to save gallery generated output
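As a rough illustration of how the two placeholders in the overridden EXAMPLE_HEADER get filled when sphinx-gallery generates an example's .rst page (the source path and anchor suffix below are made-up values, and the exact call site inside gen_rst may differ):

    from sphinx_gallery import gen_rst

    # {0} is the path to the example's source .py file; {1} is the suffix of the
    # example's sphx_glr_ reference label. Both values here are hypothetical.
    header = gen_rst.EXAMPLE_HEADER.format(
        "gallery/others/plot_optical_flow.py",
        "auto_examples_others_plot_optical_flow.py",
    )
    print(header)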
4 changes: 4 additions & 0 deletions gallery/others/plot_optical_flow.py
@@ -3,6 +3,10 @@
Optical Flow: Predicting movement with the RAFT model
=====================================================
.. note::
    Try on `Colab <https://colab.research.google.com/github/pytorch/vision/blob/gh-pages/main/_generated_ipynb_notebooks/plot_optical_flow.ipynb>`_
    or :ref:`go to the end <sphx_glr_download_auto_examples_others_plot_optical_flow.py>` to download the full example code.

Optical flow is the task of predicting movement between two images, usually two
consecutive frames of a video. Optical flow models take two images as input, and
predict a flow: the flow indicates the displacement of every single pixel in the
4 changes: 4 additions & 0 deletions gallery/others/plot_repurposing_annotations.py
@@ -3,6 +3,10 @@
Repurposing masks into bounding boxes
=====================================
.. note::
    Try on `Colab <https://colab.research.google.com/github/pytorch/vision/blob/gh-pages/main/_generated_ipynb_notebooks/plot_repurposing_annotations.ipynb>`_
    or :ref:`go to the end <sphx_glr_download_auto_examples_others_plot_repurposing_annotations.py>` to download the full example code.

The following example illustrates the operations available
in the :ref:`torchvision.ops <ops>` module for repurposing
segmentation masks into object localization annotations for different tasks
4 changes: 4 additions & 0 deletions gallery/others/plot_scripted_tensor_transforms.py
@@ -3,6 +3,10 @@
Tensor transforms and JIT
=========================
.. note::
    Try on `Colab <https://colab.research.google.com/github/pytorch/vision/blob/gh-pages/main/_generated_ipynb_notebooks/plot_scripted_tensor_transforms.ipynb>`_
    or :ref:`go to the end <sphx_glr_download_auto_examples_others_plot_scripted_tensor_transforms.py>` to download the full example code.

This example illustrates various features that are now supported by the
:ref:`image transformations <transforms>` on Tensor images. In particular, we
show how image transforms can be performed on GPU, and how one can also script
4 changes: 4 additions & 0 deletions gallery/others/plot_transforms.py
@@ -3,6 +3,10 @@
Illustration of transforms
==========================
.. note::
    Try on `Colab <https://colab.research.google.com/github/pytorch/vision/blob/gh-pages/main/_generated_ipynb_notebooks/plot_transforms.ipynb>`_
    or :ref:`go to the end <sphx_glr_download_auto_examples_others_plot_transforms.py>` to download the full example code.

This example illustrates the various transforms available in :ref:`the
torchvision.transforms module <transforms>`.
"""
8 changes: 6 additions & 2 deletions gallery/others/plot_video_api.py
@@ -1,7 +1,11 @@
"""
=======================
=========
Video API
=======================
=========
.. note::
    Try on `Colab <https://colab.research.google.com/github/pytorch/vision/blob/gh-pages/main/_generated_ipynb_notebooks/plot_video_api.ipynb>`_
    or :ref:`go to the end <sphx_glr_download_auto_examples_others_plot_video_api.py>` to download the full example code.

This example illustrates some of the APIs that torchvision offers for
videos, together with the examples on how to build datasets and more.
4 changes: 4 additions & 0 deletions gallery/others/plot_visualization_utils.py
@@ -3,6 +3,10 @@
Visualization utilities
=======================
.. note::
    Try on `Colab <https://colab.research.google.com/github/pytorch/vision/blob/gh-pages/main/_generated_ipynb_notebooks/plot_visualization_utils.ipynb>`_
    or :ref:`go to the end <sphx_glr_download_auto_examples_others_plot_visualization_utils.py>` to download the full example code.

This example illustrates some of the utilities that torchvision offers for
visualizing images, bounding boxes, segmentation masks and keypoints.
"""
4 changes: 4 additions & 0 deletions gallery/v2_transforms/plot_custom_datapoints.py
@@ -3,6 +3,10 @@
How to write your own Datapoint class
=====================================
.. note::
    Try on `Colab <https://colab.research.google.com/github/pytorch/vision/blob/gh-pages/main/_generated_ipynb_notebooks/plot_custom_datapoints.ipynb>`_
    or :ref:`go to the end <sphx_glr_download_auto_examples_v2_transforms_plot_custom_datapoints.py>` to download the full example code.

This guide is intended for advanced users and downstream library maintainers. We explain how to
write your own datapoint class, and how to make it compatible with the built-in
Torchvision v2 transforms. Before continuing, make sure you have read
4 changes: 4 additions & 0 deletions gallery/v2_transforms/plot_custom_transforms.py
@@ -3,6 +3,10 @@
How to write your own v2 transforms
===================================
.. note::
    Try on `Colab <https://colab.research.google.com/github/pytorch/vision/blob/gh-pages/main/_generated_ipynb_notebooks/plot_custom_transforms.ipynb>`_
    or :ref:`go to the end <sphx_glr_download_auto_examples_v2_transforms_plot_custom_transforms.py>` to download the full example code.

This guide explains how to write transforms that are compatible with the
torchvision transforms V2 API.
"""
4 changes: 4 additions & 0 deletions gallery/v2_transforms/plot_cutmix_mixup.py
@@ -4,6 +4,10 @@
How to use CutMix and MixUp
===========================
.. note::
    Try on `Colab <https://colab.research.google.com/github/pytorch/vision/blob/gh-pages/main/_generated_ipynb_notebooks/plot_cutmix_mixup.ipynb>`_
    or :ref:`go to the end <sphx_glr_download_auto_examples_v2_transforms_plot_cutmix_mixup.py>` to download the full example code.

:class:`~torchvision.transforms.v2.CutMix` and
:class:`~torchvision.transforms.v2.MixUp` are popular augmentation strategies
that can improve classification accuracy.
5 changes: 4 additions & 1 deletion gallery/v2_transforms/plot_datapoints.py
@@ -3,7 +3,10 @@
Datapoints FAQ
==============
https://colab.research.google.com/github/pytorch/vision/blob/gh-pages/_generated_ipynb_notebooks/plot_datapoints.ipynb
.. note::
    Try on `Colab <https://colab.research.google.com/github/pytorch/vision/blob/gh-pages/main/_generated_ipynb_notebooks/plot_datapoints.ipynb>`_
    or :ref:`go to the end <sphx_glr_download_auto_examples_v2_transforms_plot_datapoints.py>` to download the full example code.

Datapoints are Tensor subclasses introduced together with
``torchvision.transforms.v2``. This example showcases what these datapoints are
4 changes: 4 additions & 0 deletions gallery/v2_transforms/plot_transforms_v2.py
@@ -3,6 +3,10 @@
Getting started with transforms v2
==================================
.. note::
    Try on `Colab <https://colab.research.google.com/github/pytorch/vision/blob/gh-pages/main/_generated_ipynb_notebooks/plot_transforms_v2.ipynb>`_
    or :ref:`go to the end <sphx_glr_download_auto_examples_v2_transforms_plot_transforms_v2.py>` to download the full example code.

Most computer vision tasks are not supported out of the box by ``torchvision.transforms`` v1, since it only supports
images. ``torchvision.transforms.v2`` enables jointly transforming images, videos, bounding boxes, and masks. This
example showcases the core functionality of the new ``torchvision.transforms.v2`` API.
4 changes: 4 additions & 0 deletions gallery/v2_transforms/plot_transforms_v2_e2e.py
@@ -3,6 +3,10 @@
Transforms v2: End-to-end object detection example
==================================================
.. note::
    Try on `Colab <https://colab.research.google.com/github/pytorch/vision/blob/gh-pages/main/_generated_ipynb_notebooks/plot_transforms_v2_e2e.ipynb>`_
    or :ref:`go to the end <sphx_glr_download_auto_examples_v2_transforms_plot_transforms_v2_e2e.py>` to download the full example code.

Object detection is not supported out of the box by ``torchvision.transforms`` v1, since it only supports images.
``torchvision.transforms.v2`` enables jointly transforming images, videos, bounding boxes, and masks. This example
showcases an end-to-end object detection training using the stable ``torchvision.datasets`` and ``torchvision.models``
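All of the notes added above follow the same link pattern: Colab's GitHub loader pointed at the notebook copies published under the main/_generated_ipynb_notebooks folder of the gh-pages branch. A small illustration of that pattern; the helper function is hypothetical and not part of this commit:

    # Colab URL structure used by the notes above:
    # https://colab.research.google.com/github/<org>/<repo>/blob/<branch>/<path-to-notebook>
    def colab_url(example_name: str) -> str:
        base = "https://colab.research.google.com/github/pytorch/vision/blob/gh-pages"
        return f"{base}/main/_generated_ipynb_notebooks/{example_name}.ipynb"

    print(colab_url("plot_transforms_v2"))
    # https://colab.research.google.com/github/pytorch/vision/blob/gh-pages/main/_generated_ipynb_notebooks/plot_transforms_v2.ipynb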
