2023-08-18 nightly release (87d54c4)
pytorchbot committed Aug 18, 2023
1 parent 463ac8d commit 76ad862
Showing 56 changed files with 1,012 additions and 805 deletions.
1 change: 1 addition & 0 deletions .github/workflows/docs.yml
Original file line number Diff line number Diff line change
@@ -118,5 +118,6 @@ jobs:
git config user.name 'pytorchbot'
git config user.email 'soumith+bot@pytorch.org'
git config http.postBuffer 524288000
git commit -m "auto-generating sphinx docs" || true
git push
45 changes: 32 additions & 13 deletions CONTRIBUTING.md
@@ -30,30 +30,49 @@ clear and has sufficient instructions to be able to reproduce the issue.

## Development installation

### Install PyTorch Nightly

### Dependencies

Start by installing the **nightly** build of PyTorch following the [official
instructions](https://pytorch.org/get-started/locally/).

**Optionally**, install `libpng` and `libjpeg-turbo` if you want to enable
support for
native encoding / decoding of PNG and JPEG formats in
[torchvision.io](https://pytorch.org/vision/stable/io.html#image):

```bash
conda install pytorch -c pytorch-nightly
# or with pip (see https://pytorch.org/get-started/locally/)
# pip install numpy
# pip install --pre torch -f https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html
conda install libpng libjpeg-turbo -c pytorch
```

### Install Torchvision
Note: you can use the `TORCHVISION_INCLUDE` and `TORCHVISION_LIBRARY`
environment variables to tell the build system where to find those libraries if
they are in specific locations. Take a look at
[setup.py](https://github.com/pytorch/vision/blob/main/setup.py) for more
details.
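
For example, before building you could export both variables (the install prefix below is hypothetical; point it at wherever the headers and libraries actually live on your system):

```shell
# Hypothetical prefix; adjust to your libpng / libjpeg-turbo install location.
export TORCHVISION_INCLUDE=/opt/libjpeg-turbo/include
export TORCHVISION_LIBRARY=/opt/libjpeg-turbo/lib
```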

### Clone and install torchvision

```bash
git clone https://github.com/pytorch/vision.git
cd vision
python setup.py develop # use install instead of develop if you don't care about development.
# or, for OSX
# MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py develop
# for C++ debugging, use DEBUG=1
# DEBUG=1 python setup.py develop
```
You may also have to install `libpng-dev` and `libjpeg-turbo8-dev` libraries:
```bash
conda install libpng jpeg
```

By default, GPU support is built if CUDA is found and `torch.cuda.is_available()` is true. It's possible to force
building GPU support by setting the `FORCE_CUDA=1` environment variable, which is useful when building a docker image.

We don't officially support building from source using `pip`, but _if_ you do, you'll need to use the
`--no-build-isolation` flag.
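
Putting those two notes together, a from-source build with forced CUDA support might look like this (a sketch, assuming you run it from the cloned `vision` directory):

```shell
# Force-build the CUDA ops even if no GPU is visible at build time
# (useful when building a docker image on a CPU-only host).
FORCE_CUDA=1 python setup.py develop

# Or, if you must build with pip, disable build isolation so the
# already-installed nightly torch is used:
# FORCE_CUDA=1 pip install --no-build-isolation -e .
```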

Other development dependencies include:

```
pip install flake8 typing mypy pytest pytest-mock scipy
```

## Development Process
@@ -192,7 +211,7 @@ Please refer to the guidelines in [Contributing to Torchvision - Models](https:/

### New dataset

Please do not send any PR with a new dataset without discussing
it in an issue first, as it will most likely not be accepted.

### Pull Request
62 changes: 16 additions & 46 deletions README.md
@@ -8,8 +8,14 @@ vision.

## Installation

We recommend Anaconda as Python package management system. Please refer to [pytorch.org](https://pytorch.org/) for the
detail of PyTorch (`torch`) installation. The following is the corresponding `torchvision` versions and supported Python
Please refer to the [official
instructions](https://pytorch.org/get-started/locally/) to install the stable
versions of `torch` and `torchvision` on your system.

To build source, refer to our [contributing
page](https://github.com/pytorch/vision/blob/main/CONTRIBUTING.md#development-installation).

The following table lists the corresponding `torchvision` versions and supported Python
versions.

| `torch` | `torchvision` | Python |
@@ -39,54 +45,18 @@ versions.

</details>

Anaconda:

```
conda install torchvision -c pytorch
```

pip:

```
pip install torchvision
```
## Image Backends

From source:

```
python setup.py install
# or, for OSX
# MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install
```

We don't officially support building from source using `pip`, but _if_ you do, you'll need to use the
`--no-build-isolation` flag. In case building TorchVision from source fails, install the nightly version of PyTorch
following the linked guide on the
[contributing page](https://github.com/pytorch/vision/blob/main/CONTRIBUTING.md#development-installation) and retry the
install.

By default, GPU support is built if CUDA is found and `torch.cuda.is_available()` is true. It's possible to force
building GPU support by setting `FORCE_CUDA=1` environment variable, which is useful when building a docker image.
Torchvision currently supports the following image backends:

## Image Backend
- torch tensors
- PIL images:
- [Pillow](https://python-pillow.org/)
- [Pillow-SIMD](https://github.com/uploadcare/pillow-simd) - a **much faster** drop-in replacement for Pillow with SIMD.

Torchvision currently supports the following image backends:
Read more in our [docs](https://pytorch.org/vision/stable/transforms.html).

- [Pillow](https://python-pillow.org/) (default)
- [Pillow-SIMD](https://github.com/uploadcare/pillow-simd) - a **much faster** drop-in replacement for Pillow with SIMD.
If installed, it will be used as the default.
- [accimage](https://github.com/pytorch/accimage) - if installed can be activated by calling
`torchvision.set_image_backend('accimage')`
- [libpng](http://www.libpng.org/pub/png/libpng.html) - can be installed via conda `conda install libpng` or any of the
package managers for debian-based and RHEL-based Linux distributions.
- [libjpeg](http://ijg.org/) - can be installed via conda `conda install jpeg` or any of the package managers for
debian-based and RHEL-based Linux distributions. [libjpeg-turbo](https://libjpeg-turbo.org/) can be used as well.

**Notes:** `libpng` and `libjpeg` must be available at compilation time for native image support. Make sure they are
available in the standard library locations; otherwise, add the include and library paths to the environment variables
`TORCHVISION_INCLUDE` and `TORCHVISION_LIBRARY`, respectively.

## Video Backend
## [UNSTABLE] Video Backend

Torchvision currently supports the following video backends:

2 changes: 2 additions & 0 deletions docs/source/conf.py
@@ -29,6 +29,7 @@
import pytorch_sphinx_theme
import torchvision
import torchvision.models as M
from sphinx_gallery.sorting import ExplicitOrder
from tabulate import tabulate

sys.path.append(os.path.abspath("."))
@@ -61,6 +62,7 @@
sphinx_gallery_conf = {
"examples_dirs": "../../gallery/", # path to your example scripts
"gallery_dirs": "auto_examples", # path to where to save gallery generated output
"subsection_order": ExplicitOrder(["../../gallery/v2_transforms", "../../gallery/others"]),
"backreferences_dir": "gen_modules/backreferences",
"doc_module": ("torchvision",),
"remove_config_comments": True,
2 changes: 1 addition & 1 deletion docs/source/datapoints.rst
@@ -6,7 +6,7 @@ Datapoints
Datapoints are tensor subclasses which the :mod:`~torchvision.transforms.v2` transforms use under the hood to
dispatch their inputs to the appropriate lower-level kernels. Most users do not
need to manipulate datapoints directly and can simply rely on dataset wrapping -
see e.g. :ref:`sphx_glr_auto_examples_plot_transforms_v2_e2e.py`.
see e.g. :ref:`sphx_glr_auto_examples_v2_transforms_plot_transforms_v2_e2e.py`.

.. autosummary::
:toctree: generated/
61 changes: 31 additions & 30 deletions docs/source/io.rst
@@ -1,11 +1,37 @@
Reading/Writing images and videos
=================================
Decoding / Encoding images and videos
=====================================

.. currentmodule:: torchvision.io

The :mod:`torchvision.io` package provides functions for performing IO
operations. They are currently specific to reading and writing images and
videos.

Images
------

.. autosummary::
:toctree: generated/
:template: function.rst

read_image
decode_image
encode_jpeg
decode_jpeg
write_jpeg
encode_png
decode_png
write_png
read_file
write_file

.. autosummary::
:toctree: generated/
:template: class.rst

ImageReadMode



Video
-----
@@ -20,7 +46,7 @@ Video


Fine-grained video API
----------------------
^^^^^^^^^^^^^^^^^^^^^^

In addition to the :func:`read_video` function, we provide a high-performance
lower-level API for more fine-grained control over video decoding.
@@ -61,28 +87,3 @@ Example of inspecting a video:
# the constructor we select a default video stream, but
# in practice, we can set whichever stream we would like
video.set_current_stream("video:0")
13 changes: 7 additions & 6 deletions docs/source/transforms.rst
Original file line number Diff line number Diff line change
@@ -13,7 +13,7 @@ Transforming and augmenting images
are fully backward compatible with the current ones, and you'll see them
documented below with a `v2.` prefix. To get started with those new
transforms, you can check out
:ref:`sphx_glr_auto_examples_plot_transforms_v2_e2e.py`.
:ref:`sphx_glr_auto_examples_v2_transforms_plot_transforms_v2_e2e.py`.
Note that these transforms are still BETA, and while we don't expect major
breaking changes in the future, some APIs may still change according to user
feedback. Please submit any feedback you may have `here
@@ -54,15 +54,15 @@ across calls. For reproducible transformations across calls, you may use

The following examples illustrate the use of the available transforms:

* :ref:`sphx_glr_auto_examples_plot_transforms.py`
* :ref:`sphx_glr_auto_examples_others_plot_transforms.py`

.. figure:: ../source/auto_examples/images/sphx_glr_plot_transforms_001.png
.. figure:: ../source/auto_examples/others/images/sphx_glr_plot_transforms_001.png
:align: center
:scale: 65%

* :ref:`sphx_glr_auto_examples_plot_scripted_tensor_transforms.py`
* :ref:`sphx_glr_auto_examples_others_plot_scripted_tensor_transforms.py`

.. figure:: ../source/auto_examples/images/sphx_glr_plot_scripted_tensor_transforms_001.png
.. figure:: ../source/auto_examples/others/images/sphx_glr_plot_scripted_tensor_transforms_001.png
:align: center
:scale: 30%

@@ -237,6 +237,7 @@ Conversion
v2.ConvertImageDtype
v2.ToDtype
v2.ConvertBoundingBoxFormat
v2.ToPureTensor

Auto-Augmentation
-----------------
@@ -268,7 +269,7 @@ CutMix and MixUp are special transforms that
are meant to be used on batches rather than on individual images, because they
combine pairs of images together. They can be used after the dataloader
(once the samples are batched) or as part of a collation function. See
:ref:`sphx_glr_auto_examples_plot_cutmix_mixup.py` for detailed usage examples.
:ref:`sphx_glr_auto_examples_v2_transforms_plot_cutmix_mixup.py` for detailed usage examples.

.. autosummary::
:toctree: generated/
2 changes: 1 addition & 1 deletion docs/source/utils.rst
@@ -4,7 +4,7 @@ Utils
=====

The ``torchvision.utils`` module contains various utilities, mostly :ref:`for
visualization <sphx_glr_auto_examples_plot_visualization_utils.py>`.
visualization <sphx_glr_auto_examples_others_plot_visualization_utils.py>`.

.. currentmodule:: torchvision.utils

6 changes: 2 additions & 4 deletions gallery/README.rst
@@ -1,4 +1,2 @@
Example gallery
===============

Below is a gallery of examples
Examples and tutorials
======================
2 changes: 2 additions & 0 deletions gallery/others/README.rst
@@ -0,0 +1,2 @@
Others
------
File renamed without changes.
@@ -20,7 +20,7 @@
import torchvision.transforms.functional as F


ASSETS_DIRECTORY = "assets"
ASSETS_DIRECTORY = "../assets"

plt.rcParams["savefig.bbox"] = "tight"

@@ -49,16 +49,16 @@ def show(imgs):
# The :func:`~torchvision.io.read_image` function lets you read an image and
# directly load it as a tensor

dog1 = read_image(str(Path('assets') / 'dog1.jpg'))
dog2 = read_image(str(Path('assets') / 'dog2.jpg'))
dog1 = read_image(str(Path('../assets') / 'dog1.jpg'))
dog2 = read_image(str(Path('../assets') / 'dog2.jpg'))
show([dog1, dog2])

# %%
# Transforming images on GPU
# --------------------------
# Most transforms natively support tensors on top of PIL images (to visualize
# the effect of the transforms, you may refer to
# :ref:`sphx_glr_auto_examples_others_plot_transforms.py`).
# :ref:`sphx_glr_auto_examples_others_plot_transforms.py`).
# Using tensor images, we can run the transforms on GPUs if cuda is available!

import torch.nn as nn
@@ -121,7 +121,7 @@ def forward(self, x: torch.Tensor) -> torch.Tensor:

import json

with open(Path('assets') / 'imagenet_class_index.json') as labels_file:
with open(Path('../assets') / 'imagenet_class_index.json') as labels_file:
labels = json.load(labels_file)

for i, (pred, pred_scripted) in enumerate(zip(res, res_scripted)):
@@ -19,7 +19,7 @@


plt.rcParams["savefig.bbox"] = 'tight'
orig_img = Image.open(Path('assets') / 'astronaut.jpg')
orig_img = Image.open(Path('../assets') / 'astronaut.jpg')
# if you change the seed, make sure that the randomly-applied transforms
# properly show that the image can be both transformed and *not* transformed!
torch.manual_seed(0)
File renamed without changes.
@@ -41,8 +41,8 @@ def show(imgs):
from torchvision.io import read_image
from pathlib import Path

dog1_int = read_image(str(Path('assets') / 'dog1.jpg'))
dog2_int = read_image(str(Path('assets') / 'dog2.jpg'))
dog1_int = read_image(str(Path('../assets') / 'dog1.jpg'))
dog2_int = read_image(str(Path('../assets') / 'dog2.jpg'))
dog_list = [dog1_int, dog2_int]

grid = make_grid(dog_list)
@@ -360,7 +360,7 @@ def show(imgs):
from torchvision.models.detection import keypointrcnn_resnet50_fpn, KeypointRCNN_ResNet50_FPN_Weights
from torchvision.io import read_image

person_int = read_image(str(Path("assets") / "person1.jpg"))
person_int = read_image(str(Path("../assets") / "person1.jpg"))

weights = KeypointRCNN_ResNet50_FPN_Weights.DEFAULT
transforms = weights.transforms()
2 changes: 2 additions & 0 deletions gallery/v2_transforms/README.rst
@@ -0,0 +1,2 @@
V2 transforms
-------------