Merge branch 'main' into trivialaugment_implementation
datumbox authored Aug 31, 2021
2 parents 425c52d + 3a7e5e3 commit fa8a6d5
Showing 20 changed files with 377 additions and 230 deletions.
52 changes: 0 additions & 52 deletions .github/ISSUE_TEMPLATE/bug-report.md

This file was deleted.

60 changes: 60 additions & 0 deletions .github/ISSUE_TEMPLATE/bug-report.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,60 @@
name: 🐛 Bug Report
description: Create a report to help us reproduce and fix the bug

body:
- type: markdown
attributes:
value: >
#### Before submitting a bug, please make sure the issue hasn't been already addressed by searching through [the existing and past issues](https://github.com/pytorch/vision/issues?q=is%3Aissue+sort%3Acreated-desc+).
- type: textarea
attributes:
label: 🐛 Describe the bug
description: |
Please provide a clear and concise description of what the bug is.
If relevant, add a minimal example so that we can reproduce the error by running the code. It is very important for the snippet to be as succinct (minimal) as possible, so please take time to trim down any irrelevant code to help us debug efficiently. We are going to copy-paste your code and we expect to get the same result as you did: avoid any external data, and include the relevant imports. For example:
```python
# All necessary imports at the beginning
import torch
import torchvision
from torchvision.ops import nms
# A succinct reproducing example trimmed down to the essential parts:
N = 5
boxes = torch.rand(N, 4) # Note: the bug is here, we should enforce that x1 < x2 and y1 < y2!
scores = torch.rand(N)
nms(boxes, scores, iou_threshold=.9)
```
If the code is too long (hopefully, it isn't), feel free to put it in a public gist and link it in the issue: https://gist.github.com.
Please also paste or describe the results you observe instead of the expected results. If you observe an error, please paste the error message including the **full** traceback of the exception. It may be relevant to wrap error messages in ```` ```triple-backtick blocks``` ````.
placeholder: |
A clear and concise description of what the bug is.
```python
Sample code to reproduce the problem
```
```
The error message you got, with the full traceback.
```
validations:
required: true
- type: textarea
attributes:
label: Versions
description: |
Please run the following and paste the output below.
```sh
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
validations:
required: true
- type: markdown
attributes:
value: >
Thanks for contributing 🎉!
5 changes: 5 additions & 0 deletions .github/ISSUE_TEMPLATE/config.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
blank_issues_enabled: true
contact_links:
- name: Usage questions
url: https://discuss.pytorch.org/
about: Ask questions and discuss with other torchvision community members
12 changes: 0 additions & 12 deletions .github/ISSUE_TEMPLATE/documentation.md

This file was deleted.

20 changes: 20 additions & 0 deletions .github/ISSUE_TEMPLATE/documentation.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,20 @@
name: 📚 Documentation
description: Report an issue related to https://pytorch.org/vision/stable/index.html

body:
- type: textarea
attributes:
label: 📚 The doc issue
description: >
A clear and concise description of what content in https://pytorch.org/vision/stable/index.html is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new.
validations:
required: true
- type: textarea
attributes:
label: Suggest a potential alternative/fix
description: >
Tell us how we could improve the documentation in this regard.
- type: markdown
attributes:
value: >
Thanks for contributing 🎉!
27 changes: 0 additions & 27 deletions .github/ISSUE_TEMPLATE/feature-request.md

This file was deleted.

32 changes: 32 additions & 0 deletions .github/ISSUE_TEMPLATE/feature-request.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,32 @@
name: 🚀 Feature request
description: Submit a proposal/request for a new torchvision feature

body:
- type: textarea
attributes:
label: 🚀 The feature
description: >
A clear and concise description of the feature proposal
validations:
required: true
- type: textarea
attributes:
label: Motivation, pitch
description: >
Please outline the motivation for the proposal. Is your feature request related to a specific problem? e.g., *"I'm working on X and would like Y to be possible"*. If this is related to another GitHub issue, please link it here too.
validations:
required: true
- type: textarea
attributes:
label: Alternatives
description: >
A description of any alternative solutions or features you've considered, if any.
- type: textarea
attributes:
label: Additional context
description: >
Add any other context or screenshots about the feature request.
- type: markdown
attributes:
value: >
Thanks for contributing 🎉!
16 changes: 0 additions & 16 deletions .github/ISSUE_TEMPLATE/questions-help-support.md

This file was deleted.

4 changes: 2 additions & 2 deletions docs/source/transforms.rst
Original file line number Diff line number Diff line change
Expand Up @@ -24,7 +24,7 @@ number of channels, ``H`` and ``W`` are image height and width. A batch of
Tensor Images is a tensor of ``(B, C, H, W)`` shape, where ``B`` is a number
of images in the batch.

The expected range of the values of a tensor image is implicitely defined by
The expected range of the values of a tensor image is implicitly defined by
the tensor dtype. Tensor images with a float dtype are expected to have
values in ``[0, 1)``. Tensor images with an integer dtype are expected to
have values in ``[0, MAX_DTYPE]`` where ``MAX_DTYPE`` is the largest value
Expand All @@ -35,7 +35,7 @@ images of a given batch, but they will produce different transformations
across calls. For reproducible transformations across calls, you may use
:ref:`functional transforms <functional_transforms>`.

The following examples illustate the use of the available transforms:
The following examples illustrate the use of the available transforms:

* :ref:`sphx_glr_auto_examples_plot_transforms.py`

Expand Down
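The value-range convention fixed up in `transforms.rst` above (float tensor images in ``[0, 1)``, integer ones in ``[0, MAX_DTYPE]``) can be illustrated with a small sketch. This is not torchvision code, just plain Python showing the arithmetic; the helper names are made up for illustration, and torchvision's actual dtype conversion may handle the boundary differently.

```python
# Illustrative sketch of the tensor-image value-range convention.
# Helper names are hypothetical, not part of torchvision's API.

def max_dtype_value(num_bits: int) -> int:
    """Largest value representable by an unsigned integer dtype, e.g. uint8 -> 255."""
    return 2 ** num_bits - 1

def int_to_float_pixel(value: int, num_bits: int = 8) -> float:
    """Map an integer pixel in [0, MAX_DTYPE] to the half-open float range [0, 1)."""
    # Dividing by MAX_DTYPE + 1 keeps the result strictly below 1.0,
    # matching the [0, 1) range the docs describe.
    return value / (max_dtype_value(num_bits) + 1)

print(max_dtype_value(8))       # 255
print(int_to_float_pixel(255))  # 0.99609375
```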
4 changes: 0 additions & 4 deletions mypy.ini
Original file line number Diff line number Diff line change
Expand Up @@ -20,10 +20,6 @@ ignore_errors=True

ignore_errors = True

[mypy-torchvision.models.quantization.*]

ignore_errors = True

[mypy-torchvision.ops.*]

ignore_errors = True
Expand Down
7 changes: 5 additions & 2 deletions torchvision/datasets/ucf101.py
Original file line number Diff line number Diff line change
Expand Up @@ -14,7 +14,9 @@ class UCF101(VisionDataset):
UCF101 is an action recognition video dataset.
This dataset considers every video as a collection of video clips of fixed size, specified
by ``frames_per_clip``, where the step in frames between each clip is given by
``step_between_clips``.
``step_between_clips``. The dataset itself can be downloaded from the dataset website;
annotations that ``annotation_path`` should be pointing to can be downloaded from `here
<https://www.crcv.ucf.edu/data/UCF101/UCF101TrainTestSplits-RecognitionTask.zip>`_.
To give an example, for 2 videos with 10 and 15 frames respectively, if ``frames_per_clip=5``
and ``step_between_clips=5``, the dataset size will be (2 + 3) = 5, where the first two
Expand All @@ -26,7 +28,8 @@ class UCF101(VisionDataset):
Args:
root (string): Root directory of the UCF101 Dataset.
annotation_path (str): path to the folder containing the split files
annotation_path (str): path to the folder containing the split files;
see docstring above for download instructions of these files
frames_per_clip (int): number of frames in a clip.
step_between_clips (int, optional): number of frames between each clip.
fold (int, optional): which fold to use. Should be between 1 and 3.
Expand Down
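The clip-count arithmetic the UCF101 docstring walks through (2 videos of 10 and 15 frames with ``frames_per_clip=5`` and ``step_between_clips=5`` yielding 2 + 3 = 5 clips) can be sketched like this. ``num_clips`` is an assumed helper name for illustration, not a torchvision API.

```python
# Hypothetical sketch of the clip-count arithmetic from the UCF101 docstring.

def num_clips(num_frames: int, frames_per_clip: int, step_between_clips: int) -> int:
    """Number of fixed-size clips extractable from a video of `num_frames` frames."""
    if num_frames < frames_per_clip:
        return 0
    # One clip starts at frame 0, then one more per `step_between_clips` frames
    # while a full clip still fits.
    return (num_frames - frames_per_clip) // step_between_clips + 1

# The docstring's example: videos of 10 and 15 frames.
total = num_clips(10, 5, 5) + num_clips(15, 5, 5)
print(total)  # 5
```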
17 changes: 9 additions & 8 deletions torchvision/io/__init__.py
Original file line number Diff line number Diff line change
@@ -1,4 +1,5 @@
import torch
from typing import Any, Dict, Iterator

from ._video_opt import (
Timebase,
Expand Down Expand Up @@ -33,13 +34,13 @@

if _HAS_VIDEO_OPT:

def _has_video_opt():
def _has_video_opt() -> bool:
return True


else:

def _has_video_opt():
def _has_video_opt() -> bool:
return False


Expand Down Expand Up @@ -99,7 +100,7 @@ class VideoReader:
Currently available options include ``['video', 'audio']``
"""

def __init__(self, path, stream="video"):
def __init__(self, path: str, stream: str = "video") -> None:
if not _has_video_opt():
raise RuntimeError(
"Not compiled with video_reader support, "
Expand All @@ -109,7 +110,7 @@ def __init__(self, path, stream="video"):
)
self._c = torch.classes.torchvision.Video(path, stream)

def __next__(self):
def __next__(self) -> Dict[str, Any]:
"""Decodes and returns the next frame of the current stream.
Frames are encoded as a dict with mandatory
data and pts fields, where data is a tensor, and pts is a
Expand All @@ -126,10 +127,10 @@ def __next__(self):
raise StopIteration
return {"data": frame, "pts": pts}

def __iter__(self):
def __iter__(self) -> Iterator['VideoReader']:
return self

def seek(self, time_s: float):
def seek(self, time_s: float) -> 'VideoReader':
"""Seek within current stream.
Args:
Expand All @@ -144,15 +145,15 @@ def seek(self, time_s: float):
self._c.seek(time_s)
return self

def get_metadata(self):
def get_metadata(self) -> Dict[str, Any]:
"""Returns video metadata
Returns:
(dict): dictionary containing duration and frame rate for every stream
"""
return self._c.get_metadata()

def set_current_stream(self, stream: str):
def set_current_stream(self, stream: str) -> bool:
"""Set current stream.
Explicitly define the stream we are operating on.
Expand Down
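The type annotations added above spell out the iterator protocol `VideoReader` follows: ``__iter__`` returns the reader itself, ``__next__`` yields ``{"data": ..., "pts": ...}`` dicts until `StopIteration`, and ``seek`` returns ``self`` so calls can chain into iteration. Since the real class requires torchvision compiled with `video_reader` support, here is a self-contained stand-in (`FakeVideoReader` is invented for illustration) that mimics that shape:

```python
# A minimal stand-in mimicking the VideoReader iterator protocol; this is
# NOT the real torchvision.io.VideoReader, just an illustration of its shape.
from typing import Any, Dict, Iterator, List

class FakeVideoReader:
    def __init__(self, frames: List[Any], fps: float = 10.0) -> None:
        self._frames = frames
        self._fps = fps
        self._pos = 0

    def __iter__(self) -> Iterator["FakeVideoReader"]:
        return self

    def __next__(self) -> Dict[str, Any]:
        if self._pos >= len(self._frames):
            raise StopIteration
        # Frames are dicts with mandatory "data" and "pts" (presentation
        # timestamp in seconds) fields, as in the real reader's docstring.
        frame = {"data": self._frames[self._pos], "pts": self._pos / self._fps}
        self._pos += 1
        return frame

    def seek(self, time_s: float) -> "FakeVideoReader":
        # Returning self allows chaining, e.g. reader.seek(2.0) then iterating.
        self._pos = int(time_s * self._fps)
        return self

reader = FakeVideoReader(frames=["f0", "f1", "f2"])
print([f["pts"] for f in reader])  # [0.0, 0.1, 0.2]
```

The same pattern — `for frame in reader.seek(t):` — works with the real `VideoReader` when video support is compiled in.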