Merge branch 'main' into char-rnn-tutorial-update
Svetlana Karslioglu authored Jun 1, 2023
2 parents c5fec24 + d078756 commit 6dbcc40
Showing 11 changed files with 56 additions and 36 deletions.
2 changes: 1 addition & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -8,4 +8,4 @@ Fixes #ISSUE_NUMBER
- [ ] The issue that is being fixed is referred in the description (see above "Fixes #ISSUE_NUMBER")
- [ ] Only one issue is addressed in this pull request
- [ ] Labels from the issue that this PR is fixing are added to this pull request
- [ ] No unnessessary issues are included into this pull request.
- [ ] No unnecessary issues are included into this pull request.
3 changes: 3 additions & 0 deletions .github/scripts/docathon-label-sync.py
@@ -14,6 +14,9 @@ def main():
repo = g.get_repo(f'{repo_owner}/{repo_name}')
pull_request = repo.get_pull(pull_request_number)
pull_request_body = pull_request.body
# PR without description
if pull_request_body is None:
return

# get issue number from the PR body
if not re.search(r'#\d{1,5}', pull_request_body):
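
For context, a small sketch (hypothetical values, not part of this diff) of why the new guard matters: re.search raises a TypeError when the PR body is None, so the script now returns early for pull requests submitted without a description.

    import re

    pull_request_body = None  # what the API returns for a PR with no description

    # Without the early return, re.search(pattern, None) would raise:
    #   TypeError: expected string or bytes-like object
    if pull_request_body is None:
        print("PR has no description; nothing to scan")
    elif not re.search(r'#\d{1,5}', pull_request_body):
        print("No issue number found in the PR body")
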
10 changes: 10 additions & 0 deletions beginner_source/finetuning_torchvision_models_tutorial.rst
@@ -0,0 +1,10 @@
Finetuning Torchvision Models
=============================

This tutorial has been moved to https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html

It will redirect in 3 seconds.

.. raw:: html

<meta http-equiv="Refresh" content="3; url='https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html'" />
5 changes: 4 additions & 1 deletion beginner_source/former_torchies/parallelism_tutorial.py
@@ -53,7 +53,10 @@ def forward(self, x):

class MyDataParallel(nn.DataParallel):
def __getattr__(self, name):
return getattr(self.module, name)
try:
return super().__getattr__(name)
except AttributeError:
return getattr(self.module, name)

########################################################################
# **Primitives on which DataParallel is implemented upon:**
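
For context, a minimal sketch (the toy model and attribute names are hypothetical) of why the updated __getattr__ defers to nn.DataParallel first: the old one-liner re-entered __getattr__ while resolving self.module and could recurse without end, whereas the new version only falls back to the wrapped model for attributes the wrapper itself cannot resolve.

    import torch
    import torch.nn as nn

    class MyDataParallel(nn.DataParallel):
        def __getattr__(self, name):
            try:
                # Attributes the wrapper knows about (e.g. ``module``) are
                # resolved by the regular nn.Module machinery.
                return super().__getattr__(name)
            except AttributeError:
                # Everything else is looked up on the wrapped model.
                return getattr(self.module, name)

    class ToyModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(4, 2)
            self.custom_flag = True  # attribute DataParallel knows nothing about

        def forward(self, x):
            return self.fc(x)

    wrapped = MyDataParallel(ToyModel())
    print(wrapped.custom_flag)   # True, found via the fallback branch
    print(type(wrapped.module))  # ToyModel, found by super().__getattr__
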
7 changes: 7 additions & 0 deletions beginner_source/introyt/tensorboardyt_tutorial.py
@@ -64,6 +64,13 @@
# PyTorch TensorBoard support
from torch.utils.tensorboard import SummaryWriter

# In case you are using an environment that has TensorFlow installed,
# such as Google Colab, uncomment the following code to avoid
# a bug with saving embeddings to your TensorBoard directory

# import tensorflow as tf
# import tensorboard as tb
# tf.io.gfile = tb.compat.tensorflow_stub.io.gfile

######################################################################
# Showing Images in TensorBoard
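
A short usage sketch (run name and data are made up; the commented-out lines mirror the workaround above and are only needed in environments such as Colab that ship TensorFlow): writer.add_embedding is the embedding-saving call that the workaround keeps working.

    # Workaround from the diff above -- uncomment when TensorFlow is installed.
    # import tensorflow as tf
    # import tensorboard as tb
    # tf.io.gfile = tb.compat.tensorflow_stub.io.gfile

    import torch
    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter('runs/embedding_demo')
    features = torch.randn(100, 16)                  # 100 toy items, 16 dims each
    labels = [str(i % 10) for i in range(100)]       # toy metadata labels
    writer.add_embedding(features, metadata=labels)  # the call affected by the bug
    writer.close()
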
3 changes: 1 addition & 2 deletions beginner_source/nn_tutorial.py
@@ -795,8 +795,7 @@ def __len__(self):
return len(self.dl)

def __iter__(self):
batches = iter(self.dl)
for b in batches:
for b in self.dl:
yield (self.func(*b))

train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
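
For context, the surrounding WrappedDataLoader class, sketched roughly as it appears in the tutorial with the simplified __iter__: the intermediate iter() call was redundant because a for loop already obtains an iterator from self.dl.

    class WrappedDataLoader:
        def __init__(self, dl, func):
            self.dl = dl
            self.func = func

        def __len__(self):
            return len(self.dl)

        def __iter__(self):
            # Iterate the DataLoader directly; ``for`` calls iter() for us.
            for b in self.dl:
                yield (self.func(*b))
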
2 changes: 1 addition & 1 deletion beginner_source/transformer_tutorial.py
@@ -149,7 +149,7 @@ def forward(self, x: Tensor) -> Tensor:
# into ``batch_size`` columns. If the data does not divide evenly into
# ``batch_size`` columns, then the data is trimmed to fit. For instance, with
# the alphabet as the data (total length of 26) and ``batch_size=4``, we would
# divide the alphabet into 4 sequences of length 6:
# divide the alphabet into sequences of length 6, resulting in 4 of such sequences.
#
# .. math::
# \begin{bmatrix}
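
A small numeric sketch (illustrative, not part of the diff) of the trimming the reworded sentence describes: 26 items with batch_size=4 are cut down to 24 and reshaped so each of the 4 columns holds a sequence of length 6.

    import torch

    data = torch.arange(26)               # stand-in for the 26-letter alphabet
    batch_size = 4
    seq_len = data.size(0) // batch_size  # 26 // 4 = 6
    data = data[:seq_len * batch_size]    # trim the 2 leftover elements (Y, Z)
    batches = data.view(batch_size, seq_len).t().contiguous()
    print(batches.shape)                  # torch.Size([6, 4]): 4 sequences of length 6
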
11 changes: 6 additions & 5 deletions intermediate_source/mario_rl_tutorial.py
@@ -711,17 +711,18 @@ def record(self, episode, epsilon, step):
f"{datetime.datetime.now().strftime('%Y-%m-%dT%H:%M:%S'):>20}\n"
)

for metric in ["ep_rewards", "ep_lengths", "ep_avg_losses", "ep_avg_qs"]:
plt.plot(getattr(self, f"moving_avg_{metric}"))
plt.savefig(getattr(self, f"{metric}_plot"))
for metric in ["ep_lengths", "ep_avg_losses", "ep_avg_qs", "ep_rewards"]:
plt.clf()
plt.plot(getattr(self, f"moving_avg_{metric}"), label=f"moving_avg_{metric}")
plt.legend()
plt.savefig(getattr(self, f"{metric}_plot"))


######################################################################
# Let’s play!
# """""""""""""""
#
# In this example we run the training loop for 10 episodes, but for Mario to truly learn the ways of
# In this example we run the training loop for 40 episodes, but for Mario to truly learn the ways of
# his world, we suggest running the loop for at least 40,000 episodes!
#
use_cuda = torch.cuda.is_available()
@@ -735,7 +736,7 @@ def record(self, episode, epsilon, step):

logger = MetricLogger(save_dir)

episodes = 10
episodes = 40
for e in range(episodes):

state = env.reset()
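
A stripped-down sketch (metric values are hypothetical) of the plotting pattern introduced here: plt.clf() resets the figure on every pass so each metric is saved to its own clean plot, and the label/legend pair records which moving average the curve shows.

    import matplotlib.pyplot as plt

    moving_averages = {
        "ep_lengths": [110, 130, 180],
        "ep_rewards": [1.0, 2.5, 3.75],
    }

    for metric, values in moving_averages.items():
        plt.clf()  # without this, every metric would pile onto the same axes
        plt.plot(values, label=f"moving_avg_{metric}")
        plt.legend()
        plt.savefig(f"{metric}_plot.jpg")
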
14 changes: 7 additions & 7 deletions intermediate_source/tensorboard_profiler_tutorial.py
@@ -18,7 +18,7 @@
-----
To install ``torch`` and ``torchvision`` use the following command:
::
.. code-block::
pip install torch torchvision
@@ -160,23 +160,23 @@ def train(data):
#
# Install PyTorch Profiler TensorBoard Plugin.
#
# ::
# .. code-block::
#
# pip install torch_tb_profiler
#

######################################################################
# Launch the TensorBoard.
#
# ::
# .. code-block::
#
# tensorboard --logdir=./log
#

######################################################################
# Open the TensorBoard profile URL in Google Chrome browser or Microsoft Edge browser.
#
# ::
# .. code-block::
#
# http://localhost:6006/#pytorch_profiler
#
@@ -287,7 +287,7 @@ def train(data):
# In this example, we follow the "Performance Recommendation" and set ``num_workers`` as below,
# pass a different name such as ``./log/resnet18_4workers`` to ``tensorboard_trace_handler``, and run it again.
#
# ::
# .. code-block::
#
# train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)
#
@@ -316,7 +316,7 @@ def train(data):
#
# You can try it by using existing example on Azure
#
# ::
# .. code-block::
#
# pip install azure-storage-blob
# tensorboard --logdir=https://torchtbprofiler.blob.core.windows.net/torchtbprofiler/demo/memory_demo_1_10
@@ -366,7 +366,7 @@ def train(data):
#
# You can try it by using existing example on Azure:
#
# ::
# .. code-block::
#
# pip install azure-storage-blob
# tensorboard --logdir=https://torchtbprofiler.blob.core.windows.net/torchtbprofiler/demo/distributed_bert
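
For reference, a sketch (assuming the train function and train_loader defined earlier in the tutorial, and a recent torch.profiler API) of how a trace ends up in a directory such as ./log/resnet18_4workers for the TensorBoard plugin to display:

    import torch
    from torch.profiler import (
        ProfilerActivity, profile, schedule, tensorboard_trace_handler)

    with profile(
        activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
        schedule=schedule(wait=1, warmup=1, active=3, repeat=1),
        on_trace_ready=tensorboard_trace_handler('./log/resnet18_4workers'),
    ) as prof:
        for step, batch_data in enumerate(train_loader):
            if step >= 5:  # 1 wait + 1 warmup + 3 active steps are enough here
                break
            train(batch_data)
            prof.step()    # tell the profiler one training step has finished
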
8 changes: 4 additions & 4 deletions prototype_source/README.txt
@@ -1,8 +1,8 @@
Prototype Tutorials
------------------
1. distributed_rpc_profiling.rst
Profiling PyTorch RPC-Based Workloads
https://github.com/pytorch/tutorials/blob/release/1.6/prototype_source/distributed_rpc_profiling.rst
Profiling PyTorch RPC-Based Workloads
https://github.com/pytorch/tutorials/blob/main/prototype_source/distributed_rpc_profiling.rst

2. graph_mode_static_quantization_tutorial.py
Graph Mode Post Training Static Quantization in PyTorch
@@ -21,8 +21,8 @@ Prototype Tutorials
https://github.com/pytorch/tutorials/blob/main/prototype_source/torchscript_freezing.py

6. vulkan_workflow.rst
Vulkan Backend User Workflow
https://pytorch.org/tutorials/intermediate/vulkan_workflow.html
Vulkan Backend User Workflow
https://pytorch.org/tutorials/intermediate/vulkan_workflow.html

7. fx_graph_mode_ptq_static.rst
FX Graph Mode Post Training Static Quantization
27 changes: 12 additions & 15 deletions prototype_source/fx_graph_mode_quant_guide.rst
@@ -4,7 +4,7 @@
**Author**: `Jerry Zhang <https://github.com/jerryzh168>`_

FX Graph Mode Quantization requires a symbolically traceable model.
We use the FX framework (TODO: link) to convert a symbolically traceable nn.Module instance to IR,
We use the FX framework to convert a symbolically traceable nn.Module instance to IR,
and we operate on the IR to execute the quantization passes.
Please post your question about symbolically tracing your model in `PyTorch Discussion Forum <https://discuss.pytorch.org/c/quantization/17>`_

@@ -22,16 +22,19 @@ You can use any combination of these options:
b. Write your own observed and quantized submodule


####################################################################
If the code that is not symbolically traceable does not need to be quantized, we have the following two options
to run FX Graph Mode Quantization:
1.a. Symbolically trace only the code that needs to be quantized


Symbolically trace only the code that needs to be quantized
-----------------------------------------------------------------
When the whole model is not symbolically traceable but the submodule we want to quantize is
symbolically traceable, we can run quantization only on that submodule.

before:

.. code:: python
class M(nn.Module):
def forward(self, x):
x = non_traceable_code_1(x)
Expand All @@ -42,6 +45,7 @@ before:
after:

.. code:: python
class FP32Traceable(nn.Module):
def forward(self, x):
x = traceable_code(x)
@@ -69,8 +73,7 @@ Note if original model needs to be preserved, you will have to
copy it yourself before calling the quantization APIs.


#####################################################
1.b. Skip symbolically trace the non-traceable code
Skip symbolically trace the non-traceable code
---------------------------------------------------
When we have some non-traceable code in the module, and this part of code doesn’t need to be quantized,
we can factor out this part of the code into a submodule and skip symbolically trace that submodule.
@@ -134,8 +137,7 @@ quantization code:
If the code that is not symbolically traceable needs to be quantized, we have the following two options:

##########################################################
2.a Refactor your code to make it symbolically traceable
Refactor your code to make it symbolically traceable
--------------------------------------------------------
If it is easy to refactor the code and make the code symbolically traceable,
we can refactor the code and remove the use of non-traceable constructs in python.
@@ -167,15 +169,10 @@ after:
return x.permute(0, 2, 1, 3)
quantization code:

This can be combined with other approaches and the quantization code
depends on the model.



#######################################################
2.b. Write your own observed and quantized submodule
Write your own observed and quantized submodule
-----------------------------------------------------

If the non-traceable code can’t be refactored to be symbolically traceable,
@@ -207,8 +204,8 @@ non-traceable logic, wrapped in a module
class FP32NonTraceable:
...
2. Define observed version of FP32NonTraceable
2. Define observed version of
FP32NonTraceable

.. code:: python
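
A condensed sketch of option 1.a from this guide (module names and the torch.ao.quantization import paths are assumptions for a recent PyTorch release): prepare and convert only the traceable submodule, leaving the non-traceable parent untouched.

    import torch
    import torch.nn as nn
    from torch.ao.quantization import get_default_qconfig_mapping
    from torch.ao.quantization.quantize_fx import convert_fx, prepare_fx

    class FP32Traceable(nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = nn.Linear(8, 8)

        def forward(self, x):
            return self.linear(x)

    class M(nn.Module):
        def __init__(self):
            super().__init__()
            self.traceable_submodule = FP32Traceable()

        def forward(self, x):
            x = x.tolist()        # stand-in for non-traceable code
            x = torch.tensor(x)
            return self.traceable_submodule(x)

    model = M().eval()
    example_inputs = (torch.randn(2, 8),)
    qconfig_mapping = get_default_qconfig_mapping("fbgemm")

    # Quantize only the traceable part of the model.
    model.traceable_submodule = prepare_fx(
        model.traceable_submodule, qconfig_mapping, example_inputs)
    model(*example_inputs)  # calibration pass through the whole model
    model.traceable_submodule = convert_fx(model.traceable_submodule)
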
