Commit 0c698a7: Upgrade presets for PyTorch 2.0.0
saudet committed Apr 24, 2023 · 1 parent f10b1d5
Showing 220 changed files with 8,176 additions and 5,944 deletions.
6 changes: 6 additions & 0 deletions .github/actions/deploy-windows/action.yml
@@ -91,10 +91,16 @@ runs:
if "%CI_DEPLOY_PLATFORM%"=="windows-x86_64" if not "%CI_DEPLOY_NEED_CUDA%"=="" (
echo Installing CUDA, cuDNN, etc
curl -LO https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_522.06_windows.exe
curl -LO https://developer.download.nvidia.com/compute/cuda/12.0.0/local_installers/cuda_12.0.0_527.41_windows.exe
curl -LO https://developer.download.nvidia.com/compute/redist/cudnn/v8.7.0/local_installers/11.8/cudnn-windows-x86_64-8.7.0.84_cuda11-archive.zip
curl -LO http://www.winimage.com/zLibDll/zlib123dllx64.zip
cuda_11.8.0_522.06_windows.exe -s
bash -c "rm -Rf 'C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.8'"
bash -c "mv 'C:/Program Files/NVIDIA Corporation/NvToolsExt' 'C:/Program Files/NVIDIA Corporation/NvToolsExt_old'"
cuda_12.0.0_527.41_windows.exe -s
bash -c "mv 'C:/Program Files/NVIDIA Corporation/NvToolsExt_old' 'C:/Program Files/NVIDIA Corporation/NvToolsExt'"
bash -c "ls 'C:/Program Files/NVIDIA Corporation/NvToolsExt'"
unzip cudnn-windows-x86_64-8.7.0.84_cuda11-archive.zip
unzip zlib123dllx64.zip
move cudnn-windows-x86_64-8.7.0.84_cuda11-archive\bin\*.dll "%ProgramFiles%\NVIDIA GPU Computing Toolkit\CUDA\v12.0\bin"
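The added steps most likely work around the CUDA 12.0 installer no longer bundling NvToolsExt, which the libtorch build on Windows still expects: CUDA 11.8 is installed solely to obtain NvToolsExt, the 11.8 toolkit itself is deleted, and the NvToolsExt directory is stashed and restored around the 12.0 install. A condensed sketch of the pattern (the NvToolsExt rationale is an assumption, not stated in the commit):

```bash
# Restatement of the added steps above, with the path factored into a variable.
NVTX="C:/Program Files/NVIDIA Corporation/NvToolsExt"
./cuda_11.8.0_522.06_windows.exe -s                                # silent install; bundles NvToolsExt
rm -Rf "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.8"  # drop the 11.8 toolkit, keep NvToolsExt
mv "$NVTX" "${NVTX}_old"                                           # stash it before the 12.0 installer runs
./cuda_12.0.0_527.41_windows.exe -s                                # assumed not to ship NvToolsExt itself
mv "${NVTX}_old" "$NVTX"                                           # restore it for the PyTorch build
```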
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -19,7 +19,7 @@
 * Map `c10::impl::GenericDict` as returned by `c10::IValue::toGenericDict()` in presets for PyTorch
 * Introduce `linux-armhf` and `linux-x86` builds to presets for TensorFlow Lite ([pull #1268](https://github.com/bytedeco/javacpp-presets/pull/1268))
 * Add presets for LibRaw 0.20.2 ([pull #1211](https://github.com/bytedeco/javacpp-presets/pull/1211))
-* Upgrade presets for OpenCV 4.7.0, FFmpeg 6.0 ([issue bytedeco/javacv#1693](https://github.com/bytedeco/javacv/issues/1693)), HDF5 1.14.0, Hyperscan 5.4.1 ([issue #1308](https://github.com/bytedeco/javacpp-presets/issues/1308)), Spinnaker 3.0.0.118 ([pull #1313](https://github.com/bytedeco/javacpp-presets/pull/1313)), librealsense2 2.53.1 ([pull #1305](https://github.com/bytedeco/javacpp-presets/pull/1305)), MKL 2023.1, DNNL 2.7.3, OpenBLAS 0.3.23, ARPACK-NG 3.9.0, CPython 3.11.3, NumPy 1.24.2, SciPy 1.10.1, LLVM 16.0.1, Leptonica 1.83.0, Tesseract 5.3.1, CUDA 12.0.0, cuDNN 8.7.0, NCCL 2.16.2, OpenCL 3.0.13, NVIDIA Video Codec SDK 12.0.16, PyTorch 1.13.1, TensorFlow Lite 2.12.0, TensorRT 8.6.0.12, Triton Inference Server 2.32.0, DepthAI 2.21.2, ONNX Runtime 1.14.1, TVM 0.11.1, Bullet Physics SDK 3.25, and their dependencies
+* Upgrade presets for OpenCV 4.7.0, FFmpeg 6.0 ([issue bytedeco/javacv#1693](https://github.com/bytedeco/javacv/issues/1693)), HDF5 1.14.0, Hyperscan 5.4.1 ([issue #1308](https://github.com/bytedeco/javacpp-presets/issues/1308)), Spinnaker 3.0.0.118 ([pull #1313](https://github.com/bytedeco/javacpp-presets/pull/1313)), librealsense2 2.53.1 ([pull #1305](https://github.com/bytedeco/javacpp-presets/pull/1305)), MKL 2023.1, DNNL 2.7.3, OpenBLAS 0.3.23, ARPACK-NG 3.9.0, CPython 3.11.3, NumPy 1.24.2, SciPy 1.10.1, LLVM 16.0.1, Leptonica 1.83.0, Tesseract 5.3.1, CUDA 12.0.0, cuDNN 8.7.0, NCCL 2.16.2, OpenCL 3.0.13, NVIDIA Video Codec SDK 12.0.16, PyTorch 2.0.0, TensorFlow Lite 2.12.0, TensorRT 8.6.0.12, Triton Inference Server 2.32.0, DepthAI 2.21.2, ONNX Runtime 1.14.1, TVM 0.11.1, Bullet Physics SDK 3.25, and their dependencies

 ### November 2, 2022 version 1.5.8
 * Fix mapping of `torch::ExpandingArrayWithOptionalElem` in presets for PyTorch ([issue #1250](https://github.com/bytedeco/javacpp-presets/issues/1250))
2 changes: 1 addition & 1 deletion README.md
@@ -221,7 +221,7 @@ Each child module in turn relies by default on the included [`cppbuild.sh` scrip
 * NVIDIA Video Codec SDK 12.0.x https://developer.nvidia.com/nvidia-video-codec-sdk
 * OpenCL 3.0.x https://github.com/KhronosGroup/OpenCL-ICD-Loader
 * MXNet 1.9.x https://github.com/apache/incubator-mxnet
-* PyTorch 1.13.x https://github.com/pytorch/pytorch
+* PyTorch 2.0.x https://github.com/pytorch/pytorch
 * TensorFlow 1.15.x https://github.com/tensorflow/tensorflow
 * TensorFlow Lite 2.12.x https://github.com/tensorflow/tensorflow
 * TensorRT 8.x https://developer.nvidia.com/tensorrt
2 changes: 1 addition & 1 deletion platform/pom.xml
@@ -291,7 +291,7 @@
     <dependency>
       <groupId>org.bytedeco</groupId>
       <artifactId>pytorch-platform</artifactId>
-      <version>1.13.1-${project.version}</version>
+      <version>2.0.0-${project.version}</version>
     </dependency>
     <dependency>
       <groupId>org.bytedeco</groupId>
10 changes: 5 additions & 5 deletions pytorch/README.md
@@ -9,7 +9,7 @@ Introduction
 ------------
 This directory contains the JavaCPP Presets module for:

- * PyTorch 1.13.1 https://pytorch.org/
+ * PyTorch 2.0.0 https://pytorch.org/

 Please refer to the parent README.md file for more detailed information about the JavaCPP Presets.

@@ -48,28 +48,28 @@ We can use [Maven 3](http://maven.apache.org/) to download and install automatic
     <dependency>
       <groupId>org.bytedeco</groupId>
       <artifactId>pytorch-platform</artifactId>
-      <version>1.13.1-1.5.9-SNAPSHOT</version>
+      <version>2.0.0-1.5.9-SNAPSHOT</version>
     </dependency>

     <!-- Additional dependencies required to use CUDA, cuDNN, and NCCL -->
     <dependency>
       <groupId>org.bytedeco</groupId>
       <artifactId>pytorch-platform-gpu</artifactId>
-      <version>1.13.1-1.5.9-SNAPSHOT</version>
+      <version>2.0.0-1.5.9-SNAPSHOT</version>
     </dependency>

     <!-- Additional dependencies to use bundled CUDA, cuDNN, and NCCL -->
     <dependency>
       <groupId>org.bytedeco</groupId>
       <artifactId>cuda-platform-redist</artifactId>
-      <version>11.8-8.6-1.5.9-SNAPSHOT</version>
+      <version>12.0-8.7-1.5.9-SNAPSHOT</version>
     </dependency>

     <!-- Additional dependencies to use bundled full version of MKL -->
     <dependency>
       <groupId>org.bytedeco</groupId>
       <artifactId>mkl-platform-redist</artifactId>
-      <version>2022.2-1.5.9-SNAPSHOT</version>
+      <version>2023.1-1.5.9-SNAPSHOT</version>
     </dependency>
   </dependencies>
   <build>
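With these coordinates in place, a small smoke test confirms that the upgraded 2.0.0 natives resolve and load. A minimal sketch, assuming only the `pytorch-platform` dependency above (the class name is hypothetical):

```java
import org.bytedeco.pytorch.Tensor;
import static org.bytedeco.pytorch.global.torch.*;

public class SmokeTest {
    public static void main(String[] args) {
        Tensor a = ones(2, 3);     // 2x3 tensor filled with 1.0
        Tensor b = rand(2, 3);     // uniform random values in [0, 1)
        Tensor c = a.add(b);       // element-wise addition on the native side
        System.out.println("dims=" + c.dim() + ", elements=" + c.numel());
    }
}
```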
6 changes: 4 additions & 2 deletions pytorch/cppbuild.sh
@@ -13,22 +13,24 @@ export CUDA_HOME="/usr/local/cuda"
 export CUDNN_HOME="/usr/local/cuda"
 export MAX_JOBS=$MAKEJ
 export USE_CUDA=0
+export USE_CUDNN=0
 export USE_NUMPY=0
 export USE_OPENMP=1
 export USE_SYSTEM_NCCL=1
 if [[ "$EXTENSION" == *gpu ]]; then
     export USE_CUDA=1
+    export USE_CUDNN=1
     export USE_FAST_NVCC=0
     export CUDA_SEPARABLE_COMPILATION=OFF
-    export TORCH_CUDA_ARCH_LIST="3.5+PTX"
+    export TORCH_CUDA_ARCH_LIST="5.0+PTX"
 fi

 export PYTHON_BIN_PATH=$(which python3)
 if [[ $PLATFORM == windows* ]]; then
     export PYTHON_BIN_PATH=$(which python.exe)
 fi

-PYTORCH_VERSION=1.13.1
+PYTORCH_VERSION=2.0.0

 mkdir -p "$PLATFORM$EXTENSION"
 cd "$PLATFORM$EXTENSION"
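The `TORCH_CUDA_ARCH_LIST` bump from `3.5+PTX` to `5.0+PTX` is consistent with CUDA 12.0 dropping Kepler (sm_35) support. As elsewhere in javacpp-presets, this script is normally driven from the project root; a typical invocation might look like the following (a sketch assuming the repository's usual `-extension` convention):

```bash
# Build the CPU-only PyTorch presets, then the CUDA/cuDNN-enabled variant.
bash cppbuild.sh install pytorch
bash cppbuild.sh -extension -gpu install pytorch   # takes the USE_CUDA=1 branch above
```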
2 changes: 1 addition & 1 deletion pytorch/platform/gpu/pom.xml
@@ -12,7 +12,7 @@

   <groupId>org.bytedeco</groupId>
   <artifactId>pytorch-platform-gpu</artifactId>
-  <version>1.13.1-${project.parent.version}</version>
+  <version>2.0.0-${project.parent.version}</version>
   <name>JavaCPP Presets Platform GPU for PyTorch</name>

   <properties>
2 changes: 1 addition & 1 deletion pytorch/platform/pom.xml
@@ -12,7 +12,7 @@

   <groupId>org.bytedeco</groupId>
   <artifactId>pytorch-platform</artifactId>
-  <version>1.13.1-${project.parent.version}</version>
+  <version>2.0.0-${project.parent.version}</version>
   <name>JavaCPP Presets Platform for PyTorch</name>

   <properties>
2 changes: 1 addition & 1 deletion pytorch/pom.xml
@@ -11,7 +11,7 @@

   <groupId>org.bytedeco</groupId>
   <artifactId>pytorch</artifactId>
-  <version>1.13.1-${project.parent.version}</version>
+  <version>2.0.0-${project.parent.version}</version>
   <name>JavaCPP Presets for PyTorch</name>

   <dependencies>
8 changes: 4 additions & 4 deletions pytorch/samples/pom.xml
@@ -12,28 +12,28 @@
     <dependency>
       <groupId>org.bytedeco</groupId>
       <artifactId>pytorch-platform</artifactId>
-      <version>1.13.1-1.5.9-SNAPSHOT</version>
+      <version>2.0.0-1.5.9-SNAPSHOT</version>
     </dependency>

     <!-- Additional dependencies required to use CUDA, cuDNN, and NCCL -->
     <dependency>
       <groupId>org.bytedeco</groupId>
       <artifactId>pytorch-platform-gpu</artifactId>
-      <version>1.13.1-1.5.9-SNAPSHOT</version>
+      <version>2.0.0-1.5.9-SNAPSHOT</version>
     </dependency>

     <!-- Additional dependencies to use bundled CUDA, cuDNN, and NCCL -->
     <dependency>
       <groupId>org.bytedeco</groupId>
       <artifactId>cuda-platform-redist</artifactId>
-      <version>11.8-8.6-1.5.9-SNAPSHOT</version>
+      <version>12.0-8.7-1.5.9-SNAPSHOT</version>
     </dependency>

     <!-- Additional dependencies to use bundled full version of MKL -->
     <dependency>
       <groupId>org.bytedeco</groupId>
       <artifactId>mkl-platform-redist</artifactId>
-      <version>2022.2-1.5.9-SNAPSHOT</version>
+      <version>2023.1-1.5.9-SNAPSHOT</version>
     </dependency>
   </dependencies>
   <build>
14 changes: 4 additions & 10 deletions pytorch/src/gen/java/org/bytedeco/pytorch/Adagrad.java
@@ -33,16 +33,10 @@ public Adagrad(
     private native void allocate(
           @ByVal OptimizerParamGroupVector param_groups);

-    public Adagrad(
-          @Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params,
-          @ByVal(nullValue = "torch::optim::AdagradOptions{}") AdagradOptions defaults) { super((Pointer)null); allocate(params, defaults); }
-    private native void allocate(
-          @Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params,
-          @ByVal(nullValue = "torch::optim::AdagradOptions{}") AdagradOptions defaults);
-    public Adagrad(
-          @Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params) { super((Pointer)null); allocate(params); }
-    private native void allocate(
-          @Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params);
+    public Adagrad(@Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params, @ByVal(nullValue = "torch::optim::AdagradOptions{}") AdagradOptions defaults) { super((Pointer)null); allocate(params, defaults); }
+    private native void allocate(@Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params, @ByVal(nullValue = "torch::optim::AdagradOptions{}") AdagradOptions defaults);
+    public Adagrad(@Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params) { super((Pointer)null); allocate(params); }
+    private native void allocate(@Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params);

     public native @ByVal Tensor step(@ByVal(nullValue = "torch::optim::Optimizer::LossClosure(nullptr)") LossClosure closure);
     public native @ByVal Tensor step();
14 changes: 4 additions & 10 deletions pytorch/src/gen/java/org/bytedeco/pytorch/Adam.java
@@ -32,16 +32,10 @@ public Adam(
           @ByVal OptimizerParamGroupVector param_groups) { super((Pointer)null); allocate(param_groups); }
     private native void allocate(
           @ByVal OptimizerParamGroupVector param_groups);
-    public Adam(
-          @Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params,
-          @ByVal(nullValue = "torch::optim::AdamOptions{}") AdamOptions defaults) { super((Pointer)null); allocate(params, defaults); }
-    private native void allocate(
-          @Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params,
-          @ByVal(nullValue = "torch::optim::AdamOptions{}") AdamOptions defaults);
-    public Adam(
-          @Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params) { super((Pointer)null); allocate(params); }
-    private native void allocate(
-          @Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params);
+    public Adam(@Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params, @ByVal(nullValue = "torch::optim::AdamOptions{}") AdamOptions defaults) { super((Pointer)null); allocate(params, defaults); }
+    private native void allocate(@Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params, @ByVal(nullValue = "torch::optim::AdamOptions{}") AdamOptions defaults);
+    public Adam(@Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params) { super((Pointer)null); allocate(params); }
+    private native void allocate(@Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params);

     public native @ByVal Tensor step(@ByVal(nullValue = "torch::optim::Optimizer::LossClosure(nullptr)") LossClosure closure);
     public native @ByVal Tensor step();
14 changes: 4 additions & 10 deletions pytorch/src/gen/java/org/bytedeco/pytorch/AdamW.java
@@ -32,16 +32,10 @@ public AdamW(
           @ByVal OptimizerParamGroupVector param_groups) { super((Pointer)null); allocate(param_groups); }
     private native void allocate(
           @ByVal OptimizerParamGroupVector param_groups);
-    public AdamW(
-          @Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params,
-          @ByVal(nullValue = "torch::optim::AdamWOptions{}") AdamWOptions defaults) { super((Pointer)null); allocate(params, defaults); }
-    private native void allocate(
-          @Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params,
-          @ByVal(nullValue = "torch::optim::AdamWOptions{}") AdamWOptions defaults);
-    public AdamW(
-          @Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params) { super((Pointer)null); allocate(params); }
-    private native void allocate(
-          @Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params);
+    public AdamW(@Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params, @ByVal(nullValue = "torch::optim::AdamWOptions{}") AdamWOptions defaults) { super((Pointer)null); allocate(params, defaults); }
+    private native void allocate(@Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params, @ByVal(nullValue = "torch::optim::AdamWOptions{}") AdamWOptions defaults);
+    public AdamW(@Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params) { super((Pointer)null); allocate(params); }
+    private native void allocate(@Cast({"", "std::vector<at::Tensor>"}) @StdMove TensorVector params);

     public native @ByVal Tensor step(@ByVal(nullValue = "torch::optim::Optimizer::LossClosure(nullptr)") LossClosure closure);
     public native @ByVal Tensor step();
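Adagrad, Adam, and AdamW all receive the same change: the generated constructors are collapsed onto single lines with no change in signatures. As a usage sketch, the two-argument constructor above corresponds to code like the following (the tensor setup is illustrative only):

```java
import org.bytedeco.pytorch.*;
import static org.bytedeco.pytorch.global.torch.*;

public class OptimizerExample {
    public static void main(String[] args) {
        // A parameter tensor that tracks gradients.
        Tensor w = randn(10).set_requires_grad(true);
        // TensorVector maps the std::vector<at::Tensor> seen in the @Cast above.
        AdamW optimizer = new AdamW(new TensorVector(w), new AdamWOptions(1e-3));

        Tensor loss = w.mul(w).sum();  // a toy scalar loss
        optimizer.zero_grad();
        loss.backward();
        optimizer.step();              // the step() binding shown above
    }
}
```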
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dImpl.java
@@ -29,17 +29,16 @@
  * <pre>{@code
  * AdaptiveAvgPool1d model(AdaptiveAvgPool1dOptions(5));
  * }</pre> */
-// NOLINTNEXTLINE(bugprone-exception-escape)
 @Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class AdaptiveAvgPool1dImpl extends AdaptiveAvgPool1dImplBase {
     static { Loader.load(); }


     public AdaptiveAvgPool1dImpl(@ByVal @Cast("torch::ExpandingArray<1>*") LongPointer output_size) { super((Pointer)null); allocate(output_size); }
-    private native void allocate(@ByVal @Cast("torch::ExpandingArray<1>*") LongPointer output_size);
+    @NoDeallocator private native void allocate(@ByVal @Cast("torch::ExpandingArray<1>*") LongPointer output_size);
     public AdaptiveAvgPool1dImpl(
           @Const @ByRef AdaptiveAvgPool1dOptions options_) { super((Pointer)null); allocate(options_); }
-    private native void allocate(
+    @NoDeallocator private native void allocate(
           @Const @ByRef AdaptiveAvgPool1dOptions options_);
     /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
     public AdaptiveAvgPool1dImpl(Pointer p) { super(p); }
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dImplBase.java
@@ -26,10 +26,10 @@ public class AdaptiveAvgPool1dImplBase extends AdaptiveAvgPool1dImplCloneable {
     public AdaptiveAvgPool1dImplBase(Pointer p) { super(p); }

     public AdaptiveAvgPool1dImplBase(@ByVal @Cast("torch::ExpandingArray<1>*") LongPointer output_size) { super((Pointer)null); allocate(output_size); }
-    private native void allocate(@ByVal @Cast("torch::ExpandingArray<1>*") LongPointer output_size);
+    @NoDeallocator private native void allocate(@ByVal @Cast("torch::ExpandingArray<1>*") LongPointer output_size);
     public AdaptiveAvgPool1dImplBase(
           @Const @ByRef AdaptiveAvgPool1dOptions options_) { super((Pointer)null); allocate(options_); }
-    private native void allocate(
+    @NoDeallocator private native void allocate(
           @Const @ByRef AdaptiveAvgPool1dOptions options_);

     public native void reset();
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dImpl.java
@@ -29,17 +29,16 @@
  * <pre>{@code
  * AdaptiveAvgPool2d model(AdaptiveAvgPool2dOptions({3, 2}));
  * }</pre> */
-// NOLINTNEXTLINE(bugprone-exception-escape)
 @Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class AdaptiveAvgPool2dImpl extends AdaptiveAvgPool2dImplBase {
     static { Loader.load(); }


     public AdaptiveAvgPool2dImpl(@ByVal @Cast("torch::ExpandingArrayWithOptionalElem<2>*") LongOptional output_size) { super((Pointer)null); allocate(output_size); }
-    private native void allocate(@ByVal @Cast("torch::ExpandingArrayWithOptionalElem<2>*") LongOptional output_size);
+    @NoDeallocator private native void allocate(@ByVal @Cast("torch::ExpandingArrayWithOptionalElem<2>*") LongOptional output_size);
     public AdaptiveAvgPool2dImpl(
           @Const @ByRef AdaptiveAvgPool2dOptions options_) { super((Pointer)null); allocate(options_); }
-    private native void allocate(
+    @NoDeallocator private native void allocate(
           @Const @ByRef AdaptiveAvgPool2dOptions options_);
     /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
     public AdaptiveAvgPool2dImpl(Pointer p) { super(p); }
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dImplBase.java
@@ -22,10 +22,10 @@ public class AdaptiveAvgPool2dImplBase extends AdaptiveAvgPool2dImplCloneable {
     public AdaptiveAvgPool2dImplBase(Pointer p) { super(p); }

     public AdaptiveAvgPool2dImplBase(@ByVal @Cast("torch::ExpandingArrayWithOptionalElem<2>*") LongOptional output_size) { super((Pointer)null); allocate(output_size); }
-    private native void allocate(@ByVal @Cast("torch::ExpandingArrayWithOptionalElem<2>*") LongOptional output_size);
+    @NoDeallocator private native void allocate(@ByVal @Cast("torch::ExpandingArrayWithOptionalElem<2>*") LongOptional output_size);
     public AdaptiveAvgPool2dImplBase(
           @Const @ByRef AdaptiveAvgPool2dOptions options_) { super((Pointer)null); allocate(options_); }
-    private native void allocate(
+    @NoDeallocator private native void allocate(
           @Const @ByRef AdaptiveAvgPool2dOptions options_);

     public native void reset();
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dImpl.java
@@ -29,17 +29,16 @@
  * <pre>{@code
  * AdaptiveAvgPool3d model(AdaptiveAvgPool3dOptions(3));
  * }</pre> */
-// NOLINTNEXTLINE(bugprone-exception-escape)
 @Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
 public class AdaptiveAvgPool3dImpl extends AdaptiveAvgPool3dImplBase {
     static { Loader.load(); }


     public AdaptiveAvgPool3dImpl(@ByVal @Cast("torch::ExpandingArrayWithOptionalElem<3>*") LongOptional output_size) { super((Pointer)null); allocate(output_size); }
-    private native void allocate(@ByVal @Cast("torch::ExpandingArrayWithOptionalElem<3>*") LongOptional output_size);
+    @NoDeallocator private native void allocate(@ByVal @Cast("torch::ExpandingArrayWithOptionalElem<3>*") LongOptional output_size);
     public AdaptiveAvgPool3dImpl(
           @Const @ByRef AdaptiveAvgPool3dOptions options_) { super((Pointer)null); allocate(options_); }
-    private native void allocate(
+    @NoDeallocator private native void allocate(
           @Const @ByRef AdaptiveAvgPool3dOptions options_);
     /** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
     public AdaptiveAvgPool3dImpl(Pointer p) { super(p); }
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dImplBase.java
@@ -22,10 +22,10 @@ public class AdaptiveAvgPool3dImplBase extends AdaptiveAvgPool3dImplCloneable {
     public AdaptiveAvgPool3dImplBase(Pointer p) { super(p); }

     public AdaptiveAvgPool3dImplBase(@ByVal @Cast("torch::ExpandingArrayWithOptionalElem<3>*") LongOptional output_size) { super((Pointer)null); allocate(output_size); }
-    private native void allocate(@ByVal @Cast("torch::ExpandingArrayWithOptionalElem<3>*") LongOptional output_size);
+    @NoDeallocator private native void allocate(@ByVal @Cast("torch::ExpandingArrayWithOptionalElem<3>*") LongOptional output_size);
     public AdaptiveAvgPool3dImplBase(
           @Const @ByRef AdaptiveAvgPool3dOptions options_) { super((Pointer)null); allocate(options_); }
-    private native void allocate(
+    @NoDeallocator private native void allocate(
           @Const @ByRef AdaptiveAvgPool3dOptions options_);

     public native void reset();
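All six pooling classes follow the same pattern: the `allocate` methods gain `@NoDeallocator`, presumably leaving the module's lifetime to libtorch's ownership rather than a Java-side deallocator. Construction and use are otherwise unchanged; a minimal sketch for the 1d variant (input shape chosen arbitrarily):

```java
import org.bytedeco.pytorch.*;
import static org.bytedeco.pytorch.global.torch.*;

public class PoolExample {
    public static void main(String[] args) {
        // ExpandingArray<1> is passed as a one-element LongPointer: target length 5.
        AdaptiveAvgPool1dImpl pool = new AdaptiveAvgPool1dImpl(new LongPointer(1).put(0, 5));
        Tensor input = randn(1, 2, 10);       // N x C x L
        Tensor output = pool.forward(input);  // adaptively averaged to 1 x 2 x 5
        System.out.println(output.size(2));   // prints 5
    }
}
```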
(The remaining changed files are not shown here.)
