
Commit

Upgrade presets for PyTorch 1.9.0
saudet committed Jun 18, 2021
1 parent 9d8265f commit f1ea76f
Showing 388 changed files with 7,234 additions and 3,888 deletions.
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -9,7 +9,7 @@
* Build FFmpeg with libxml2, enabling support for DASH demuxing ([pull #1033](https://github.com/bytedeco/javacpp-presets/pull/1033)), and libsrt for SRT protocol support ([pull #1036](https://github.com/bytedeco/javacpp-presets/pull/1036))
* Add `@MemberGetter` for `av_log_default_callback()` in presets for FFmpeg ([issue #812](https://github.com/bytedeco/javacpp-presets/issues/812))
* Include `cudaGL.h` and `cuda_gl_interop.h` header files in presets for CUDA ([pull #1027](https://github.com/bytedeco/javacpp-presets/pull/1027))
* Add presets for libffi 3.3 ([issue #833](https://github.com/bytedeco/javacpp-presets/issues/833)), NVIDIA Video Codec SDK 11.0.10 ([pull #1020](https://github.com/bytedeco/javacpp-presets/pull/1020)), PyTorch 1.8.1 ([issue #623](https://github.com/bytedeco/javacpp-presets/issues/623)), TensorFlow Lite 2.5.0, DepthAI 2.5.0, ModSecurity ([pull #1012](https://github.com/bytedeco/javacpp-presets/pull/1012))
* Add presets for libffi 3.3 ([issue #833](https://github.com/bytedeco/javacpp-presets/issues/833)), NVIDIA Video Codec SDK 11.0.10 ([pull #1020](https://github.com/bytedeco/javacpp-presets/pull/1020)), PyTorch 1.9.0 ([issue #623](https://github.com/bytedeco/javacpp-presets/issues/623)), TensorFlow Lite 2.5.0, DepthAI 2.5.0, ModSecurity ([pull #1012](https://github.com/bytedeco/javacpp-presets/pull/1012))
* Map `std::vector<cv::Range>` to `RangeVector` in `opencv_core.Mat` for convenience ([issue bytedeco/javacv#1607](https://github.com/bytedeco/javacv/issues/1607))
* Include `genericaliasobject.h`, `context.h`, `tracemalloc.h`, and `datetime.h` for CPython ([issue #1017](https://github.com/bytedeco/javacpp-presets/issues/1017))
* Add samples using LLVM modules to deal with bitcode and object files ([pull #1016](https://github.com/bytedeco/javacpp-presets/pull/1016))
2 changes: 1 addition & 1 deletion README.md
@@ -210,7 +210,7 @@ Each child module in turn relies by default on the included [`cppbuild.sh` scrip
* NVIDIA Video Codec SDK 11.0.x https://developer.nvidia.com/nvidia-video-codec-sdk
* OpenCL 3.0 https://github.com/KhronosGroup/OpenCL-ICD-Loader
* MXNet 1.8.0 https://github.com/apache/incubator-mxnet
* PyTorch 1.8.x https://github.com/pytorch/pytorch
* PyTorch 1.9.x https://github.com/pytorch/pytorch
* TensorFlow 1.15.x https://github.com/tensorflow/tensorflow
* TensorFlow Lite 2.5.x https://github.com/tensorflow/tensorflow
* TensorRT 7.x https://developer.nvidia.com/tensorrt
2 changes: 1 addition & 1 deletion platform/pom.xml
@@ -276,7 +276,7 @@
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform</artifactId>
<version>1.8.1-${project.version}</version>
<version>1.9.0-${project.version}</version>
</dependency>
<dependency>
<groupId>org.bytedeco</groupId>
6 changes: 3 additions & 3 deletions pytorch/README.md
@@ -9,7 +9,7 @@ Introduction
------------
This directory contains the JavaCPP Presets module for:

* PyTorch 1.8.1 https://pytorch.org/
* PyTorch 1.9.0 https://pytorch.org/

Please refer to the parent README.md file for more detailed information about the JavaCPP Presets.

@@ -46,14 +46,14 @@ We can use [Maven 3](http://maven.apache.org/) to download and install automatic
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform</artifactId>
<version>1.8.1-1.5.6-SNAPSHOT</version>
<version>1.9.0-1.5.6-SNAPSHOT</version>
</dependency>

<!-- Additional dependencies required to use CUDA, cuDNN, and NCCL -->
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform-gpu</artifactId>
<version>1.8.1-1.5.6-SNAPSHOT</version>
<version>1.9.0-1.5.6-SNAPSHOT</version>
</dependency>

<!-- Additional dependencies to use bundled CUDA, cuDNN, and NCCL -->
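After updating these dependency coordinates for 1.9.0, a quick way to confirm that the artifacts resolve and that their native libraries unpack is to load them through JavaCPP's Loader. A minimal, hedged sketch; it relies only on `Loader.load(Class)` and the generated `org.bytedeco.pytorch.global.torch` entry point of these presets:

```java
import org.bytedeco.javacpp.Loader;

public class LoadPyTorch {
    public static void main(String[] args) {
        // Extracts and loads the bundled libtorch binaries, returning the path that was loaded.
        String path = Loader.load(org.bytedeco.pytorch.global.torch.class);
        System.out.println("PyTorch 1.9.0 native libraries loaded from: " + path);
    }
}
```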
2 changes: 1 addition & 1 deletion pytorch/cppbuild.sh
@@ -19,7 +19,7 @@ if [[ "$EXTENSION" == *gpu ]]; then
export TORCH_CUDA_ARCH_LIST="3.5+PTX"
fi

PYTORCH_VERSION=1.8.1
PYTORCH_VERSION=1.9.0

mkdir -p "$PLATFORM$EXTENSION"
cd "$PLATFORM$EXTENSION"
2 changes: 1 addition & 1 deletion pytorch/platform/gpu/pom.xml
@@ -12,7 +12,7 @@

<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform-gpu</artifactId>
<version>1.8.1-${project.parent.version}</version>
<version>1.9.0-${project.parent.version}</version>
<name>JavaCPP Presets Platform GPU for PyTorch</name>

<properties>
2 changes: 1 addition & 1 deletion pytorch/platform/pom.xml
@@ -12,7 +12,7 @@

<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform</artifactId>
<version>1.8.1-${project.parent.version}</version>
<version>1.9.0-${project.parent.version}</version>
<name>JavaCPP Presets Platform for PyTorch</name>

<properties>
2 changes: 1 addition & 1 deletion pytorch/pom.xml
@@ -11,7 +11,7 @@

<groupId>org.bytedeco</groupId>
<artifactId>pytorch</artifactId>
<version>1.8.1-${project.parent.version}</version>
<version>1.9.0-${project.parent.version}</version>
<name>JavaCPP Presets for PyTorch</name>

<dependencies>
4 changes: 2 additions & 2 deletions pytorch/samples/pom.xml
@@ -12,14 +12,14 @@
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform</artifactId>
<version>1.8.1-1.5.6-SNAPSHOT</version>
<version>1.9.0-1.5.6-SNAPSHOT</version>
</dependency>

<!-- Additional dependencies required to use CUDA, cuDNN, and NCCL -->
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform-gpu</artifactId>
<version>1.8.1-1.5.6-SNAPSHOT</version>
<version>1.9.0-1.5.6-SNAPSHOT</version>
</dependency>

<!-- Additional dependencies to use bundled CUDA, cuDNN, and NCCL -->
3 changes: 3 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/AdagradOptions.java
@@ -34,4 +34,7 @@ public class AdagradOptions extends OptimizerCloneableAdagradOptions {



// NOLINTNEXTLINE(modernize-use-override)
public native double get_lr();
public native void set_lr(double lr);
}
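The `get_lr()`/`set_lr(double)` accessors added above expose the learning rate stored in the native options object. A hedged usage sketch; the `AdagradOptions(double)` constructor is assumed from the upstream `torch::optim::AdagradOptions` API and is not part of this diff:

```java
import org.bytedeco.pytorch.AdagradOptions;

public class AdagradLrExample {
    public static void main(String[] args) {
        AdagradOptions options = new AdagradOptions(0.1); // assumed constructor mirroring libtorch
        options.set_lr(options.get_lr() * 0.5);           // halve the learning rate in place
        System.out.println("lr = " + options.get_lr());   // prints 0.05
    }
}
```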
pytorch/src/gen/java/org/bytedeco/pytorch/AdagradParamState.java
@@ -39,4 +39,5 @@ public class AdagradParamState extends OptimizerCloneableAdagradParamState {



// NOLINTNEXTLINE(modernize-use-override)
}
3 changes: 3 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/AdamOptions.java
@@ -34,4 +34,7 @@ public class AdamOptions extends OptimizerCloneableAdamOptions {



// NOLINTNEXTLINE(modernize-use-override)
public native double get_lr();
public native void set_lr(double lr);
}
pytorch/src/gen/java/org/bytedeco/pytorch/AdamParamState.java
@@ -41,4 +41,5 @@ public class AdamParamState extends OptimizerCloneableAdamParamState {



// NOLINTNEXTLINE(modernize-use-override)
}
3 changes: 3 additions & 0 deletions pytorch/src/gen/java/org/bytedeco/pytorch/AdamWOptions.java
@@ -34,4 +34,7 @@ public class AdamWOptions extends OptimizerCloneableAdamWOptions {



// NOLINTNEXTLINE(modernize-use-override)
public native double get_lr();
public native void set_lr(double lr);
}
pytorch/src/gen/java/org/bytedeco/pytorch/AdamWParamState.java
@@ -41,4 +41,5 @@ public class AdamWParamState extends OptimizerCloneableAdamWParamState {



// NOLINTNEXTLINE(modernize-use-override)
}
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dImpl.java
@@ -29,6 +29,7 @@
* <pre>{@code
* AdaptiveAvgPool1d model(AdaptiveAvgPool1dOptions(5));
* }</pre> */
// NOLINTNEXTLINE(bugprone-exception-escape)
@Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
public class AdaptiveAvgPool1dImpl extends AdaptiveAvgPool1dImplBase {
static { Loader.load(); }
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool1dImplCloneable.java
@@ -25,6 +25,8 @@ public class AdaptiveAvgPool1dImplCloneable extends Module {
/** {@code reset()} must perform initialization of all members with reference
* semantics, most importantly parameters, buffers and submodules. */
public native void reset();
@Override public Module asModule() { return asModule(this); }
@Namespace public static native @Name("static_cast<torch::nn::Module*>") Module asModule(AdaptiveAvgPool1dImplCloneable module);

/** Performs a recursive "deep copy" of the {@code Module}, such that all parameters
* and submodules in the cloned module are different from those in the
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dImpl.java
@@ -29,6 +29,7 @@
* <pre>{@code
* AdaptiveAvgPool2d model(AdaptiveAvgPool2dOptions({3, 2}));
* }</pre> */
// NOLINTNEXTLINE(bugprone-exception-escape)
@Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
public class AdaptiveAvgPool2dImpl extends AdaptiveAvgPool2dImplBase {
static { Loader.load(); }
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool2dImplCloneable.java
@@ -25,6 +25,8 @@ public class AdaptiveAvgPool2dImplCloneable extends Module {
/** {@code reset()} must perform initialization of all members with reference
* semantics, most importantly parameters, buffers and submodules. */
public native void reset();
@Override public Module asModule() { return asModule(this); }
@Namespace public static native @Name("static_cast<torch::nn::Module*>") Module asModule(AdaptiveAvgPool2dImplCloneable module);

/** Performs a recursive "deep copy" of the {@code Module}, such that all parameters
* and submodules in the cloned module are different from those in the
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dImpl.java
@@ -29,6 +29,7 @@
* <pre>{@code
* AdaptiveAvgPool3d model(AdaptiveAvgPool3dOptions(3));
* }</pre> */
// NOLINTNEXTLINE(bugprone-exception-escape)
@Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
public class AdaptiveAvgPool3dImpl extends AdaptiveAvgPool3dImplBase {
static { Loader.load(); }
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveAvgPool3dImplCloneable.java
@@ -25,6 +25,8 @@ public class AdaptiveAvgPool3dImplCloneable extends Module {
/** {@code reset()} must perform initialization of all members with reference
* semantics, most importantly parameters, buffers and submodules. */
public native void reset();
@Override public Module asModule() { return asModule(this); }
@Namespace public static native @Name("static_cast<torch::nn::Module*>") Module asModule(AdaptiveAvgPool3dImplCloneable module);

/** Performs a recursive "deep copy" of the {@code Module}, such that all parameters
* and submodules in the cloned module are different from those in the
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveLogSoftmaxWithLossImpl.java
@@ -31,6 +31,7 @@
* <pre>{@code
* AdaptiveLogSoftmaxWithLoss model(AdaptiveLogSoftmaxWithLossOptions(8, 10, {4, 8}).div_value(2.).head_bias(true));
* }</pre> */
// NOLINTNEXTLINE(bugprone-exception-escape)
@Namespace("torch::nn") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
public class AdaptiveLogSoftmaxWithLossImpl extends AdaptiveLogSoftmaxWithLossImplCloneable {
static { Loader.load(); }
@@ -39,7 +40,7 @@ public class AdaptiveLogSoftmaxWithLossImpl extends AdaptiveLogSoftmaxWithLossIm

public AdaptiveLogSoftmaxWithLossImpl(@Cast("int64_t") long in_features, @Cast("int64_t") long n_classes, @ByVal @Cast("std::vector<int64_t>*") LongVector cutoffs) { super((Pointer)null); allocate(in_features, n_classes, cutoffs); }
@NoDeallocator private native void allocate(@Cast("int64_t") long in_features, @Cast("int64_t") long n_classes, @ByVal @Cast("std::vector<int64_t>*") LongVector cutoffs);

public AdaptiveLogSoftmaxWithLossImpl(@ByVal AdaptiveLogSoftmaxWithLossOptions options_) { super((Pointer)null); allocate(options_); }
@NoDeallocator private native void allocate(@ByVal AdaptiveLogSoftmaxWithLossOptions options_);

@@ -64,12 +65,12 @@ public class AdaptiveLogSoftmaxWithLossImpl extends AdaptiveLogSoftmaxWithLossIm
/** The options with which this {@code Module} was constructed */
public native @ByRef AdaptiveLogSoftmaxWithLossOptions options(); public native AdaptiveLogSoftmaxWithLossImpl options(AdaptiveLogSoftmaxWithLossOptions setter);

/** Cutoffs used to assign targets to their buckets. It should be an ordered Sequence
/** Cutoffs used to assign targets to their buckets. It should be an ordered Sequence
* of integers sorted in the increasing order */
public native @ByRef @Cast("std::vector<int64_t>*") LongVector cutoffs(); public native AdaptiveLogSoftmaxWithLossImpl cutoffs(LongVector setter);

public native @Cast("int64_t") long shortlist_size(); public native AdaptiveLogSoftmaxWithLossImpl shortlist_size(long setter);

/** Number of clusters */
public native @Cast("int64_t") long n_clusters(); public native AdaptiveLogSoftmaxWithLossImpl n_clusters(long setter);

pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveLogSoftmaxWithLossImplCloneable.java
@@ -25,6 +25,8 @@ public class AdaptiveLogSoftmaxWithLossImplCloneable extends Module {
/** {@code reset()} must perform initialization of all members with reference
* semantics, most importantly parameters, buffers and submodules. */
public native void reset();
@Override public Module asModule() { return asModule(this); }
@Namespace public static native @Name("static_cast<torch::nn::Module*>") Module asModule(AdaptiveLogSoftmaxWithLossImplCloneable module);

/** Performs a recursive "deep copy" of the {@code Module}, such that all parameters
* and submodules in the cloned module are different from those in the
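The `asModule()` helper added to each `*ImplCloneable` class performs the `static_cast<torch::nn::Module*>` upcast on the Java side, so a concrete module can be handed to code written against the common `Module` type. A hedged sketch using the `AdaptiveLogSoftmaxWithLossImpl` constructor shown earlier in this diff; the varargs `LongVector` constructor is an assumption:

```java
import org.bytedeco.pytorch.AdaptiveLogSoftmaxWithLossImpl;
import org.bytedeco.pytorch.LongVector;
import org.bytedeco.pytorch.Module;

public class AsModuleExample {
    static void describe(Module m) {         // accepts any module once it has been upcast
        System.out.println(m);
    }

    public static void main(String[] args) {
        AdaptiveLogSoftmaxWithLossImpl model =
                new AdaptiveLogSoftmaxWithLossImpl(8, 10, new LongVector(4, 8)); // cutoffs; varargs ctor assumed
        describe(model.asModule());          // static_cast<torch::nn::Module*> under the hood
    }
}
```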
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dImpl.java
@@ -29,6 +29,7 @@
* <pre>{@code
* AdaptiveMaxPool1d model(AdaptiveMaxPool1dOptions(3));
* }</pre> */
// NOLINTNEXTLINE(bugprone-exception-escape)
@Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
public class AdaptiveMaxPool1dImpl extends AdaptiveMaxPool1dImplBase {
static { Loader.load(); }
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool1dImplCloneable.java
@@ -25,6 +25,8 @@ public class AdaptiveMaxPool1dImplCloneable extends Module {
/** {@code reset()} must perform initialization of all members with reference
* semantics, most importantly parameters, buffers and submodules. */
public native void reset();
@Override public Module asModule() { return asModule(this); }
@Namespace public static native @Name("static_cast<torch::nn::Module*>") Module asModule(AdaptiveMaxPool1dImplCloneable module);

/** Performs a recursive "deep copy" of the {@code Module}, such that all parameters
* and submodules in the cloned module are different from those in the
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dImpl.java
@@ -29,6 +29,7 @@
* <pre>{@code
* AdaptiveMaxPool2d model(AdaptiveMaxPool2dOptions({3, 2}));
* }</pre> */
// NOLINTNEXTLINE(bugprone-exception-escape)
@Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
public class AdaptiveMaxPool2dImpl extends AdaptiveMaxPool2dImplBase {
static { Loader.load(); }
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool2dImplCloneable.java
@@ -25,6 +25,8 @@ public class AdaptiveMaxPool2dImplCloneable extends Module {
/** {@code reset()} must perform initialization of all members with reference
* semantics, most importantly parameters, buffers and submodules. */
public native void reset();
@Override public Module asModule() { return asModule(this); }
@Namespace public static native @Name("static_cast<torch::nn::Module*>") Module asModule(AdaptiveMaxPool2dImplCloneable module);

/** Performs a recursive "deep copy" of the {@code Module}, such that all parameters
* and submodules in the cloned module are different from those in the
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dImpl.java
@@ -29,6 +29,7 @@
* <pre>{@code
* AdaptiveMaxPool3d model(AdaptiveMaxPool3dOptions(3));
* }</pre> */
// NOLINTNEXTLINE(bugprone-exception-escape)
@Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
public class AdaptiveMaxPool3dImpl extends AdaptiveMaxPool3dImplBase {
static { Loader.load(); }
pytorch/src/gen/java/org/bytedeco/pytorch/AdaptiveMaxPool3dImplCloneable.java
@@ -25,6 +25,8 @@ public class AdaptiveMaxPool3dImplCloneable extends Module {
/** {@code reset()} must perform initialization of all members with reference
* semantics, most importantly parameters, buffers and submodules. */
public native void reset();
@Override public Module asModule() { return asModule(this); }
@Namespace public static native @Name("static_cast<torch::nn::Module*>") Module asModule(AdaptiveMaxPool3dImplCloneable module);

/** Performs a recursive "deep copy" of the {@code Module}, such that all parameters
* and submodules in the cloned module are different from those in the
pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutImpl.java
@@ -29,6 +29,7 @@
* <pre>{@code
* AlphaDropout model(AlphaDropoutOptions(0.2).inplace(true));
* }</pre> */
// NOLINTNEXTLINE(bugprone-exception-escape)
@Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
public class AlphaDropoutImpl extends AlphaDropoutImplBase {
static { Loader.load(); }
pytorch/src/gen/java/org/bytedeco/pytorch/AlphaDropoutImplCloneable.java
@@ -25,6 +25,8 @@ public class AlphaDropoutImplCloneable extends Module {
/** {@code reset()} must perform initialization of all members with reference
* semantics, most importantly parameters, buffers and submodules. */
public native void reset();
@Override public Module asModule() { return asModule(this); }
@Namespace public static native @Name("static_cast<torch::nn::Module*>") Module asModule(AlphaDropoutImplCloneable module);

/** Performs a recursive "deep copy" of the {@code Module}, such that all parameters
* and submodules in the cloned module are different from those in the
6 changes: 3 additions & 3 deletions pytorch/src/gen/java/org/bytedeco/pytorch/AnyModule.java
@@ -128,9 +128,9 @@ public class AnyModule extends Pointer {

/** Move construction and assignment is allowed, and follows the default
* behavior of move for {@code std::unique_ptr}. */
public AnyModule(@ByVal AnyModule arg0) { super((Pointer)null); allocate(arg0); }
private native void allocate(@ByVal AnyModule arg0);
public native @ByRef @Name("operator =") AnyModule put(@ByVal AnyModule arg0);
public AnyModule(@ByRef(true) AnyModule arg0) { super((Pointer)null); allocate(arg0); }
private native void allocate(@ByRef(true) AnyModule arg0);
public native @ByRef @Name("operator =") AnyModule put(@ByRef(true) AnyModule arg0);

/** Creates a shallow copy of an {@code AnyModule}. */

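Because the move semantics mirror `std::unique_ptr`, passing an `AnyModule` through this constructor or `put()` transfers its contents instead of copying them, leaving the source empty. A hedged sketch that uses only the members visible in this hunk plus an assumed no-argument `AnyModule()` constructor:

```java
import org.bytedeco.pytorch.AnyModule;

public class AnyModuleMove {
    public static void main(String[] args) {
        AnyModule a = new AnyModule();  // assumed default constructor: an empty AnyModule
        AnyModule b = new AnyModule(a); // @ByRef(true): the native contents of a are moved, not copied
        AnyModule c = new AnyModule();
        c.put(b);                       // the operator= counterpart, also move-style after this change
    }
}
```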
pytorch/src/gen/java/org/bytedeco/pytorch/AutoGradMode.java
@@ -18,7 +18,7 @@

// A RAII, thread local (!) guard that enables or disables grad mode upon
// construction, and sets it back to the original value upon destruction.
@Namespace("at") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
@Namespace("c10") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
public class AutoGradMode extends Pointer {
static { Loader.load(); }
/** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
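As the comment above notes, the guard is RAII-style, so the natural Java pattern is try-with-resources: JavaCPP's `Pointer` implements `AutoCloseable`, and closing the guard releases the native object, restoring the previous grad mode. A hedged sketch; the `AutoGradMode(boolean)` constructor is assumed to mirror `c10::AutoGradMode(bool enabled)`:

```java
import org.bytedeco.pytorch.AutoGradMode;

public class GradGuardExample {
    public static void main(String[] args) {
        try (AutoGradMode noGrad = new AutoGradMode(false)) { // assumed ctor: disables grad mode
            // ... run inference here without autograd recording ...
        } // guard closed: the previous grad mode is restored
    }
}
```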
pytorch/src/gen/java/org/bytedeco/pytorch/AutogradMeta.java
@@ -57,7 +57,7 @@ public class AutogradMeta extends AutogradMetaInterface {
public native @SharedPtr ForwardGrad fw_grad_(); public native AutogradMeta fw_grad_(ForwardGrad setter);

public native @ByRef FunctionPreVector hooks_(); public native AutogradMeta hooks_(FunctionPreVector setter);
public native @Cast("torch::autograd::hooks_list*") @StdVector @SharedPtr PointerPointer cpp_hooks_list(); public native AutogradMeta cpp_hooks_list(PointerPointer setter);
public native @Cast("torch::autograd::hooks_list*") @StdVector @SharedPtr PointerPointer cpp_hooks_list_(); public native AutogradMeta cpp_hooks_list_(PointerPointer setter);

// Only meaningful on leaf variables (must be false otherwise)
public native @Cast("bool") boolean requires_grad_(); public native AutogradMeta requires_grad_(boolean setter);
pytorch/src/gen/java/org/bytedeco/pytorch/AutogradMetaInterface.java
@@ -22,10 +22,16 @@ public class AutogradMetaInterface extends Pointer {
/** Pointer cast constructor. Invokes {@link Pointer#Pointer(Pointer)}. */
public AutogradMetaInterface(Pointer p) { super(p); }

public native void set_requires_grad(@Cast("bool") boolean requires_grad, TensorImpl self_impl);
public native void set_requires_grad(
@Cast("bool") boolean requires_grad,
TensorImpl self_impl);
public native @Cast("bool") boolean requires_grad();
public native @ByRef Tensor mutable_grad();
public native @Const @ByRef Tensor grad();
public native @Const @ByRef Tensor fw_grad(@Cast("uint64_t") long level, @Const @ByRef Tensor self);
public native void set_fw_grad(@Const @ByRef Tensor new_grad, @Const @ByRef Tensor self, @Cast("uint64_t") long level, @Cast("bool") boolean is_inplace_op);
public native void set_fw_grad(
@Const @ByRef Tensor new_grad,
@Const @ByRef Tensor self,
@Cast("uint64_t") long level,
@Cast("bool") boolean is_inplace_op);
}
pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dImpl.java
@@ -29,6 +29,7 @@
* <pre>{@code
* AvgPool1d model(AvgPool1dOptions(3).stride(2));
* }</pre> */
// NOLINTNEXTLINE(bugprone-exception-escape)
@Namespace("torch::nn") @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
public class AvgPool1dImpl extends AvgPool1dImplBase {
static { Loader.load(); }
pytorch/src/gen/java/org/bytedeco/pytorch/AvgPool1dImplCloneable.java
@@ -25,6 +25,8 @@ public class AvgPool1dImplCloneable extends Module {
/** {@code reset()} must perform initialization of all members with reference
* semantics, most importantly parameters, buffers and submodules. */
public native void reset();
@Override public Module asModule() { return asModule(this); }
@Namespace public static native @Name("static_cast<torch::nn::Module*>") Module asModule(AvgPool1dImplCloneable module);

/** Performs a recursive "deep copy" of the {@code Module}, such that all parameters
* and submodules in the cloned module are different from those in the